Prosecution Insights
Last updated: April 19, 2026
Application No. 18/222,199

IDENTIFYING FRAUDULENT ONLINE APPLICATIONS

Final Rejection: §101, §102, §103

Filed: Jul 14, 2023
Examiner: ARAQUE JR, GERARDO
Art Unit: 3629
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 4 (Final)

Grant Probability: 10% (At Risk)
Expected OA Rounds: 5-6
Estimated Time to Grant: 5y 4m
Grant Probability with Interview: 25%

Examiner Intelligence

Career Allow Rate: 10% (67 granted / 707 resolved; -42.5% vs TC avg)
Interview Lift: +15.7% for resolved cases with interview
Typical Timeline: 5y 4m avg prosecution; 43 applications currently pending
Career History: 750 total applications across all art units
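The headline allow rate above follows directly from the two counts shown; a quick sketch using only the page's own figures reproduces it:

```python
# Reproduce the examiner's career allow rate from the raw counts shown
# above: 67 granted out of 707 resolved cases. The exact ratio is about
# 9.5%, which the page presents as the rounded 10% headline.

granted = 67
resolved = 707

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ≈ 9.5%
```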

Statute-Specific Performance

§101: 27.1% (-12.9% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§103: 33.2% (-6.8% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Tech Center averages are estimates; figures based on career data from 707 resolved cases.
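The per-statute deltas are percentage-point differences from a single Tech Center baseline, so adding each rate back to its delta should recover the same implied average; this is a simple consistency check on the figures above:

```python
# Consistency check on the statute-specific figures: each "vs TC avg"
# delta, added back to that statute's allowance rate, should recover
# the same implied Tech Center baseline.

rates  = {"101": 27.1, "102": 18.4, "103": 33.2, "112": 18.2}
deltas = {"101": -12.9, "102": -21.6, "103": -6.8, "112": -21.8}

implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies the same 40.0% baseline
```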

Office Action

Rejections under §101, §102, and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED CORRESPONDENCE

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Status of Claims

Claims 2, 4, 5, 11, 18, 20, 21, 23, 24, 25 have been amended. Claims 1, 3, 6, 8, 12, 14, 19 have been cancelled. Claims 26, 27 have been added.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2, 4, 5, 7, 9-11, 13, 15-18, 20-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite: providing an application to a user, wherein the application receives user inputs and retrieves recent online searching activity of the user; receiving a submission of the application from the user, wherein the submission of the application includes: application data entered into the one or more application fields; and data representing the recent online searching activity of the user retrieved by the application; determining, based at least in part on the submission of the application, whether the recent online searching activity of the user indicates a search for the application data; and in response to determining that the recent searching activity indicates a search for the application data, performing an additional review associated with the application, based at least in part on the application data.

The invention is directed towards the abstract idea of detecting and mitigating identity theft based on the abstract idea of collecting and comparing information
and, based on a rule or rules, identifying options, which corresponds to "Mental Processes" and "Certain Methods of Organizing Human Activities," as it is directed towards steps that can be performed in the human mind and/or with the aid of pen and paper, e.g., receiving an application filled out by a human, collecting searching activity performed by the human, comparing the collected information against known/trusted information, and, based on the comparison, determining whether the submitting user performed activities that are indicative of fraud, as well as performing an additional review.

The limitations of: providing an application to a user, wherein the application receives user inputs and retrieves recent online searching activity of the user; receiving a submission of the application from the user, wherein the submission of the application includes: application data entered into the one or more application fields; and data representing the recent online searching activity of the user retrieved by the application; determining, based at least in part on the submission of the application, whether the recent online searching activity of the user indicates a search for the application data; and in response to determining that the recent searching activity indicates a search for the application data, performing an additional review associated with the application, based at least in part on the application data, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of a generic processor executing computer code stored on a computer medium, a generic computer system, a generic user device, and a generic virtual application.
That is, other than reciting a generic processor executing computer code stored on a computer medium, a generic computer system, a generic user device, and a generic virtual application, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for these generic components, the claim in context encompasses receiving an application filled out by a human, looking at the named applicant (for example), collecting information about the human who submitted the application, comparing it against known information, and, based on the comparison, determining whether the submitter is committing identity fraud and taking a corresponding action to mitigate the incident, in this case, performing an additional review of the form. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of these generic components, then it falls within the "Mental Processes" and "Certain Methods of Organizing Human Activities" groupings of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claims recite only the additional elements of a generic processor executing computer code stored on a computer medium, a generic computer system, a generic user device, and a generic virtual application, used to collect and communicate information and to perform operations that a human can perform in their mind and/or with the aid of pen and paper, i.e., comparing collected information and, based on the comparison, flagging the incident or generating an alert.
The generic processor executing computer code stored on a computer medium, generic computer system, generic user device, and generic virtual application are recited at a high level of generality: they perform the insignificant extra-solution steps of receiving and transmitting information (see MPEP 2106.05(g)), and they are merely being applied to perform steps that can be performed in the human mind and/or with the aid of pen and paper. "[Use] of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more." According to the MPEP, this is not limited solely to computers but includes any other technology that, when recited in a manner equivalent to "apply it," is a mere instruction to perform the abstract idea on that technology (see MPEP 2106.05(f)), such that the claim amounts to no more than mere instructions to apply the exception using a generic processor executing computer code stored on a computer medium, a generic computer system, a generic user device, and a generic virtual application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a generic processor executing computer code stored on a computer medium, a generic computer system, a generic user device, and a generic virtual application to perform the steps of providing an application to a user, wherein the application receives user inputs and retrieves recent online searching activity of the user; receiving a submission of the application from the user, wherein the submission of the application includes: application data entered into the one or more application fields; and data representing the recent online searching activity of the user retrieved by the application; determining, based at least in part on the submission of the application, whether the recent online searching activity of the user indicates a search for the application data; and in response to determining that the recent searching activity indicates a search for the application data, performing an additional review associated with the application, based at least in part on the application data, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.

Additionally:

Claim 4 is directed towards descriptive subject matter describing collected information, as well as the extra-solution activity of retrieving information.

Claim 5 is directed towards descriptive subject matter describing collected information, the extra-solution activity of retrieving information, and the recitation of generic technology at a high level of generality applied to the abstract idea.
Claim 7 is directed towards descriptive subject matter describing collected information, the extra-solution activity of retrieving information, and the recitation of generic technology at a high level of generality applied to the abstract idea.

Claim 9 is directed towards the collection and comparison of information and, based on a rule, identifying options, in this case, collecting and comparing location information to determine potentially fraudulent activity.

Claim 10 is directed towards the recitation of generic technology at a high level of generality applied to the abstract idea.

Claim 22 is directed towards descriptive subject matter describing collected information, the extra-solution activity of retrieving information, and the recitation of generic technology at a high level of generality applied to the abstract idea.

Claim 23 is directed towards the extra-solution activity of transmitting information and performing the abstract idea of collecting and comparing information and, based on a rule, identifying options, in this case, collecting IP information and online searching activity and comparing them against application data for the purpose of applying a rule to determine potentially fraudulent activity.

Claim 24 is directed towards the mental process of a human reviewing information and taking an action based on the reviewed information, in this case, flagging (either vocally and/or using pen and paper) that there is potentially fraudulent activity, or generating an alert (either vocally and/or using pen and paper) that there is potentially fraudulent activity.

Claim 25 is directed towards the recitation of generic technology at a high level of generality applied to the abstract idea.
Although the claim recites "train a machine learning algorithm," the claims and specification fail to provide sufficient disclosure regarding an improvement to how a machine learning algorithm can be trained; they simply recite, at a high level of generality, that a machine learning algorithm is being trained. There is insufficient evidence from the specification to indicate that the use of the machine learning algorithm involves anything other than the generic application of a known technique in its normal, routine, and ordinary capacity, or that the claimed invention purports to improve the functioning of the computer itself or the machine learning algorithm.

None of the limitations reflects an improvement in the functioning of a computer or an improvement to other technology or technical field; applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; effects a transformation or reduction of a particular article to a different state or thing; or applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Even training and applying a machine learning model is simply the application of a computer model, itself a manifestation of an abstract idea. Further, such training and applying of a model is no more than putting data into a black-box machine learning operation. The designation "machine learning model" is a functional label, devoid of technological implementation and application details. The specification does not contend that applicant invented any of these activities, or the creation and use of such machine learning models.
In short, each step does no more than require a generic computer to perform generic computer functions. As to the data operated upon, "even if a process of collecting and analyzing information is 'limited to particular content' or a particular 'source,' that limitation does not make the collection and analysis other than abstract." SAP America, Inc. v. InvestPic LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018).

The Examiner asserts that the scope of the disclosed invention, as presented in the originally filed specification, is not directed towards the improvement of machine learning, but towards the collection and comparison of information and, based on a rule, identifying options, in this case, identifying identity theft. The specification's disclosure on machine learning is nothing more than a high-level, general explanation of generic technology applied to the abstract idea. Per MPEP § 2106.05(f), the training is merely being used to facilitate the tasks of the abstract idea, which provides nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words "apply it." The Examiner asserts that, in light of the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, the claimed invention is analogous to Example 47, Claim 2. Further, the combination of these elements is nothing more than a generic computing system with machine learning model(s). Because the additional elements are merely instructions to apply the abstract idea on a computer, as described in MPEP § 2106.05(f), they do not integrate the abstract idea into a practical application.

Finally, claim 25 further recites the mental process of a human reviewing information to make a determination. Claims 26, 27 are directed towards descriptive subject matter describing the information that is being collected and compared to determine if fraud is being committed.
The remaining claims are similar to those already discussed above. In summary, the dependent claims are simply directed towards providing additional descriptive factors that are considered for identifying and flagging or generating an alert when fraudulent activity has been detected based on the collection and comparison of information and use of a rule(s). Accordingly, the claims are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 2, 5, 7, 10-11, 15, 16, 18, 21, 22 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Fitzgerald et al. (US PGPub 2014/0200929 A1).
In regards to claims 2, 11, 18, Fitzgerald discloses (Claim 2) a computer-implemented method of identifying fraudulent virtual applications, the method comprising: (Claim 11) a system comprising: (Claim 18) a non-transitory machine-readable storage medium storing instructions that, when executed, cause a processor to perform operations comprising: (Claim 11) at least one processor; a non-transitory computer-readable media storing computer-executable instructions that, when executed, cause the at least one processor to perform operations comprising: (Fig. 8)

In regards to: providing, by a computer system and to a user device, a virtual application including one or more application fields configured to receive user inputs, wherein the virtual application includes a software routine configured to retrieve recent online searching (Claim 18: or browsing) activity performed by a user via the user device (¶ 46, 47, 155, 170, 172, 174, 197, 198, wherein the user submits an online application, i.e., an insurance claim, through a website, and where the user's device is installed with an application that is configured to retrieve online searching activity performed on the device.
As a non-limiting example, Fitzgerald discloses, at least, “Software to implemented methods of the present invention can be (1) installed on, or (2) downloaded onto a mobile device indirectly or directly at any time by an authorized user, through the Internet, SMS text message, or in any other suitable manner and at any suitable time for carrying out a method according to the invention.”; “…if a parent buys a new phone and insures the phone against loss or theft, the parent may desire to give the insured phone to one of his/her children and file an insurance claim to replace the donated phone, claiming it as lost or stolen device”; “allows the detection of circumstances indicating that an owner of the mobile device 800 may be attempting to perpetrate fraud by submitting an inaccurate insurance claim”; “…an entity may receive an insurance claim. … the insurance claim may be reported online…”; “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.” …”; “Similarly, the information made available in process 7004 may be utilized to determine whether the mobile device has been used to submit the insurance claim. This information may be useful in evaluating the merits of the claim. 
For example, if a report has been received that a mobile device is lost and the information made available in process 7004 indicates that the report is being made on the allegedly-lost mobile device, one can infer and perhaps conclude that at least this particular factor may weigh against finding that the insurance claim is valid.”; “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”; “…the authorized user has submitted an insurance claim … The insurance claim is analyzed, and the corresponding record for the authorized user’s mobile device is retrieved 8130 from the insurance tracking database. … comparing information to the information in the claim, determining veracity and likelihood of the loss type specified in the claim …determining that prior to the reported date of loss, a user had conducted web searches with the mobile device related to how to submit insurance claims.”); receiving, by the computer system, a submission of the virtual application from the user device, wherein the submission of the virtual application includes (¶ 46, 47, 155, 170, 174, 198 wherein the user submits, from their device, the insurance claim to an insurance company (computer system). “Similarly, the information made available in process 7004 may be utilized to determine whether the mobile device has been used to submit the insurance claim. This information may be useful in evaluating the merits of the claim. 
For example, if a report has been received that a mobile device is lost and the information made available in process 7004 indicates that the report is being made on the allegedly-lost mobile device, one can infer and perhaps conclude that at least this particular factor may weigh against finding that the insurance claim is valid.”): In regards to: application data entered into the one or more application fields; and data representing the (Claim 1: recent) online searching (Claim 18: or browsing) activity of the user device retrieved by the software routine of the virtual application; (¶ 174, 195, 197, 198 wherein the online searching activity indicates a search associated with the application submission, i.e. the submitted insurance claim includes information of the potential fraudster submitting a potentially fraudulent insurance claim and the system utilizes the information found within the insurance claim, e.g., item that is being reported as lost or stolen, and utilizes the stored and provided online searching activity to determine whether the insurance claim is valid or not; “Similarly, the information made available in process 7004 may be utilized to determine whether the mobile device has been used to submit the insurance claim. This information may be useful in evaluating the merits of the claim. For example, if a report has been received that a mobile device is lost and the information made available in process 7004 indicates that the report is being made on the allegedly-lost mobile device, one can infer and perhaps conclude that at least this particular factor may weigh against finding that the insurance claim is valid.”; ¶ 155, 172, 197, 198 wherein the user’s device provides the insurance company with online searching activity conducted on the user’s device when the user is submitting their claim. 
As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”, and “…determining that prior to the reported date of loss, a user had conducted web searches with the mobile device related to how to submit insurance claims.”); determining, by the computer system and based at least in part on the submission of the virtual application, whether the (Claim 1: recent) online searching (Claim 18: or browsing) activity of the user device indicates a search for application (¶ 195, 197, 198 wherein the online searching activity indicates a search associated with the application submission. As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”, and “…determining that prior to the reported date of loss, a user had conducted web searches with the mobile device related to how to submit insurance claims.”; “Similarly, the information made available in process 7004 may be utilized to determine whether the mobile device has been used to submit the insurance claim. This information may be useful in evaluating the merits of the claim. 
For example, if a report has been received that a mobile device is lost and the information made available in process 7004 indicates that the report is being made on the allegedly-lost mobile device, one can infer and perhaps conclude that at least this particular factor may weigh against finding that the insurance claim is valid.”); and in response to determining that the (Claim 1: recent) online searching (Claim 18: or browsing) activity indicates a search for application, performing, by the computer system, an additional automated review associated with the virtual application, based at least in part on the application data (¶ 199, wherein, in the event that the system determines from, at least, the online search activity that there is a likelihood of fraudulent activity, the system generates a communication that is sent to the appropriate authorities or law enforcement officers in response to the system automatically determining that an additional review is needed.).

In regards to claims 5, 21, Fitzgerald discloses the computer-implemented method of claim 2 (the non-transitory machine-readable storage medium of claim 18), wherein the online searching (Claim 21: or browsing) activity includes an Internet search history, and wherein the software routine is configured to retrieve the Internet search history from a web browser on the user device (Fitzgerald – ¶ 197, 198, wherein the online searching activity includes Internet search history performed on the device).
In regards to claims 6, 15, Fitzgerald discloses the computer-implemented method of claim 2 (the system of claim 11), wherein the submission of the virtual application further includes a field indicating a GPS location of the user device, and wherein performing the additional automated review is further based on the GPS location of the user device (Fitzgerald – ¶ 93, 155, 171, 172, wherein location information of the user's device, such as GPS and the device's IP address, is submitted with the insurance claim submission and used to assist with determining if fraud is being committed. As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”).

In regards to claims 7, 16, Fitzgerald discloses the computer-implemented method of claim 2 (the system of claim 11), wherein the submission of the virtual application further includes an IP address of the user device, and wherein the method further comprises: using the IP address to query the user device for additional online searching activity (Fitzgerald – ¶ 93, 94, 155, 171, 172, wherein location information of the user's device, such as GPS and the device's IP address, is submitted with the insurance claim submission and used to assist with determining if fraud is being committed.
As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”).

In regards to claim 10, Fitzgerald discloses the computer-implemented method of claim 2, wherein receiving the online searching activity includes receiving, via a transceiver of the computer system, the online searching activity over a radio link (Fitzgerald – ¶ 48, 93, 99, 145, wherein the online searching activity is received from the user's device over a radio link).

In regards to claim 22, Fitzgerald discloses the computer-implemented method of claim 2, wherein performing the additional review comprises: receiving, via the submission of the virtual application, at least one of a GPS location of the user device or an IP address of the user device; and performing the additional review based at least in part on the GPS location of the user device or the IP address of the user device (Fitzgerald – ¶ 93, 155, 171, 172, wherein location information of the user's device, such as GPS and the device's IP address, is submitted with the insurance claim submission and used to assist with determining if fraud is being committed.
As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”; ¶ 199, wherein, in the event that the system determines from, at least, the reported information that there is a likelihood of fraudulent activity, the system generates a communication that is sent to the appropriate authorities or law enforcement officers in response to the system automatically determining that an additional review is needed).

In regards to claim 23, Fitzgerald discloses the non-transitory machine-readable storage medium of claim 18, the operations further comprising: receiving, via the submission of the virtual application, an IP address of the user device; and determining, based at least in part on the IP address, that the online searching (Claim 23: or browsing) activity indicates a search for the application data (Fitzgerald – ¶ 93, 94, 155, 171, 172, wherein location information of the user's device, such as GPS and the device's IP address, is submitted with the insurance claim submission and used to assist with determining if fraud is being committed. As a non-limiting example, Fitzgerald discloses, “…tracking and loss information may comprise: …a list for Internet access (which may include any information normally associated with web browsing, such as a list of visited web pages, search queries, etc.”…”, “Such reported information may include a location of the mobile device, forensics information regarding the mobile device, web browsing history for the mobile device…”).
Regarding claim 24, Fitzgerald discloses the non-transitory machine-readable storage medium of claim 23, the operations further comprising: in response to determining that the online searching (Claim 24: or browsing) activity indicates a search for the application data, performing a fraud mitigation action including at least one of: flagging the submission of the virtual application as potentially fraudulent, or generating an electronic alert that the submission of the virtual application is potentially fraudulent (¶ 199, wherein, in the event that the system determines from, at least, the reported information that there is a likelihood of fraudulent activity, the system generates a communication that is sent to the appropriate authorities or law enforcement officers in response to the system automatically determining that an additional review is needed).

______________________________________________________________________

Claims 4, 13, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fitzgerald et al. (US PGPub 2014/0200929 A1) in view of Stibel et al. (US PGPub 2016/0148211 A1).

Regarding claims 4, 13, 20, Fitzgerald discloses a system and method for detecting fraudulent behavior when an application is submitted by a user by having a software application, installed on the user’s device, include a plurality of routines/instructions for, at least, collecting online searching activity and performing a keyword search on the activity to determine if the activity indicates a search for application data, as well as referring to a plurality of sources to assist with verifying the identity of a user. Fitzgerald, however, fails to disclose all online sources that can be used to verify the identity of a user.
To be more specific, Fitzgerald fails to explicitly disclose: the computer-implemented method of claim 2 (the system of claim 11; the non-transitory machine-readable storage medium of claim 18), wherein the online searching (Claim 20: or browsing) activity includes social media activity, and wherein the virtual application further includes: a second software routine configured to retrieve the social media activity associated with the user device.

However, Stibel, which is also directed towards identity theft detection and prevention, further teaches that it is not only well known in the art to verify the identity of a user as the user is submitting information, but also to refer to social media activity to protect a user from fraudulent activity. Stibel teaches that information from a user’s social media account can be made private or public, wherein public information makes the user’s information accessible to other users, i.e., fraudsters, thereby revealing important information about the user that can be used to steal the user’s identity. Stibel is not being provided to teach elements that are already disclosed by Fitzgerald, but to teach another type of keyword, source of information, or type of online searching that can be cross-referenced to determine whether a user who is submitting an application has conducted a search indicating social media activity. Accordingly, one of ordinary skill in the art, looking upon the teachings of Stibel, would have recognized that a user’s personal information, which can lead to the user’s identity being stolen, can be found on the user’s social media account.
As a result, since Fitzgerald is already collecting browsing history and performing a keyword search on the browsing history, it would have been obvious to one of ordinary skill in the art, knowing from the teachings of Stibel that social media information can be publicly available, to expand Fitzgerald with the ability to perform a keyword search on searching activity associated with social media information. By expanding the functionalities of Fitzgerald to further include a search on social media activity, one of ordinary skill in the art would be able to cast a wider net and increase the likelihood of detecting and preventing fraudulent activity, e.g., identity theft, by determining that the application data is publicly available and cross-referencing this determination with the type of searching a user has been performing to determine that there is a likelihood of fraudulent behavior, e.g., sharing identity information that can be used when submitting an application. This is especially so because Stibel, similar to Fitzgerald, also discloses the use of an application installed on a user’s device to collect and analyze browser history information and compare the information against a list of users who have been victims of identity theft.
(For support see: ¶ 31, 34, 38, 39, 41, 42, 108, 112, 115)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the theft detection and prevention system and method of Fitzgerald, which analyzes collected online searching activity, the ability to use social media activity information, as taught by Stibel, as this provides a larger pool of information that the theft detection system can refer to in order to assist with identifying potential victims whose information is publicly available and which fraudsters, e.g., fraud rings, can easily search and share with other fraudsters for use when submitting an application, especially since Fitzgerald and Stibel both disclose the collection and analysis of online searching activity from the device that the application is being submitted from.

______________________________________________________________________

Claims 8, 9, 17, 25 are rejected under 35 U.S.C. 103 as being unpatentable over Fitzgerald et al. (US PGPub 2014/0200929 A1) in view of Acuña-Rohter (US PGPub 2015/0142595 A1).

Regarding claims 8, 17, Fitzgerald discloses a system and method of detecting and preventing fraudulent activity based on the collection and analysis of online searching activity through the use of a predictive module and fraud detection models. Although Fitzgerald discloses the use of models and predictive modules that collect and analyze historical information to predict fraudulent activity, such as identity theft, Fitzgerald fails to explicitly disclose machine learning or whether machine learning can be utilized for detecting fraudulent activity.
To be more specific, Fitzgerald fails to explicitly disclose: the computer-implemented method of claim 2 (the system of claim 11), further comprising: providing the online searching activity as input to a trained machine-learning program, wherein performing the additional automated review associated with the virtual application is based at least in part on an output of the trained machine-learning program. However, Acuña-Rohter, which is also directed towards predicting fraudulent activity based on known/historical information and current information, further teaches that it is well-known to utilize machine learning to assist with the mitigation of fraudulent activity. Acuña-Rohter teaches, “Methods for Natural Language Processing include, but are not limited to, parsing, tokenizing, machine learning, part-of-speech tagging, optical character recognition, sentiment analysis, and topic segmentation among others. At Step 1308, one or more processors determines if a Fraud Indication Element should be transmitted to a relevant coupled module to generate an authorization or transaction denial flag (Step 1310) or stored in a database or coupled memory for later retrieval (Step 1312). Step 1308 can decide if a Fraud Indication Element is relevant based on configurable purchase parameters such as if there is already enough relevant information to transmit to a relevant module or if a relevant module already has the information it needs based on receiving relevant data explicitly.” (¶ 147) “…traditional financial transaction systems often use heuristics and/or machine learning algorithms to authorize a transaction. Unfortunately, when a new account holder (e.g., end-user) is acquired, no user data is available. Thus, the financial institution must start from scratch by training their fraud detection algorithm systems to learn about the new account holder and the new account holder's spending habits. As expected, this is a waste of time and money. 
However, the above-described Historical Profile Transaction database obviates the need to start from scratch as it can be used as an input as training data for a new account holder to a heuristic and/or machine-learning fraud detection system. For example if the Historical Profile Transaction database contains information about a new end-user's transactions and social media network profile, that profile can be used a starting point for fraud detection for a bank issuing a new card to the new end-user. Accordingly, a bank, for example, would no longer have to start their fraud detection algorithms with an empty data set but could use well-known profiles that fit the new user's historical transaction/social media network profile data.” (¶ 175)

One of ordinary skill in the art would have found it beneficial to upgrade or improve upon the predictive fraud detection model system of Fitzgerald with the teachings of Acuña-Rohter, as machine learning provides the known advantage of, for example, identifying new or unknown threats based on an analysis of historical information and providing entities with a starting point when a new application is received. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the predictive module and fraud detection model of Fitzgerald the ability to use machine learning, as taught by Acuña-Rohter, in order to detect fraudulent activity, as machine learning provides an entity with a starting point for early fraud detection when it is faced with a new event. Further, one of ordinary skill in the art of fraud detection and identity theft would have found it obvious to update the predictive module and fraud detection model of Fitzgerald with machine learning, as taught in Acuña-Rohter, in order to gain the commonly understood benefits of such adaptation, such as increased reliability, early detection, and providing an entity with a starting point when no user data is available.
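As a non-limiting illustration of this “starting point” rationale, the idea of seeding a fraud model with labeled historical search activity, so that a brand-new applicant is not scored from an empty data set, can be sketched as a toy keyword-frequency scorer. This sketch is not taken from either reference; all function and variable names are hypothetical.

```python
# Toy sketch only: seed a fraud scorer with labeled historical queries so a
# new submission can be scored immediately (the "starting point" idea).
from collections import Counter

def train(labeled_queries):
    """labeled_queries: list of (query, is_fraud) pairs from historical data."""
    fraud, legit = Counter(), Counter()
    for query, is_fraud in labeled_queries:
        (fraud if is_fraud else legit).update(query.lower().split())
    return fraud, legit

def fraud_score(model, query):
    """Net count of fraud-associated minus legitimate-associated words."""
    fraud, legit = model
    return sum(fraud[w] - legit[w] for w in query.lower().split())

history = [
    ("jane doe social security number", True),   # known-fraudulent search
    ("jane doe home address", True),
    ("cheap flights to denver", False),          # benign search
]
model = train(history)
print(fraud_score(model, "john smith social security number") > 0)  # True
```

A positive score for a never-before-seen applicant’s query stream flags the submission for the additional automated review; a real system would of course use a trained statistical model rather than raw word counts.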
Accommodating the prior art’s more manual and antiquated process with modern electronics, in this case, using machine learning to assist with the detection of fraudulent activity, would have been obvious. As stated in Leapfrog, “applying modern electronics to older mechanical devices has been commonplace in recent years.”

Regarding claim 9, Fitzgerald discloses a system and method of collecting and analyzing online searching activity, as well as location information, to assist with the detection of fraudulent behavior. Although Fitzgerald collects and analyzes location information of the user device from which an application is being submitted, Fitzgerald fails to explicitly disclose whether this information can be compared against known, trusted, or verified location information of a user to determine if there is possible fraudulent activity occurring.

To be more specific, Fitzgerald fails to explicitly disclose: the computer-implemented method of claim 2, further comprising: determining an existing user corresponding to the application data; receiving, from a data source, a current user location of the existing user; determining whether the current user location corresponds to a location of the user device; and performing the additional automated review associated with the virtual application, based at least in part on determining that the current user location does not correspond to the location of the user device.

However, Acuña-Rohter, which is also directed towards predicting fraudulent activity based on known/historical information and current information, further teaches that it is not only well known to collect user location information, but also to compare the collected information against known, trusted, or verified location information of a user to determine if there is possible fraudulent activity occurring.
Acuña-Rohter teaches that although the location where a transaction is occurring may be known, it is not certain whether the actual user was present or authorized the transaction. Accordingly, Acuña-Rohter teaches that in addition to collecting location information of where a transaction is taking place, it would be beneficial to collect real-time location information of a user, as this allows the fraud detection system to verify whether the user is within a predetermined distance of where the transaction is taking place. Further still, Acuña-Rohter teaches that, for example, social media information of the user can be collected and analyzed to determine and predict the actual location of the user, compare this location against the location of a physical or virtual merchant, and, based on this comparison, either authorize the transaction to take place or determine that there is a likelihood of fraudulent behavior. One of ordinary skill in the art would have found it obvious and beneficial to collect as much information as possible about a potential victim and to compare this information against information associated with a transaction, as this increases the likelihood of more effectively detecting and preventing fraud. (For support see: ¶ 74, 76, 81, 82, 84, 86, 120, 121)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the fraud detection system and method of Fitzgerald the ability to collect and compare the known location of a user (potential victim) with the location of a transaction, as taught by Acuña-Rohter, as this increases the accuracy, effectiveness, and reliability of a fraud detection system in detecting and preventing fraudulent activity by determining whether the user (potential victim) is within a predetermined distance of where the transaction is taking place.
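As a non-limiting illustration of the location cross-check described above, the comparison of a known/verified user location against the submitting device’s reported location, within a predetermined distance, can be sketched as follows. The function names, the 50 km radius, and the sample coordinates are hypothetical and not drawn from either reference.

```python
# Toy sketch only: flag a submission for additional review when the device's
# reported location is outside a predetermined radius of the user's known
# location (names, radius, and coordinates are illustrative assumptions).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def needs_additional_review(user_loc, device_loc, radius_km=50.0):
    """True when the device is beyond the predetermined distance."""
    return haversine_km(*user_loc, *device_loc) > radius_km

# Known user location (Chicago) vs. device GPS reported with the submission (New York)
print(needs_additional_review((41.88, -87.63), (40.71, -74.01)))  # True
```

A mismatch does not by itself establish fraud; in the examiner’s framing it merely triggers the claimed “additional automated review.”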
Regarding claim 25, Fitzgerald discloses the non-transitory machine-readable storage medium of claim 24, the operations further comprising: in response to performing the fraud mitigation action, performing an additional automated review of the online searching or browsing activity and the IP address to make a final fraud determination (¶ 199, wherein, in the event that the system determines from, at least, the online search activity that there is a likelihood of fraudulent activity, the system generates a communication that is sent to the appropriate authorities or law enforcement officers in response to the system automatically determining that an additional review is needed to make a final fraud determination); and […].

Fitzgerald discloses a system and method of detecting and preventing fraudulent activity based on the collection and analysis of online searching activity through the use of a predictive module and fraud detection models. Although Fitzgerald discloses the use of models and predictive modules that collect and analyze historical information to predict fraudulent activity, such as identity theft, Fitzgerald fails to explicitly disclose machine learning or whether machine learning can be utilized for detecting fraudulent activity. To be more specific, Fitzgerald fails to explicitly disclose: training a machine learning model, using at least the final fraud determination and search history data associated with the virtual application, to update a fraud classification rule. However, Acuña-Rohter, which is also directed towards predicting fraudulent activity based on known/historical information and current information, further teaches that it is well known to utilize machine learning to assist with the mitigation of fraudulent activity.
Acuña-Rohter teaches, “Methods for Natural Language Processing include, but are not limited to, parsing, tokenizing, machine learning, part-of-speech tagging, optical character recognition, sentiment analysis, and topic segmentation among others. At Step 1308, one or more processors determines if a Fraud Indication Element should be transmitted to a relevant coupled module to generate an authorization or transaction denial flag (Step 1310) or stored in a database or coupled memory for later retrieval (Step 1312). Step 1308 can decide if a Fraud Indication Element is relevant based on configurable purchase parameters such as if there is already enough relevant information to transmit to a relevant module or if a relevant module already has the information it needs based on receiving relevant data explicitly.” (¶ 147) “…traditional financial transaction systems often use heuristics and/or machine learning algorithms to authorize a transaction. Unfortunately, when a new account holder (e.g., end-user) is acquired, no user data is available. Thus, the financial institution must start from scratch by training their fraud detection algorithm systems to learn about the new account holder and the new account holder's spending habits. As expected, this is a waste of time and money. However, the above-described Historical Profile Transaction database obviates the need to start from scratch as it can be used as an input as training data for a new account holder to a heuristic and/or machine-learning fraud detection system. For example if the Historical Profile Transaction database contains information about a new end-user's transactions and social media network profile, that profile can be used a starting point for fraud detection for a bank issuing a new card to the new end-user. 
Accordingly, a bank, for example, would no longer have to start their fraud detection algorithms with an empty data set but could use well-known profiles that fit the new user's historical transaction/social media network profile data.” (¶ 175)

One of ordinary skill in the art would have found it beneficial to upgrade or improve upon the predictive fraud detection model system of Fitzgerald with the teachings of Acuña-Rohter, as machine learning provides the known advantage of, for example, identifying new or unknown threats based on an analysis of historical information and providing entities with a starting point when a new application is received. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the predictive module and fraud detection model of Fitzgerald the ability to use machine learning, as taught by Acuña-Rohter, in order to detect fraudulent activity, as machine learning provides an entity with a starting point for early fraud detection when it is faced with a new event. Further, one of ordinary skill in the art of fraud detection and identity theft would have found it obvious to update the predictive module and fraud detection model of Fitzgerald with machine learning, as taught in Acuña-Rohter, in order to gain the commonly understood benefits of such adaptation, such as increased reliability, early detection, and providing an entity with a starting point when no user data is available. Accommodating the prior art’s more manual and antiquated process with modern electronics, in this case, using machine learning to assist with the detection of fraudulent activity, would have been obvious. As stated in Leapfrog, “applying modern electronics to older mechanical devices has been commonplace in recent years.”

______________________________________________________________________

Claims 26, 27 are rejected under 35 U.S.C. 103 as being unpatentable over Fitzgerald et al.
(US PGPub 2014/0200929 A1) in view of Abdelhalim et al. (“The Impact of Google Hacking on Identity and Application Fraud”).

Regarding claim 26, Fitzgerald discloses a system and method of detecting and preventing fraudulent activity based on the collection and analysis of online searching activity through the use of a predictive module and fraud detection models. Fitzgerald, however, does not disclose all types of information that can be searched for to identify potential identity fraud. To be more specific, Fitzgerald fails to explicitly disclose: the computer-implemented method of claim 2, further comprising: determining an applicant name in the application data in the submission of the virtual application; and determining, within the data representing the online searching activity, a search term matching the applicant name, wherein performing the additional automated review is based at least in part on the search term matching the applicant name.

However, Abdelhalim teaches that identity theft has been on the rise and that: “The Internet represents an appealing place for fraudsters to collect a host of personal and financial data related to many innocent users. Using the collected data they can impersonate the users and commit different fraudulent activities including application fraud. Mining Internet data for fraudulent purposes is commonly referred to as (black hat) Google hacking.” (Abstract) “In the literature, identity frauds are categorized into two classes, namely application fraud and behavioral fraud. Application fraud corresponds to the process of applying and obtaining an identity certificate (e.g., passport, credit card etc.) using someone else’s identity.” (I. Introduction, ¶ 2) “In this paper, we present and discuss the results of an exploratory experiment of a manual search based on Google that explores the existence of identity information over the web not targeting a specific individual.
In the general context of our research, this study is important as it allowed us to have a sense of the types of pieces of information out there that could fall into the wrong hands, and to identify and develop appropriate mechanisms to detect fraudulent activities.” (I. Introduction, ¶ 5) “The main tools used in online identity theft are search engines. … Many users key in their names, addresses, home and work telephones, fax, electronic mail addresses, or credit card numbers without any hesitation. Many such pieces of identity information could be found by thieves on the internet either for free or with little money.” (II. Identity Information Sources, ¶ 1) “In order to achieve an understanding of the impact of this situation and identify an appropriate strategy to mitigate such impact, we conducted some white hat Google hacking experiments over several weeks. We were able to collect social security numbers, dates of birth and addresses for many people. We found this kind of information in online resumes, load statements, tax payments, wanted criminals’ lists, companies’ advertisements for delivery services, posted petitions for dissolution of marriage, and posted court complaints.” (III. White Hat Google Hacking, ¶ 1) “By manually searching for identity information using Google in a short time period, we were able to collect sensitive identity information for living as well as for dead persons.” (III. White Hat Google Hacking, ¶ 2) “Application for an identity certificate involves a collection of identity information that can be checked by collecting data online and structuring relevant identity information. By combining and analyzing the information provided by the application and the information collected online, we can detect and report possible identity fraud.” (B. Architecture for an Application Fraud Detector, ¶ 1) “Identity fraud can be detected by screening a particular identity claim.
An identity claim occurs when an individual declares a specific identity, for instance, on an official document such as a passport application. The tool will extract the identity information for the individual that could be found online and from other available sources e.g. credit bureaus and check the results against the identity claim, reporting any discrepancy as an anomaly.” (B. Architecture for an Application Fraud Detector, ¶ 4) “The last several years, identity theft has been one of the fastest growing crimes. Unfortunately, the Internet has been facilitating this phenomenon since it represents a tremendous open repository for sensitive identity information available for those who know how to find them, including fraudsters.” (V. Conclusions)

One of ordinary skill in the art would have found, in view of the teachings of Abdelhalim, that it is known in the art for fraudsters to conduct online searches on the names of their targets and, consequently, that such searches would be found within the fraudster’s search history. As a result, it would have been obvious to one of ordinary skill in the art to expand the searching parameters of Fitzgerald and/or substitute the applicant’s name for the other searching parameters of Fitzgerald in order to yield the predictable result of searching through a fraudster’s search activity to determine if the name on the application is found within the search activity, thereby providing an indication that the fraudster is committing identity theft. Abdelhalim teaches that a wide range of information can be found online and cross-referenced with information provided in an application to determine if fraud is being committed, and one of ordinary skill in the art would have been motivated to incorporate these teachings into the application fraud detection system and method of Fitzgerald as doing so would expand Fitzgerald’s capabilities of identifying fraudulent activities.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the application fraud detection system and method of Fitzgerald the ability to search for the applicant’s name in the application form within the application submitter’s search activity, as taught by Abdelhalim, because identity theft has been on the rise and the Internet provides ample information for a fraudster to conduct a search on their victim and use their victim’s name, as well as other information, when submitting an application form. Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention that, since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of an applicant’s name, as taught by Abdelhalim, for any of the other information found in the search activity disclosed by Fitzgerald. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Regarding claim 27, the combination of Fitzgerald and Abdelhalim discloses the computer-implemented method of claim 26, further comprising: determining, based on the data representing the online searching activity, that the search term matching the applicant name is associated with a user-initiated search directed to discovering an address or an employment history associated with the applicant name (Abdelhalim – “In order to achieve an understanding of the impact of this situation and identify an appropriate strategy to mitigate such impact, we conducted some white hat Google hacking experiments over several weeks.
We were able to collect social security numbers, dates of birth and addresses for many people. We found this kind of information in online resumes, load statements, tax payments, wanted criminals’ lists, companies’ advertisements for delivery services, posted petitions for dissolution of marriage, and posted court complaints.” (III. White Hat Google Hacking, ¶ 1) “Identity fraud can be detected by screening a particular identity claim. An identity claim occurs when an individual declares a specific identity, for instance, on an official document such as a passport application. The tool will extract the identity information for the individual that could be found online and from other available sources e.g. credit bureaus and check the results against the identity claim, reporting any discrepancy as an anomaly.” (B. Architecture for an Application Fraud Detector, ¶ 4)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the application fraud detection system and method of Fitzgerald the ability to search for the applicant’s name in the application form within the application submitter’s search activity, as taught by Abdelhalim, because identity theft has been on the rise and the Internet provides ample information for a fraudster to conduct a search on their victim and use their victim’s name, as well as other information, when submitting an application form.
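As a non-limiting illustration of the claim 26/27 checks discussed above, matching the applicant’s name within the submitter’s search history, and further testing whether a matching query is directed to an address or employment history, can be sketched as follows. The function names and the keyword list are hypothetical assumptions, not from the claims or the references.

```python
# Toy sketch only: find queries in the submitter's search history containing
# the applicant's name (claim 26), then test whether a matching query targets
# an address or employment history (claim 27). Keyword list is illustrative.
ADDRESS_EMPLOYMENT_TERMS = {"address", "employer", "employment", "workplace", "job"}

def name_searches(applicant_name, search_history):
    """Return queries that contain the applicant's name (case-insensitive)."""
    name = applicant_name.lower()
    return [q for q in search_history if name in q.lower()]

def seeks_address_or_employment(query):
    """True when the query also targets an address or employment history."""
    return any(t in query.lower().split() for t in ADDRESS_EMPLOYMENT_TERMS)

history = ["weather today", "Jane Doe address", "Jane Doe employer history"]
matches = name_searches("Jane Doe", history)
print(matches)  # ['Jane Doe address', 'Jane Doe employer history']
print(all(seeks_address_or_employment(q) for q in matches))  # True
```

In the examiner’s framing, a non-empty match list would trigger the additional automated review; the second test narrows the match to the address/employment searches recited in claim 27.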
It would have also been obvious to one of ordinary skill in the art before the effective filing date of the invention that, since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of an applicant’s name, as taught by Abdelhalim, for any of the other information found in the search activity disclosed by Fitzgerald. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.).

Response to Arguments

Applicant’s arguments filed 3/9/2026 have been fully considered but they are not persuasive.

Rejection under 35 USC 112(b)

The rejection under 35 USC 112(b) has been withdrawn due to amendments.

Rejection under 35 USC 101

With respect to “Methods of Organizing Human Activity,” MPEP § 2106 II. & C states “managing personal behavior or relationships or interactions between people, (including social activities, teaching, and following rules or instructions)”. Section C further states “Other examples of managing personal behavior recited in a claim include: …ii. considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018)”. In light of this, the Examiner asserts that the claimed invention does, indeed, fall under this category because it is directed towards collecting browsing activity data that was performed prior to submitting the online application form, i.e., considering historical usage information (the browsing activity) while inputting data (entering data into the form), as well as following rules, i.e., the submission of improper data (the rule) renders the form indicative of fraudulent activity.
With respect to “Mental Processes,” the Examiner asserts that the applicant’s arguments are conclusory statements. As stated in the rejection, the invention is directed towards the abstract idea of detecting and mitigating identity theft based on the abstract idea of collecting and comparing information and, based on a rule(s), identifying options, as it is directed towards steps that can be performed in the human mind and/or with the aid of pen and paper, e.g., receiving an application filled out by a human, collecting searching activity performed by the human, comparing the collected information against known/trusted information, and, based on the comparison, determining whether the submitting user performed activities that are indicative of fraud, as well as performing an additional review. The Examiner asserts that there is insufficient evidence demonstrating that the claimed invention cannot be practically performed by a human, as evidenced by the example above, because, again, the claimed invention is directed towards collecting and comparing information and, based on a rule(s), identifying options, in this case, collecting and comparing application data and browsing activity data to determine if there is an indication of fraudulent activity. Moreover, simply copying and pasting the claim language and stating the conclusory assertion that the invention therefore improves upon processing because it includes additional copied and pasted language is not persuasive. The claimed invention does not resolve an issue that arose in technology or improve upon technology, nor is it deeply rooted in technology. Additionally, the claimed invention is also directed towards the abstract idea of collecting data, recognizing data, and storing the recognized data in order to compare the stored and received information and determine if there is potentially fraudulent activity.
The Examiner asserts that the concepts of data collection, recognition, and storage can be performed by humans. As discussed above, the claimed invention merely uses a general-purpose device (a computing device) to collect data, compare it against known data and, based on a rule or rules, identify options, i.e., determine whether there is potentially fraudulent activity based on the comparison of the information. Although one may argue that the human mind is unable to process and recognize the electronic stream of data being received, transmitted, and stored by the computing device, the Examiner asserts that this is insufficient to overcome the rejection under 35 USC 101 (see Content Extraction and Transmission LLC v. Wells Fargo Bank, National Association and Cyberfone, where a system that used categories to organize, store, and transmit information was considered by the courts to be an abstract idea). The claims in Alice Corp. v. CLS Bank also required a computer that processed streams of data, but were nonetheless found abstract. There is no "inventive concept" in the claimed invention's use of a general-purpose computing device to perform activities commonly used in the technical field (Content Extraction and Transmission LLC v. Wells Fargo Bank, National Association). At most, the claims attempt to limit the abstract idea of recognizing and storing information to a particular environment, and such a limitation has been held insufficient to save a claim in this context. Here, reciting a generic virtual application and applying it to collect and transmit data is insufficient to overcome the rejection for the reasons stated above.
Further still, performing an additional automated review based on the collected and analyzed data (compared and evaluated against a rule or rules) is also insufficient to overcome the rejection and encompasses activity a human can perform simply by reviewing the data. Finally, with respect to "well-understood, routine, and conventional activity," the rejection does not rely on that standard. The Examiner has provided evidence, in the rejection and in the rebuttal above, that the claimed invention is not patent eligible and is directed toward "Mental Processes" and "Certain Methods of Organizing Human Activity." The claimed invention does not improve upon technology, does not resolve an issue in technology, and is not deeply rooted in technology; rather, it recites generic technology at a high level of generality and applies it to the abstract idea, as discussed above, e.g., receiving an application filled out by a human, collecting the searching activity performed by that human, comparing the collected information against known/trusted information, and, based on the comparison, determining whether the submitting user performed activities indicative of fraud, as well as performing an additional review.

Rejection under 35 USC 102/103

The Examiner asserts that the applicant's arguments are directed toward newly amended limitations and are therefore moot. However, the Examiner has responded in the rejection above to the newly submitted amendments to which those arguments are directed, thereby addressing the applicant's arguments.

Pertinent Arguments

The Examiner asserts that Fitzgerald discloses a virtual application that includes a software routine: a user downloads an application/software that allows the user to submit claims, i.e., an electronic document, form, or the like, and that further includes software code/a routine that searches the user's search activity to determine whether the user is attempting to commit fraud and performs an additional automated review based on its fraud analysis, as discussed in the rejection and in the interview held on March 4, 2026.

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the attached PTO-892 Notice of References Cited:

Metnick (US Patent 12,511,673 B2), which discloses reviewing searching activity to assist with identifying fraudulent activity.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERARDO ARAQUE JR, whose telephone number is (571) 272-3747. The examiner can normally be reached Monday - Friday, 8-4:30. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sarah Monfeldt, can be reached at 571-270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GERARDO ARAQUE JR/
Primary Examiner, Art Unit 3629
3/17/2026

Prosecution Timeline

Jul 14, 2023
Application Filed
Feb 29, 2024
Examiner Interview (Telephonic)
Mar 07, 2024
Examiner Interview Summary
Apr 15, 2025
Non-Final Rejection — §101, §102, §103
Jul 23, 2025
Response Filed
Aug 07, 2025
Final Rejection — §101, §102, §103
Oct 10, 2025
Examiner Interview Summary
Oct 10, 2025
Applicant Interview (Telephonic)
Nov 12, 2025
Request for Continued Examination
Nov 17, 2025
Response after Non-Final Action
Dec 05, 2025
Non-Final Rejection — §101, §102, §103
Mar 04, 2026
Examiner Interview Summary
Mar 04, 2026
Applicant Interview (Telephonic)
Mar 09, 2026
Response Filed
Mar 17, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591898
Systems and Methods for Generating Behavior Profiles for New Entities
2y 5m to grant Granted Mar 31, 2026
Patent 12586139
OFFER MANAGEMENT AND DOCUMENT MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12499418
METHODS, INTERNET OF THINGS (IOT) SYSTEMS, AND MEDIUMS FOR PIPELINE REPAIR BASED ON SMART GAS
2y 5m to grant Granted Dec 16, 2025
Patent 12417440
SYSTEM AND METHOD FOR ACCESSING AND UPDATING DEVICE SAFETY DATA BY BOTH OWNERS AND NON-OWNERS OF DEVICES
2y 5m to grant Granted Sep 16, 2025
Patent 12333553
SYSTEMS AND METHODS TO TRIAGE CONTACT CENTER ISSUES USING AN INCIDENT GRIEVANCE SCORE
2y 5m to grant Granted Jun 17, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
10%
Grant Probability
25%
With Interview (+15.7%)
5y 4m
Median Time to Grant
High
PTA Risk
Based on 707 resolved cases by this examiner. Grant probability derived from career allow rate.
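The footnote's arithmetic can be checked directly. A minimal sketch, assuming (as the footnote suggests) that the grant probability is simply the career allow rate (67 granted of 707 resolved, which the page rounds up to 10%) and that the +15.7-point interview lift is added as absolute percentage points:

```python
# Reproduce the dashboard's headline projections from its raw counts.
# Assumptions (not stated exactly on the page): grant probability is
# the career allow rate, and the interview figure adds the +15.7-point
# lift as absolute percentage points.

granted = 67      # cases this examiner has allowed
resolved = 707    # total resolved cases

allow_rate = granted / resolved      # career allow rate, ~0.095
interview_lift = 15.7                # percentage points

base_pct = allow_rate * 100          # ~9.5%, displayed as "10%"
with_interview_pct = base_pct + interview_lift

print(f"Career allow rate: {base_pct:.1f}%")
print(f"With interview:    {with_interview_pct:.0f}%")
```

Under these assumptions the with-interview figure comes out to about 25%, matching the page; the displayed 10% base rate is a rounding-up of roughly 9.5%.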
