Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
This Final Office Action is in response to Applicant’s filing of 12/11/2025.
The effective filing date of the present application is 12/29/2021.
Claims 18 and 22 – 24 are pending.
Response to Amendment
Applicant's reply and remarks of 12/11/2025 have been entered.
The Examiner will address Applicant's remarks at the end of this Office Action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 18 and 22 – 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
At Step 1 of eligibility analysis, the claims recite a system and a method; thus, all claims fall within one of the four statutory categories as required.
At Step 2A, Prong One, of eligibility analysis, the claims set forth a method for analyzing various fraud indicators (signals) and scoring for potential fraud in order to verify a user's credentials for acceptance or rejection of the identity of the user. A reasonable interpretation of these signals is that they represent data or information about the user. Therefore, the claims, in combination, describe efforts for mitigating risk by improving the accuracy and robustness of this data analysis. Mitigating risk is a fundamental economic practice or principle, categorized within certain methods of organizing human activity, and is an abstract idea.
Claim 18 contains elements that define this abstract idea (highlighted below):
A system comprising: a processor; and a memory, the memory storing instructions that, when executed by the processor, cause the system to:
generate an identity fraud detector having a plurality of different machine learning fraud models and train the different machine learning fraud models to aggregate fraud signals from a plurality of different input fraud signals to generate an aggregate fraud score;
the machine learning fraud models being trained to analyze features in a plurality of signals indicative of potential fraud in verifying an identity of a user submitting, via a client computing device at least: 1) at least one live photo or a live video of a photo ID and 2) at least one photo or a video of the user taken during a verification step;
generate, for a verification step performed by the machine learning models for a selected user submitting a photo of their photo ID and at least one photo or video of themselves taken during the verification step, an aggregate fraud score based at least in part on determining whether there is a heightened risk of fraud based on information associated with a combination of fraud signals including: 1) a country associated with the photo ID, 2) background room architectural and interior decoration details in the live photo or live video indicative of a geographic location, 3) clothing styles in the live photo or live video indicative of a geographic location, 4) portions of a user's outfit in the photo ID associated with a geographic location, 5) an IP address associated with a geographic location, 6) a geographic area of service of a merchant on whose behalf the verification step is performed, and 7) a correlation between fraud and country;
classify the aggregate fraud score into one of a plurality of risk categories for accepting or rejecting the identity of the selected user;
generate at least two thresholds for classifying the aggregate fraud score and accepting or rejecting the identity of the selected user; and
perform a merchant-based adaptation of the at least two thresholds with respect to a skewed distribution of fraud scores with a long tail of high risk, the adaptation customizing the thresholds for a particular merchant to ensure a pre-selected percentage of transactions fall into three different risk/benefit categories corresponding to approve, human review, and deny by taking into account the skewed distribution of fraud scores and a statistical measure of the relative costs for false positives versus false negatives for a particular merchant on whose behalf fraud detection is performed;
determine whether the selected user is accepted or rejected based on the aggregate fraud score; and
wherein the fraud detector aggregates different available fraud input signals into the aggregate fraud score to improve accuracy and robustness of fraud detection, and the at least two thresholds for classifying the aggregate fraud score are dynamically adjusted.
Claim 22 contains elements that define this abstract idea (highlighted below):
A computer implemented method, comprising:
training a plurality of different machine learning fraud models to analyze features in a plurality of signals indicative of potential fraud in verifying an identity of a user submitting, via a client computing device, at least: 1) at least one photo of a photo ID and 2) at least one live photo or a live video of the user taken during a verification step;
training the machine learning fraud models to generate an aggregate fraud score and classify the aggregate fraud score into one of a plurality of risk categories for accepting or rejecting the identity of the user, the aggregate fraud score being determined at least in part on whether there is a heightened risk of fraud based on information associated with the geographic location of the user including a combination of: 1) a country associated with the photo ID, 2) background details in the live photo or live video indicative of a geographic location, 3) clothing in the live photo or live video indicative of a geographic location, 4) portions of a user's outfit in the photo ID associated with a geographic location, 5) an IP address associated with a geographic location, and 6) a geographic area of service of a merchant on whose behalf the verification step is performed; and
using at least two thresholds for classifying the aggregate fraud score into one of a plurality of risk categories for accepting or rejecting the identity of the user, where the at least two thresholds take into account historic rates of fraud for a particular industry associated with the verification step;
performing a merchant-based adaptation of the at least two thresholds with respect to a skewed distribution of fraud scores with a long tail, to adapt the selection of the at least two thresholds to take into account a statistical measure of the relative costs for false positives versus false negatives for a particular merchant for which fraud detection is being performed;
determining, using the trained machine learning models, the at least two thresholds, and the merchant-based adaptation, whether a particular user is accepted or rejected based on the aggregate fraud score; and
wherein the aggregate fraud score aggregates available fraud input signals to improve accuracy and robustness of fraud detection.
Claim 23 contains elements that define this abstract idea (highlighted below):
A computer implemented system, comprising:
a processor;
and a memory, the memory storing instructions of a fraud detector having a plurality of different machine learning fraud models trained to analyze features in a plurality of signals indicative of potential fraud in verifying an identity of a user submitting, via a client computing device, at least: 1) at least one photo of a photo ID and 2) at least one live photo or a live video of the user taken during a verification step;
the machine learning fraud models including a user attribute determiner including a liveness detector to detect that the user submitted the at least one photo live and a repeated fraudster detector to check if the user was previously identified as a fraudster;
an identification attribute determiner to determine attributes of the photo ID;
a device attribute determiner to determine attributes of the user's device; and
a temporal attribute determiner to determine temporal patterns from timestamp data associated with fraudsters;
the machine learning fraud models generating an aggregate fraud score and classifying the aggregate fraud score into one of a plurality of risk categories for accepting or rejecting the identity of the user;
the machine learning fraud models being trained to generate an aggregate fraud score based at least in part on comparing the at least one photo of the photo ID against the at least one live photo or a live video of the user;
the machine learning fraud models being trained to generate the aggregate fraud score based at least in part on detecting repeat fraudulent users based on a combination of an email age, temporal attributes including a time of day and day of the week of the verification step, comparing clothing in the at least one photo of a photo ID with a data set of fraudulent photos, comparing a face in the at least one photo of the photo ID with photos associated with a different account or a previous attempt at fraud, and comparing the at least one live photo or a live video of the user taken during the verification step with a face associated with a different account or a previous attempt at fraud; the aggregate fraud score further being based on determining whether there is a heightened risk of fraud based on information associated with a country of the user based on a combination of fraud signals including: 1) a country associated with the photo ID, 2) background room architectural and interior decoration details in the live photo or live video indicative of a geographic location, 3) clothing styles in the live photo or live video indicative of a geographic location, 4) portions of a user's outfit in the photo ID associated with a geographic location, 5) an IP address associated with a geographic location, 6) a geographic area of service of a merchant on whose behalf the verification step is performed, and 7) a correlation between fraud and country;
the machine learning fraud models determining in each verification step whether a particular user is accepted or rejected by 1) automatically rejecting the identity of the user in response to the aggregate fraud score exceeding a threshold indicative of a high risk of fraud and 2) automatically accepting the identity of the user in response to the aggregate fraud score being below a threshold associated with a low risk of fraud;
wherein the aggregate fraud score aggregates available fraud input signals to improve accuracy and robustness of fraud detection.
Claim 24 contains elements that define this abstract idea (highlighted below):
A computer implemented method, comprising: providing a fraud detector having a plurality of machine learning fraud models trained to analyze features in a plurality of signals indicative of potential fraud in verifying an identity of a user submitting, via a client computing device, at least: 1) at least one photo of a photo ID and 2) at least one live photo or a live video of the user taken during a verification step, the machine learning fraud models generating an aggregate fraud score and classifying the aggregate fraud score into one of a plurality of risk categories for accepting or rejecting the identity of the user, the machine learning fraud models being trained to generate the aggregate fraud score based at least in part on comparing the at least one photo of the photo ID against the at least one live photo or a live video of the user, the machine learning fraud models being trained to generate the aggregate fraud score based at least in part on determining whether there is a heightened risk of fraud based on information associated with the geographic location of the user including: 1) a country associated with the photo ID, 2) background details in the live photo or live video indicative of a geographic location, 3) clothing in the live photo or live video indicative of a geographic location, 4) portions of a user's outfit in the photo ID associated with a geographic location, 5) an IP address associated with a geographic location, and 6) a geographic area of service of a merchant on whose behalf the verification step is performed, the machine learning fraud models performing a dynamic merchant-based adaptation of the at least two thresholds with respect to a skewed distribution of fraud scores with a long tail of high risk, the adaptation customizing the thresholds for a particular merchant to ensure a pre-selected percentage of transactions fall into three different risk/benefit categories corresponding to approve, human review, and deny by taking into account the skewed distribution of fraud scores and a statistical measure of the relative costs for false positives versus false negatives for a particular merchant on whose behalf fraud detection is performed;
receiving at least one photo of a photo ID and at least one live photo or a live video of a particular user to be verified;
1) automatically rejecting the identity of the particular user in response to the aggregate fraud score exceeding a threshold indicative of a high risk of fraud and 2) automatically accepting the identity of the user in response to the aggregate fraud score being below a threshold associated with a low risk of fraud; and
wherein the aggregate fraud score aggregates available fraud input signals to improve accuracy and robustness of fraud detection.
At Step 2A, Prong Two, the Examiner has determined that the identified abstract idea (judicial exception) is not integrated into a practical application because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f). Further, MPEP 2106.05(f) notes that "[use] of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea does not integrate a judicial exception into a practical application or provide significantly more." Therefore, according to the MPEP, this consideration is not limited to computers but includes other technology that, when recited in a manner equivalent to "apply it," is a mere instruction to perform the abstract idea on that technology.
Claims 18 and 22 – 24 recite only the following additional elements:
A system comprising: a processor; and a memory, the memory storing instructions;
a client computing device;
an IP address associated with a geographic location;
computer-implemented;
a user attribute determiner including a liveness detector;
a repeated fraudster detector;
an identification attribute determiner;
a device attribute determiner;
a temporal attribute determiner;
a plurality of different machine learning fraud models.
Certain elements above are recited at a high level of generality and are merely invoked as tools to perform the abstract idea. These include: a system comprising: a processor; and a memory, the memory storing instructions; a client computing device; an IP address associated with a geographic location; and computer-implemented. Applicant has described these computing elements generically in the disclosure, notably at Specification [0051] and Figures 1 and 2, as filed. Particularly, "[e]xamples of client devices 106 may include, but are not limited to, mobile phones (e.g., feature phones, smart phones, etc.), tablets, laptops, desktops, netbooks, portable media players, personal digital assistants, etc." Specification [0021].
The other elements (a user attribute determiner including a liveness detector; a repeated fraudster detector; an identification attribute determiner; a device attribute determiner; and a temporal attribute determiner) are delineated within Figure 4 and described at Specification [0066]. The disclosure details "…a fraud scorer input receiver and preprocessor that receives the input fraud detections signals and performs any necessary preprocessing or feature extraction." This likewise defines these elements generally and describes merely using a computer as a tool to perform the abstract idea. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. See MPEP 2106.04(d).
Regarding "a plurality of different machine learning fraud models" trained "to aggregate fraud signals from a plurality of different input fraud signals": this element is interpreted as algorithms trained on data to recognize patterns, make predictions, or classify new information without explicit programming. This definition matches squarely with Applicant's disclosure: "An algorithm is here, and generally, conceived to be a self-consistent set of operations leading to a desired result." Applicant does not include any specific or necessary components or instructions for this modeling and alludes to the fact that the "…fraud detector 228 may be implemented using a variety of machine learning techniques, including supervised learning, unsupervised learning, semi-supervised learning, etc." Specification [0040]. Therefore, the Examiner concludes that this aspect is also merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f). These additional elements do not integrate the abstract idea into a practical application, and the claims are directed to the abstract idea.
At Step 2B of analysis, the Examiner has determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they do not amount to more than simply instructing one to practice the abstract idea by using generically recited devices to perform the steps that define the abstract idea. As discussed above, the additional elements (a machine learning fraud model; a client computing device; an IP address; a user attribute determiner including a liveness detector; a device attribute determiner; and a temporal attribute determiner) are recited at a high level of generality and are instructions to apply the exception on a computer. See MPEP § 2106.05(f).
Therefore, for the reasons cited above, claims 18 and 22 – 24 are directed to an abstract idea without integration into a practical application and without reciting significantly more.
Response to Arguments
Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. Applicant's arguments discuss the rejection of prior claims under 35 U.S.C. § 101. See pages 11 – 12. Applicant contends that amendments made to claims 18, 22, 23, and 24 integrate the claims into a practical application by reciting a combination of elements related to improving robustness and accuracy of the identity fraud detector. Based on the reasoning that follows, the Examiner respectfully disagrees with Applicant's arguments.
Applicant first points to Specification [0047] as having support for describing how a variety of models used to generate an aggregate fraud score provides the improved robustness and improved accuracy. The Examiner respectfully disagrees with Applicant. This paragraph simply recites that a variety of models are used to input data, then concludes, without any further breakdown, that the variety of models improves a method designed to verify a user's credentials for acceptance or rejection of the identity of the user. This merely describes the use of models without significant details of the training algorithm, while pointing to outputs that result from using the machine learning model. This is further evidenced by the last recitation within the claims: i.e., "…wherein the aggregate fraud score aggregates available fraud input signals to improve accuracy and robustness of fraud detection." The claims, in combination, recite method steps for verifying a user's credentials for mitigation of risk. Applicant's argument is not persuasive.
Applicant next points to the December 5, 2025 memorandum in light of Ex parte Desjardins. See page 11. Applicant argues that the memorandum points to revisions within the MPEP "…that expand the types of improvements in software and AI…". This first point is not accurate and not persuasive. The Examiner notes that the memorandum concludes early on: "[t]his memorandum is not intended to announce any new USPTO practice or procedure and is meant to be consistent with existing USPTO guidance." Therefore, the Examiner maintains the conclusion, detailed above, that the amended claims recite a method for mitigating risk. Further analysis, also detailed above, shows that the additional elements, in combination, are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f).
The Examiner adds that the memorandum pointed to examples that illustrated claims that merely involve an abstract idea and claims that recite the abstract idea. The instant claims are of the latter type. Steps for analyzing data, verifying identities, generating scores, classifying and ranking based on thresholds, and even finely adapting the thresholds to particular merchants, and finally accepting or rejecting the user, are all explicit steps directed to a method for mitigating risk. Thus, the claims recite a judicial exception and are not patent eligible. Applicant's argument is not persuasive.
Applicant adds that the pending claims reflect an improvement to the field of identity verification. See page 12. This argument is not persuasive. Regarding the improvements consideration in computer-related technologies, an examiner may conclude that claims are eligible at Step 2A, Prong Two, by finding that a claim reflects an improvement to the functioning of a computer or to another technology or technical field, thereby integrating a recited judicial exception into a practical application of the exception. An important consideration in determining whether a claim improves technology or a technical field is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. As noted above, the claims merely recite an intended result – improved accuracy and robustness – without reflecting what steps or components result in an improvement. Further, the Examiner concludes that identity verification is not a technical field or another technology. The claims merely recite risk mitigation steps to be performed in a computer environment. The generically defined components are identified above, and no technical improvement in how they operate is disclosed or recited. Applicant is merely describing an abstract idea to be performed by a computer. Finally, the Examiner notes that improved accuracy and robustness of identity verification would be directed to improvements to the abstract idea itself – risk mitigation – and not an improvement to technology as Applicant argues. Applicant's arguments are not persuasive.
Conclusion
Regarding claims 18 and 22 – 24, the prior art neither teaches nor suggests a system or method as claimed. The Examiner points to and maintains the conclusion detailed within the Office Action mailed 04/07/2025.
Accordingly, the current claim set is distinguished over prior art.
Noting that patentability of any claimed invention under 35 U.S.C. §§ 102 and 103 with respect to the prior art is neither required for, nor a guarantee of, patent eligibility under 35 U.S.C. 101, the Examiner points to other rejections within this Office Action.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DON EDMONDS whose telephone number is (571) 272-6171. The examiner can normally be reached M-F 8am-4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DONALD J. EDMONDS
Examiner
Art Unit 3629
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629