DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following is a final office action.
Claims 1-20 are currently pending and have been examined on their merits.
Claims 1, 6, 8, 13, 15, 18, and 20 are currently amended (see REMARKS filed November 17, 2025).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-7 recite a system, claims 8-14 recite a method (i.e., a series of steps), and claims 15-20 recite one or more computer-readable media; therefore, each claim falls within one of the four statutory categories.
Step 2A prong 1 (Is a judicial exception recited?):
The representative claims 1, 8, and 15 recite: A method of fraud detection using compressed data, the method comprising: receiving an initial data compression setting, independent variable values, and dependent variable values; performing a polynomial regression to generate coefficients of a polynomial curve from the independent variable values and the dependent variable values, wherein the polynomial curve has an order that is based on the data compression setting, wherein the independent variable values represent false alarm rate performance of a fraud detection model, and wherein the dependent variable values represent detection rate performance of the fraud detection model; based on at least an error between the polynomial curve and the dependent variable values, adjusting a data compression setting to a value that achieves a maximum compression while maintaining the error below a target threshold; based on the adjusted data compression setting, performing another polynomial regression to generate the coefficients of a current polynomial curve; and sending an independent variable range and the coefficients of the polynomial curve to regenerate the current polynomial curve.
The claims recite a mathematical concept. The claims recite mathematical calculations, as they recite a series of steps for receiving data and performing polynomial regression to generate a polynomial curve. The claims merely recite performing mathematical operations, as a polynomial regression is a standard form of regression analysis used to model the relationship between an independent variable and a dependent variable (see https://en.wikipedia.org/wiki/Polynomial_regression). Merely receiving data and performing basic analysis to model the information is an abstract idea.
Alternatively, the claims recite a mental process. The claims recite a method for performing polynomial regression to generate the coefficients of a polynomial curve. The steps of receiving data, analyzing the data using a standard mathematical process to generate a polynomial regression curve, and calculating its coefficients can be performed in the human mind or by using simple tools such as pen and paper. The courts have identified concepts such as observation, evaluation, judgment, and opinion as reciting a mental process. Therefore, the claims, which merely recite evaluating independent and dependent variable values and performing polynomial regression calculations to generate coefficient information, recite an abstract idea.
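For illustration of the mathematical character of the recited steps, the polynomial regression described above can be sketched in a few lines of Python (all data values, the chosen order, and the variable names below are hypothetical and are not drawn from the claims or the cited art):

```python
import numpy as np

# Hypothetical performance points for a fraud detection model:
# false alarm rate as the independent variable, detection rate as
# the dependent variable.
false_alarm_rate = np.array([0.01, 0.02, 0.05, 0.10, 0.20])
detection_rate = np.array([0.40, 0.55, 0.70, 0.82, 0.90])

# A polynomial regression is an ordinary least-squares fit; the
# "coefficients of a polynomial curve" are simply its fitted coefficients.
order = 2  # stands in for the claimed data compression setting
coefficients = np.polyfit(false_alarm_rate, detection_rate, order)

# The curve can be regenerated from the coefficients alone, and the fit
# error is the deviation from the original dependent variable values.
fitted = np.polyval(coefficients, false_alarm_rate)
error = float(np.max(np.abs(fitted - detection_rate)))
```

An order-n fit yields n+1 coefficients, which is why representing the data by its coefficients rather than its raw samples compresses it.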
Step 2A Prong 2 (Is the exception integrated into a practical application?): The claims additionally recite:
Claim 1: A system comprising a processor and a memory comprising computer program code, a fraud detection model, a machine learning (ML) model, and transmitting to a remote node across a computer network, and train the fraud detection model based on the regenerated polynomial curve, and utilize the trained fraud detection model to detect suspected fraudulent transactions at one or more of the following: a bank, an automatic teller machine, and a point-of-sale terminal.
Claim 8: A computer, a fraud detection model, a machine learning (ML) model, and transmitting to a remote node across a computer network, train the fraud detection model based on the regenerated polynomial curve, and utilize the trained fraud detection model to detect suspected fraudulent transactions at one or more of the following: a bank, an automatic teller machine, and a point-of-sale terminal.
Claim 15: One or more computer storage medium having computer-executable instructions that, upon execution by a processor, a fraud detection model, a machine learning (ML) model, and transmitting to a remote node across a computer network, train the fraud detection model based on the regenerated polynomial curve, and utilize the trained fraud detection model to detect suspected fraudulent transactions at one or more of the following: a bank, an automatic teller machine, and a point-of-sale terminal.
However, the additional elements merely amount to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The claims merely utilize generic computer elements, such as a computer and a user device, to perform the basic actions of the abstract idea by receiving, analyzing, and transmitting information. Furthermore, a method for receiving and processing information does not amount to an improvement to the functioning of a computer or to any other technology or technical field, as discussed in MPEP 2106.05(a); does not apply the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); and does not effect a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c), such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Alternatively, the additional elements of “utilize the trained fraud detection model to detect suspected fraudulent transactions at one or more of the following: a bank, an automatic teller machine, and a point-of-sale terminal” are directed to “generally linking” the judicial exception to a particular technological environment. The claims recite an abstract idea of receiving data, such as independent variable and dependent variable values, and performing polynomial regression to merely generate a polynomial curve. The claims then merely recite linking the generated polynomial curve to be used as an input to train a fraud detection model that can detect suspected fraudulent activity at a bank, an automated teller machine, or a point-of-sale terminal. Therefore, the claims merely generally link the abstract idea of generating a polynomial curve based on a series of input data to a particular technological environment of detecting fraud at a bank, a teller machine, or a point-of-sale terminal and do not add significantly more to the abstract idea.
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?): As discussed above, the additional limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). See the reasoning for Step 2A Prong 2. Therefore, the claims do not amount to significantly more, as they do not recite an improvement to a technology or technical field. The claims merely recite “apply it,” applying generic computer elements to receive and analyze information.
Claims 2-7, 9-14, and 16-20 are directed to further narrowing the abstract idea of receiving variables and generating a polynomial curve to determine the performance of a fraud detection model.
Additional elements recited by the dependent claims include:
Claims 2, 9, and 16: further training of the fraud detection model.
However, these elements are directed to merely “apply it” or applying generic computer elements to perform the abstract idea.
Therefore, claims 1-20 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu (US 2017/0148027) in view of Levy (US 5596658), further in view of Grossman (US 12165017).
Claims 1, 8, and 15: Yu discloses (Claim 1) a system for fraud detection using compressed data, the system comprising: a processor; and a memory comprising computer program code, the memory and the computer program code configured to, with the processor, cause the processor to: (Claim 8) A computerized method of fraud detection using compressed data, the method comprising: (Claim 15) One or more computer storage media having computer-executable instructions that, upon execution by a processor, cause the processor to at least: receive an initial data compression setting, independent variable values, and dependent variable values (Paragraph [0008]; [0029-0030]; [0040]; [0045]; [0047-0050]; Fig. 6, a fraud detection system is disclosed that can rapidly train any number of fraud detection models with multiple predictive modeling technologies and then automatically select a model that can best protect against a particular fraud pattern at the time. A variety of machine learning technologies (including non-linear systems) may be used to implement a fraud detection model. These models include: a regression model. The present invention allows various models to be trained. Automatic model training and selection according to one embodiment. Typically, the process will be performed for each model. Model data is created from segmented transaction data used by a particular production model. Each transaction record containing the raw transaction data is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model.
Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics);
perform a polynomial regression to generate coefficients of a polynomial curve from the independent variable values and the dependent variable values, wherein the independent variable values represent false alarm rate performance of a fraud detection model, and wherein the dependent variable values represent detection rate performance of the fraud detection model (Paragraph [0029-0030]; [0040]; [0045]; [0047-0050]; Fig. 6, These models include: a regression model. The present invention allows various models to be trained. Automatic model training and selection according to one embodiment. Typically, the process will be performed for each model. Model data is created from segmented transaction data used by a particular production model. Each transaction record containing the raw transaction data is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics. Figure 6 is a graph of false positives versus sensitivity. Therefore, figure 6 shows that a sensitivity yields a point on the graph which dictates that the false positives for the model will be a particular value);
based on at least an error between the polynomial curve and the dependent variable values, adjust, using a machine learning (ML) model (Paragraph [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, These models include: a regression model. The present invention allows various models to be trained. Automatic model training and selection according to one embodiment. Typically, the process will be performed for each model. Model data is created from segmented transaction data used by a particular production model. Each transaction record containing the raw transaction data is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics. Figure 6 is a graph of false positives versus sensitivity. Therefore, figure 6 shows that a sensitivity yields a point on the graph which dictates that the false positives for the model will be a particular value. As mentioned, it is preferable that the model data is based upon previous transaction data that stretches from the current date to a previous date. Thus the new best model will necessarily be based upon new transaction data that was not used to train and select the current model in production.
Thus the new model will replace the current model. This process may be repeated using different blocks of segmented transaction data);
train the fraud detection model based on the regenerated polynomial curve, and utilize the trained fraud detection model to detect suspected fraudulent transactions at one or more of the following: a bank, an automatic teller machine, and a point-of-sale terminal (Paragraph [0008]; [0021]; [0028]; [0044-0045]; a fraud detection system that can rapidly train any number of fraud detection models with multiple predictive modeling technologies. The risk decision engine may direct the software to communicate with the bank, whether or not to perform a transaction. Each model is trained using target and model data. Once trained, the models will be automatically compared and selected to choose the best model for the current environment, depending on the specific operational objectives of an enterprise).
Yu discloses a system of training a plurality of fraud detection models and periodically determining their accuracy and performance to determine if the models need to be retrained. However, Yu does not specifically disclose the following claim limitations: perform a polynomial regression to generate coefficients of a polynomial curve from the independent variable values and the dependent variable values, wherein the polynomial curve has an order that is based on the initial data compression setting; based on at least an error between the polynomial curve and the dependent variable values, adjust, using a machine learning (ML) model, a data compression setting to a value that achieves a maximum compression while maintaining the error below a target threshold; based on the adjusted data compression setting, perform another polynomial regression to generate the coefficients of a current polynomial curve; and transmit an independent variable range and the coefficients of the polynomial curve to a remote node across a computer network for the remote node to regenerate the current polynomial curve.
In the same field of endeavor of optimizing machine learning models, Levy teaches perform a polynomial regression to generate coefficients of a polynomial curve from the independent variable values and the dependent variable values, wherein the polynomial curve has an order that is based on the initial data compression setting; based on at least an error between the polynomial curve and the dependent variable values, adjust, using a machine learning (ML) model, a data compression setting to a value that achieves a maximum compression while maintaining the error below a target threshold ([Col. 1 ll. 50-Col. 2 ll. 2]; [Col. 2 ll. 36-58]; [Col. 4 ll. 1-21]; a technique for compressing data in a set of analog signal samples. To accomplish compression, a set of samples is obtained. A regression analysis is performed to fit the samples to a polynomial. A weighted linear regression analysis is performed to fit the error values (which are typically thresholded) to a polynomial whose coefficients are given. Once the first and second regression analyses have been performed, the coefficients are stored or transmitted. The data compression method is applicable for compressing other types of data. It is desirable to threshold the error terms before regression analysis).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of training a fraud detection model over time to improve the models, as disclosed by Yu (Yu [0008]), with the technique of performing a polynomial regression to generate coefficients of a polynomial curve from the independent variable values and the dependent variable values, wherein the polynomial curve has an order that is based on the initial data compression setting, and, based on at least an error between the polynomial curve and the dependent variable values, adjusting, using a machine learning (ML) model, a data compression setting to a value that achieves a maximum compression while maintaining the error below a target threshold, as taught by Levy (Levy [Col. 1 ll. 50-Col. 2 ll. 2]), with the motivation of accomplishing data compression using regression analysis for sets of data (Levy [Col. 1 ll. 12-26]).
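The Levy-style scheme relied on above — fit the samples to a polynomial, keep the fit error below a threshold, and store or transmit only the coefficients and the variable range — can be sketched as follows. This is an illustrative approximation, not Levy's actual implementation: the simple order-search loop, the threshold value, and all names are assumptions, and the claimed ML-driven adjustment is replaced here by a plain search.

```python
import numpy as np

def compress(x, y, error_threshold):
    """Pick the lowest polynomial order -- i.e., the fewest coefficients,
    hence the maximum compression -- whose maximum fit error stays below
    the threshold, then return the coefficients and the x-range."""
    for order in range(1, len(x)):
        coeffs = np.polyfit(x, y, order)
        error = np.max(np.abs(np.polyval(coeffs, x) - y))
        if error < error_threshold:
            # "Transmit" only the coefficients and the independent
            # variable range instead of the raw samples.
            return coeffs, (float(x.min()), float(x.max()))
    raise ValueError("no polynomial order met the error threshold")

x = np.linspace(0.0, 1.0, 50)
y = 0.3 * x**2 + 0.5 * x + 0.1          # noiseless quadratic samples
coeffs, x_range = compress(x, y, error_threshold=1e-6)

# The receiver regenerates the curve from the range and coefficients alone.
x_regen = np.linspace(x_range[0], x_range[1], 50)
y_regen = np.polyval(coeffs, x_regen)
```

Here 50 samples reduce to 3 coefficients plus a 2-value range, which is the sense in which the regression "compresses" the data.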
In the same field of endeavor of optimizing machine learning models, Grossman teaches based on the adjusted data compression setting, perform another polynomial regression to generate the coefficients of a current polynomial curve; and transmit an independent variable range and the coefficients of the polynomial curve to a remote node across a computer network ([Col. 2 ll. 44-54]; [Col. 22 ll. 1-29]; [Col. 22 ll. 63-Col. 23 ll. 4]; Fig. 8, in various example arrangements, a calibration curve presents a plurality of model performance values and a plurality of overall drift metric values, wherein each model performance value is associated with a corresponding overall drift metric value. It is determined that an estimated model performance is below a threshold by comparing the drift metric to the calibration curve. An alert is generated based on the estimated model performance being below the threshold. The weighted average drift score may be calculated as the average of all the individual weighted drift scores).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of training a fraud detection model over time to improve the models, as disclosed by Yu (Yu [0008]), with the technique of performing another polynomial regression to regenerate the coefficients of a polynomial curve and transmitting an independent variable range and the coefficients of the polynomial curve to a remote node across a computer network, as taught by Grossman (Grossman [Col. 2 ll. 44-54]), with the motivation of improving the machine learning model's ability to detect fraud (Grossman [Col. 1 ll. 16-30]).
Claims 2, 9, and 16: Modified Yu discloses the system as per claim 1, the computerized method as per claim 8, and the one or more computer storage media as per claim 15. Yu further discloses wherein the memory and the computer program code are configured to, with the processor, further cause the processor to: receive, by the remote node, the independent variable range and the coefficients; generate, as a current polynomial curve, the polynomial curve across the independent variable range using the coefficients (Paragraph [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, These models include: a regression model. The present invention allows various models to be trained. Automatic model training and selection according to one embodiment. Typically, the process will be performed for each model. Model data is created from segmented transaction data used by a particular production model. Each transaction record containing the raw transaction data is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics. Figure 6 is a graph of false positives versus sensitivity.
Therefore, figure 6 shows that a sensitivity yields a point on the graph which dictates that the false positives for the model will be a particular value. As mentioned, it is preferable that the model data is based upon previous transaction data that stretches from the current date to a previous date. Thus the new best model will necessarily be based upon new transaction data that was not used to train and select the current model in production. Thus the new model will replace the current model. This process may be repeated using different blocks of segmented transaction data);
or perform further training of the fraud detection model (Paragraph [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, these models include: a regression model. The present invention allows various models to be trained. Automatic model training and selection according to one embodiment. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. As mentioned, it is preferable that the model data is based upon previous transaction data that stretches from the current date to a previous date. Thus the new best model will necessarily be based upon new transaction data that was not used to train and select the current model in production. Thus the new model will replace the current model. This process may be repeated using different blocks of segmented transaction data).
However, Yu does not disclose compare the current polynomial curve with a prior polynomial curve; and based on at least the comparison: generate an alert indicating a performance change of the fraud detection model.
In the same field of endeavor of optimizing machine learning models, Grossman teaches compare the current polynomial curve with a prior polynomial curve; and based on at least the comparison: generate an alert indicating a performance change of the fraud detection model ([Col. 2 ll. 44-54]; [Col. 22 ll. 1-29]; [Col. 22 ll. 63-Col. 23 ll. 4]; Fig. 8, in various example arrangements, a calibration curve presents a plurality of model performance values and a plurality of overall drift metric values, wherein each model performance value is associated with a corresponding overall drift metric value. It is determined that an estimated model performance is below a threshold by comparing the drift metric to the calibration curve. An alert is generated based on the estimated model performance being below the threshold. The weighted average drift score may be calculated as the average of all the individual weighted drift scores).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of training a fraud detection model over time to improve the models, as disclosed by Yu (Yu [0008]), with the technique of comparing the current polynomial curve with a prior polynomial curve and, based on at least the comparison, generating an alert indicating a performance change of the fraud detection model, as taught by Grossman (Grossman [Col. 2 ll. 44-54]), with the motivation of improving the machine learning model's ability to detect fraud (Grossman [Col. 1 ll. 16-30]).
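The curve-comparison step mapped to Grossman above can likewise be sketched for illustration. The coefficients, tolerance, and function name below are hypothetical; this sketches only the comparison concept (a current polynomial curve evaluated against a prior one over a shared range), not Grossman's drift-metric calculation.

```python
import numpy as np

def performance_change(prior_coeffs, current_coeffs, x_range, tol=1e-3):
    """Compare a current polynomial curve against a prior one over the
    shared independent-variable range and report whether detection
    performance improved, worsened, or remained constant."""
    x = np.linspace(x_range[0], x_range[1], 100)
    delta = np.mean(np.polyval(current_coeffs, x) - np.polyval(prior_coeffs, x))
    if delta > tol:
        return "improved"
    if delta < -tol:
        return "worsened"   # e.g., generate an alert / task a trainer here
    return "constant"

prior = [-20.0, 7.0, 0.35]    # hypothetical quadratic ROC-style curves
current = [-20.0, 7.0, 0.30]  # uniformly lower detection rate
status = performance_change(prior, current, (0.01, 0.2))
```

A "worsened" result is the condition under which the claims recite generating an alert and tasking further training.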
Claims 3, 10, and 17: Modified Yu discloses the system as per claim 2, the computerized method as per claim 9, and the one or more computer storage media as per claim 16. However, Yu does not disclose wherein comparing the current polynomial curve with the prior polynomial curve comprises determining whether the detection rate performance versus the false alarm rate performance of the fraud detection model has improved, worsened, or remained constant.
In the same field of endeavor of optimizing machine learning models, Grossman teaches wherein comparing the current polynomial curve with the prior polynomial curve comprises determining whether the detection rate performance versus the false alarm rate performance of the fraud detection model has improved, worsened, or remained constant ([Col. 2 ll. 44-54]; [Col. 22 ll. 1-29]; [Col. 22 ll. 63-Col. 23 ll. 4]; Fig. 8, in various example arrangements, a calibration curve presents a plurality of model performance values and a plurality of overall drift metric values, wherein each model performance value is associated with a corresponding overall drift metric value. It is determined that an estimated model performance is below a threshold by comparing the drift metric to the calibration curve. An alert is generated based on the estimated model performance being below the threshold. The weighted average drift score may be calculated as the average of all the individual weighted drift scores).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the system of training a fraud detection model over time to improve the models, as disclosed by Yu (Yu [0008]), with the technique of determining whether the detection rate performance versus the false alarm rate performance of the fraud detection model has improved, worsened, or remained constant, as taught by Grossman (Grossman [Col. 2 ll. 44-54]), with the motivation of improving the machine learning model's ability to detect fraud (Grossman [Col. 1 ll. 16-30]).
Claims 4, 11, and 18: Modified Yu discloses the system as per claim 1, the computerized method as per claim 8, and the one or more computer storage media as per claim 15. Yu further discloses wherein an order of the polynomial curve is the data compression setting (Paragraph [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, in one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics).
Claims 5, 12, and 19: Modified Yu discloses the system as per claim 1, the computerized method as per claim 8, and the one or more computer storage media as per claim 15. Yu further discloses wherein the memory and the computer program code are configured to, with the processor, further cause the processor to: receive fraud alert data from the fraud detection model; and based on at least the fraud alert data and transaction assessment data, determine the independent variable values and the dependent variable values (Paragraph [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, model data is created from segmented transaction data used by a particular production model. Each transaction record containing the raw transaction data is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date when the model is created to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. Calculating a set of performance metrics for each model. There are a wide variety of performance metrics that may be calculated including the metrics of sensitivity, false positives, and manual review rate. The three metrics are not independent. Choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics).
Claims 6, 13, and 20: Modified Yu discloses the system as per claim 1, the computerized method as per claim 8, and the one or more computer storage media as per claim 15. Yu further discloses wherein the regenerated polynomial curve, when compared to a previous polynomial curve, indicates a performance degradation relative to a prior time period, and a trainer is tasked to perform further training of the fraud detection model (Paragraphs [0029-0030]; [0040]; [0045]; [0047-0048]; [0059]; Fig. 6, model data is created from segmented transaction data used by a particular production model. Each transaction record, containing the raw transaction data, is augmented with any number of predictive variables and their values. In one embodiment, a previous time period is suitable for creating model data from transaction data. The previous time period extends from the current date, when the model is created, back to a previous date. By including data up until the current date, the candidate models will be trained, validated, and tested using the most current transactions. Once the models have been trained, they will be automatically compared in order to choose the best model. A set of performance metrics is calculated for each model. There is a wide variety of performance metrics that may be calculated, including sensitivity, false positives, and manual review rate. These three metrics are not independent: choosing a value for one of these metrics for a particular model necessarily dictates a value for each of the other two metrics).
Claims 7 and 14: Modified Yu discloses the system as per claim 1 and the computerized method as per claim 8. Yu further discloses wherein the polynomial curve forms a relative operating characteristic (ROC) curve (Paragraphs [0045]; [0053], once the multiple models have been trained, they will be automatically compared and selected in order to choose the best model. A set of performance metrics is calculated for each model. There is a wide variety of performance metrics that may be calculated; possible metrics include ROC curves, among others).
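For context on the performance metrics Yu describes (sensitivity and false positives, which together trace out an ROC curve), the following is an illustrative sketch; the function name, variable names, and sample data are hypothetical and are not taken from Yu or from the application.

```python
# Illustrative computation of two of Yu's interdependent performance
# metrics: sensitivity (detection rate) and false-positive rate.
def sensitivity_and_fpr(labels, predictions):
    """labels/predictions: 1 = flagged as fraud, 0 = legitimate."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical labeled transactions: 2 of 3 frauds caught, 1 of 3
# legitimate transactions falsely flagged.
sens, fpr = sensitivity_and_fpr([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
```

Sweeping a model's decision threshold and plotting (fpr, sens) pairs yields the ROC curve that claims 7 and 14 recite the polynomial curve as forming.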
Therefore, claims 1-20 are rejected under 35 U.S.C. 103.
Response to Arguments
Applicant’s arguments, see REMARKS, filed November 17, 2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 101 have been fully considered but are not persuasive.
Applicant argues that the claims do not recite an abstract idea because they do not recite a mathematical concept but merely “involve” a mathematical operation and its application to generate data. The examiner respectfully disagrees, as the claims recite receiving independent and dependent variable values, performing a polynomial regression to generate coefficients of a polynomial curve from those variables, and performing another polynomial regression to generate coefficients of a current polynomial curve. Applicant further argues that the claims are similar to Examples 38 and 39 of the 2019 PEG. The examiner disagrees: Example 38 recites generating a normally distributed random value using a random number generator, while Example 39 recites applying one or more transformations to digital facial images. Those examples recite steps that are based in mathematics but do not recite the mathematical relationships or calculations themselves. The current claims, by contrast, directly recite a mathematical relationship in the steps of performing polynomial regression to generate coefficients of a polynomial curve from a plurality of independent and dependent values. As stated above, this is a known and used mathematical relationship for generating a polynomial curve that models the non-linear relationship between the variables of false alarm rate performance and detection rate performance. Therefore, the claims recite a mathematical concept. Alternatively, the claims recite a mental process, as they recite concepts the courts have identified as mental processes, such as “collecting information, analyzing it, and displaying a certain result,” in that the claims merely recite receiving initial data and performing polynomial regression to generate coefficients of a polynomial curve used to generate training data.
The examiner finds that a person is capable, mentally or with simple tools such as pen and paper, of receiving information and performing calculations such as polynomial regression to generate training data to be used to train a model. Therefore, the claims recite an abstract idea.
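For context, the sequence of operations characterized above, performing a polynomial regression whose curve order serves as the compression setting and lowering that order to the minimum (maximum compression) that keeps the fit error below a target threshold, can be sketched in a few lines. All names, thresholds, and data below are illustrative assumptions, not taken from the application or the cited art.

```python
# Illustrative sketch: fit a polynomial ROC-style curve (false alarm
# rate -> detection rate) and keep the lowest polynomial order whose
# fit error stays below a target threshold, so only the coefficients
# and the independent-variable range need to be transmitted.
import numpy as np

def compress_roc(far, dr, max_order=8, err_threshold=0.01):
    """Return (order, coefficients, independent-variable range)."""
    for order in range(1, max_order + 1):      # low order = high compression
        coeffs = np.polyfit(far, dr, order)    # polynomial regression
        err = np.max(np.abs(np.polyval(coeffs, far) - dr))
        if err < err_threshold:                # error below target threshold
            return order, coeffs, (far.min(), far.max())
    return max_order, coeffs, (far.min(), far.max())

# Synthetic ROC-like data: detection rate rises with false alarm rate.
far = np.linspace(0.0, 1.0, 50)
dr = 1.0 - (1.0 - far) ** 3
order, coeffs, rng = compress_roc(far, dr)
```

A receiver holding only `coeffs` and `rng` can regenerate the curve with `np.polyval`, which is the sense in which the claimed coefficients “compress” the underlying performance data.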
Applicant further argues that the claims are directed to a practical application because they recite “transmitting an independent variable range and the coefficients of the polynomial curve to a remote node across a computer network for the remote node to regenerate the current polynomial curve, training the fraud detection model based on the regenerated polynomial curve, and utilizing the trained fraud detection model to detect suspected fraudulent transactions.” The examiner respectfully disagrees. Merely applying generic computer elements to perform a series of calculations to generate training data, and training a machine learning model to perform a basic function such as detecting suspected fraudulent transactions, is not an improvement to a computer or to technology. The additional elements amount to merely “apply it,” that is, applying generic computer elements to perform the basic functions of receiving and analyzing information to generate training data and transmitting information to train a machine learning model. Additionally, using the trained machine learning model to detect suspected fraudulent transactions amounts to merely “apply it” and to generally linking the abstract idea to a particular field of use. The claims do not recite an improvement to a technology or technical field, but merely a series of steps to generate training data, train a machine learning model, and then generally use the trained model to perform a basic function; these are not improvements to the technology of machine learning models or to the technical field of detecting fraudulent transactions.
Therefore, the examiner maintains the rejection of claims 1, 8, and 15 under 35 U.S.C. 101.
Applicant argues that claims 2-7, 9-14, and 16-20 are allowable only by virtue of their dependence on claims 1, 8, and 15. As the rejections of claims 1, 8, and 15 are maintained, the dependent claims likewise remain rejected under 35 U.S.C. 101.
Applicant’s arguments, see REMARKS, filed November 17, 2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 as being unpatentable over Yu (US 2017/0148027) in view of Levy (US 5596658), further in view of Grossman (US 12165017), have been fully considered but are not persuasive, as the claims were amended, which required further search and consideration, and new art has been applied.
Claims 1, 8, and 15: Applicant argues that the current prior art does not disclose the newly amended claim limitations, including the data compression setting, independent variable values, and dependent variable values. However, upon further search and consideration, the examiner finds that the current prior art can be used in combination with Levy to teach the newly amended claim limitations. Yu discloses a system that uses a plurality of models, such as a regression model, to receive information to train fraud detection models. Yu teaches receiving transaction data as well as model information, such as false positive rate information, and using these inputs to retrain the fraud detection model to improve accuracy. The regression model of Yu can be combined with the techniques of Grossman, which teaches a system of receiving a plurality of input values and using a machine learning model analyzer to determine the accuracy of a machine learning model. Additionally, this combination can be used with Levy, which teaches a technique for compressing data by performing regression analysis; Levy teaches setting a compression threshold to compress information while maintaining accuracy. Therefore, Levy can be used in combination with the current prior art to teach the recited claim limitations.
Therefore, claims 1, 8, and 15 are newly rejected under 35 U.S.C. 103.
Claims 2-7, 9-14, and 16-20 were argued as being allowable only as being dependent on claims 1, 8, and 15. As claims 1, 8, and 15 remain rejected, the dependent claims are likewise rejected under 35 U.S.C. 103.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Adjaoute (US 2015/0039512) Real-time cross channel fraud protection.
Zhou (US 2020/0175421) Machine learning methods for detection of fraud related events.
Warrick (US 2021/0142126) Artificial intelligence-based fraud detection system.
Baker (US 2007/0106582) System and method of detecting fraud.
Kishore (US 2022/0351209) Automated fraud monitoring and trigger system for detecting unusual patterns associated with fraudulent activity, and corresponding method thereof.
Matyska (US 2021/0248611) Method, user thereof, computer program product and system for fraud detection.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY RUSS whose telephone number is (571)270-5902. The examiner can normally be reached on M-F 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached at 571-272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COREY RUSS/Examiner, Art Unit 3629