Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This office action is in response to the claims filed on 01/26/2026.
Claims 1-3, 5-10, 12-17, 19-20 are presented for examination.
Response to Arguments
In reference to Applicant's arguments regarding the rejections under 35 U.S.C. § 101:
Applicant’s Argument:
The disclosed technology addresses these shortcomings by providing a system that determines the risk associated with an operation based on, for example, a "maker identifier" of a maker who requested the operation and the type of operation requested. See Spec., [0051], [0054]-[0055], [0069]. The system is trained to dynamically route the operation to an appropriate checker based on the risk. Id. The system can take remedial action for handling operation requests received from an error-prone maker by retraining how it routes operations received from the maker to a new checker or to multiple checkers. Id., [0069]. This approach provides technological solutions that address technological shortcomings, and provide improvements to accuracy, efficiency, and adaptability for handling requested operations, and reduce the likelihood of undetected errors propagating through a system that handles the requested operations.
The claims reflect this improvement. For example, claim 1 recites "generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request; determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type; transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score and the anomaly score by executing the routing engine trained for routing the operation data to the one or more computing devices based upon the risk score and the anomaly score generated using the maker identifier; in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier; transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier; and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device." As with Example 47, these features are not extra-solution activity, rather these features describe proactive measures to remediate the risk of an error-prone maker who provided an operation request rejected as an error. Further, upon extracting a feature vector, the anomaly detector generates an anomaly score, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request. See, e.g., Spec., [0069], [0071]-[0072], [0074]-[0075], [0087]-[0091]. As such, the claims recite patent-eligible subject matter, at least for the reasons articulated in Example 47. In this way, the amendments submitted with this paper further clarify that the claims describe technical operations for providing technical improvements discussed in the Specification.
Examiner’s Response:
Examiner respectfully disagrees with Applicant's argument because the claim does not recite an improvement to the machine-learning model or an improvement to the functioning of a computer in the technology field; rather, the claim recites mental processes ("extracting … a feature vector for the operation based upon a plurality of operation features extracted using the operation data including the one or more operation data records, the maker identifier and the operation data inputs for the operation", "determining … an operation-type for the operation… on the feature vector for the operation", "generating … a risk score for the operation by executing a risk-scoring … on the feature vector based on the maker identifier and an operation-type of the operation", "generating… an anomaly score based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request", "determining … one or more authorization thresholds for the operation based upon the risk score and the operation-type"). For example, a human can detect or identify whether a requested transaction is anomalous; for instance, a receptionist can tell whether the person checking in matches the person on the reservation by checking an identification card and evaluating features associated with the requested operation. The additional claim limitations are not integrated into a practical application, as the additional claim limitations recite:
-"training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers", "by applying a classifier of a machine-learning architecture on the feature vector for the operation;", "by the computer", "by executing a risk-scoring engine of the machine-learning architecture on the feature vector", "generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request", "in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier", "engine of the machine-learning architecture", and "executing, by the computer, the next operation using the next operation data from the maker client device". These additional limitations are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using generic computer components (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture", the processor, and the neural network) (see MPEP 2106.05(f)).
-"obtaining, by the computer, a request for an operation and operation data inputs via a user interface of a maker client device associated with a maker identifier;", "obtaining, one or more operation data records associated with the operation indicated by the request from one or more data sources", "transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score generated using the maker identifier;", "transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score generated using for the next operation received from the maker client device associated with the maker identifier", and "responsive to the computer receiving an indication of authorization for the next operation from the second checker client device". These limitations are recited at a high level of generality such that they amount to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception and cannot integrate a judicial exception into a practical application.
Therefore, the claim limitations are not integrated into a practical application and do not reflect an improvement to the functioning of a computer in the technology field, because the claim recites mental processes that are implemented by generic computer components (training the machine-learning model, the processor, the computer). The additional claim limitations recite insignificant extra-solution activity of data gathering, which does not amount to significantly more than the judicial exception and cannot integrate the judicial exception into a practical application.
Additionally, Applicant's argument regarding Example 47 is not persuasive. Applicant's argument does not explain in detail how the current claim limitations relate to Example 47, and the current claim limitations differ from Example 47 because they do not recite an improvement to the machine-learning model or an improvement to the functioning of a computer in the technology field. Therefore, the claim limitations are not integrated into a practical application.
Applicant’s Argument:
Additionally, the Director's recently published opinion in Ex Parte Desjardins provides compelling support for the patent-eligibility of the present amended claims and directly addresses the current §101 rejection in this Application. As with the claims found eligible in Ex Parte Desjardins, the amended claims here recite a specific technological solution to a problem rooted in computer technology. For example, the pending claims describe a machine-learning architecture that generates risk scores and anomaly scores based on maker identifiers and operation types, dynamically routes operation data to checker client devices according to authorization thresholds, and adaptively retrains the routing engine in response to a determination from a first checker device. This responsive retraining alters the routing and execution of later operations to a different checker device. As the Director stated, "the claims at issue do not simply automate a process previously performed by humans, but instead provide a technological solution to a technological problem by improving the way computers operate." See Desjardins, at 9. That is, the pending claims are not a mere automation of a known business process, but rather a technical improvement to how computer systems manage risk, adapt to user behavior, and control the flow of operations. For example, the pending claims recite a particular arrangement of functions for training a machine-learning architecture for risk-based routing, performing risk-based routing, receiving checker device determinations, and retraining the machine-learning architecture, which yields precisely the types of technological benefits raised by the Director and described in the Specification.
Accordingly, Applicant respectfully requests favorable reconsideration and withdrawal of the rejections to claims 1-20 under 35 U.S.C. § 101.
Examiner’s Response:
Examiner respectfully disagrees with Applicant's argument. Applicant's argument compares the current claim limitations to Ex Parte Desjardins; however, the current limitations are not analogous to the claims at issue in Ex Parte Desjardins.
The current claim limitations recite mental processes:
Extracting the feature vector: a human can identify the important features in the operation data or requested transaction.
Determining the operation type: a human can readily tell the type of the current transaction (e.g., buying or selling).
Generating the risk score: a human can assess the risk level of a particular transaction based on prior knowledge and user behavior.
Generating the anomaly score: a human can tell whether the current transaction is anomalous based on prior knowledge and user behavior.
The additional claim limitations recite:
Generic computer components (a processor, the computer, the machine-learning model) used to implement the above mental processes. Therefore, the claim is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture") (see MPEP 2106.05(f)).
Insignificant extra-solution activity of data gathering, which does not amount to significantly more than the judicial exception and cannot integrate the judicial exception into a practical application.
Therefore, the claim limitations are not indicative of integration into a practical application.
However, the claims at issue in Ex Parte Desjardins are different from the current claim limitations, as the claims in Ex Parte Desjardins recite a technological solution to a technological problem. Id. Finally, in Ex Parte Desjardins, the claims reflected a specific improvement that addressed the technical problem of "catastrophic forgetting" in continual learning systems, while allowing artificial intelligence systems to variously optimize system performance, use less storage capacity, and reduce system complexity.
Therefore, Applicant's argument is not persuasive, and the § 101 rejection is maintained.
In reference to Applicant's arguments regarding the rejections under 35 U.S.C. § 103:
Applicant’s Argument:
Applicant's arguments regarding the § 103 rejection are based on the claim amendments filed on 01/26/2026.
Examiner’s Response:
This argument is directed to the newly amended limitations. It has been fully considered but is moot in view of the new grounds of rejection presented below, which were necessitated by the amendment.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-10, 12-17, 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 analysis:
In the instant case, the claims are directed to a method (claims 1-3, 5-7), a system (claims 8-10, 12-14), and a computer program product (claims 15-17, 19-20). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Step 2A analysis:
Based on the claims being determined to be within one of the four statutory categories (Step 1), it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). In this case, the claims fall within the judicial exception of an abstract idea; specifically, the abstract idea of "Mental Processes/Concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" and mathematical concepts.
Claim 1 recites:
Step 2A: Prong 1 analysis:
-"extracting … a feature vector for the operation based upon a plurality of operation features extracted using the operation data including the one or more operation data records, the maker identifier and the operation data inputs for the operation": this is a mental process, as the human mind can extract a feature vector for the operation based on the operation data, including the data records (observation/evaluation).
-"determining … an operation-type for the operation… on the feature vector for the operation": this is a mental process, as the human mind can determine the operation type for the operation (observation/evaluation).
-"generating … a risk score for the operation by executing a risk-scoring … on the feature vector based on the maker identifier and an operation-type of the operation": this is a mental process, as the human mind can generate a risk score for the operation based on the maker identifier and the operation type (observation/evaluation).
-"generating… an anomaly score based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request": this is a mental process, as a human can determine or generate an anomaly score based on the maker identifier and the operation type; for example, a receptionist can tell whether the person checking in matches the person on the reservation by checking an identification card (observation/evaluation).
-"determining … one or more authorization thresholds for the operation based upon the risk score, the anomaly score and the operation-type": this is a mental process, as the human mind can determine the authorization thresholds for the operation based on the risk score, the anomaly score, and the operation type (observation/evaluation).
a) Step 2A: Prong 2 analysis:
-"training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers": this additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture") (see MPEP 2106.05(f)).
-"obtaining, by the computer, a request for an operation and operation data inputs via a user interface of a maker client device associated with a maker identifier;", "obtaining, one or more operation data records associated with the operation indicated by the request from one or more data sources", "transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score generated using the maker identifier;", "transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier": these limitations are recited at a high level of generality such that they amount to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception and cannot integrate a judicial exception into a practical application.
-"training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers", "by applying a classifier of a machine-learning architecture on the feature vector for the operation;", "by the computer", "by executing a risk-scoring engine of the machine-learning architecture on the feature vector", "generating, by the computer, by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation", "in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier", "engine of the machine-learning architecture", and "executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device": these additional limitations are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using a generic computer component (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture") (see MPEP 2106.05(f)).
-"and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.
b) Step 2B analysis:
-"training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers": this additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture") (see MPEP 2106.05(f)).
-"obtaining, by the computer, a request for an operation and operation data inputs via a user interface of a maker client device associated with a maker identifier;", "obtaining, one or more operation data records associated with the operation indicated by the request from one or more data sources", "transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score generated using the maker identifier;", "transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier": these limitations are recited at a high level of generality such that they amount to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception itself.
The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
-"training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers", "by applying a classifier of a machine-learning architecture on the feature vector for the operation;", "by the computer", "by executing a risk-scoring engine of the machine-learning architecture on the feature vector", "generating, by the computer, by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation", "in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier", "engine of the machine-learning architecture", and "executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device": these additional limitations are recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using a generic computer component (the machine-learning architecture, the computer, the routing engine, the "engine of the machine-learning architecture") (see MPEP 2106.05(f)).
Claim 2 recites:
a) Step 2A: Prong 2 analysis:
-"receiving, by the computer from a maker client device, one or more maker inputs indicating the one or more operation records for the operation": this limitation is recited at a high level of generality such that it amounts to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception and cannot integrate a judicial exception into a practical application.
b) Step 2B analysis:
-"receiving, by the computer from a maker client device, one or more maker inputs indicating the one or more operation records for the operation": this limitation is recited at a high level of generality such that it amounts to necessary data gathering. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity of data gathering to a judicial exception do not amount to significantly more than the judicial exception itself.
The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
Claim 3 recites:
a) Step 2A: Prong 2 analysis:
-"training, by the computer, the classifier of the machine-learning architecture to determine the operation type by applying the classifier of the machine-learning architecture": this additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).
-"by applying the classifier of the machine-learning architecture on a plurality of historic operation records, each historic operation record having a training label indicating the operation type": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.
b) Step 2B analysis:
-"training, by the computer, the classifier of the machine-learning architecture to determine the operation type by applying the classifier of the machine-learning architecture": this additional limitation is recited at a high level of generality and amounts to no more than mere instructions to apply the judicial exception using a generic computer component (see MPEP 2106.05(f)).
-"by applying the classifier of the machine-learning architecture on a plurality of historic operation records, each historic operation record having a training label indicating the operation type": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself.
Claim 5 recites:
a) Step 2A: Prong 2 analysis:
-"wherein the computer applies a risk model of the machine-learning architecture on historical data to generate the risk score, wherein the historical data includes error data associated with one or more operation features of the feature vector": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.
b) Step 2B analysis:
-"wherein the computer applies a risk model of the machine-learning architecture on historical data to generate the risk score, wherein the historical data includes error data associated with one or more operation features of the feature vector": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself.
Claim 6 recites:
a) Step 2A: Prong 2 analysis:
-"wherein the computer applies a routing model of the machine-learning architecture on historical data and the risk score to determine the one or more authorization thresholds": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.
b) Step 2B analysis:
-"wherein the computer applies a routing model of the machine-learning architecture on historical data and the risk score to determine the one or more authorization thresholds": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself.
Claim 7 recites:
a) Step 2A: Prong 2 analysis:
-"wherein the risk score indicates at least one of a probability of one or more errors in the one or more operation records or a level of risk associated with the operation": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not integrate the judicial exception into a practical application.
b) Step 2B analysis:
-"wherein the risk score indicates at least one of a probability of one or more errors in the one or more operation records or a level of risk associated with the operation": this limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself.
Claim 8 is rejected for the same reasons as claim 1, since these claims recite the same limitations.
Claim 9 is rejected for the same reasons as claim 2, since these claims recite the same limitations.
Claim 10 is rejected for the same reasons as claim 3, since these claims recite the same limitations.
Claim 12 is rejected for the same reasons as claim 5, since these claims recite the same limitations.
Claim 13 is rejected for the same reasons as claim 6, since these claims recite the same limitations.
Claim 14 is rejected for the same reasons as claim 7, since these claims recite the same limitations.
Claim 15 is rejected for the same reasons as claim 1, since these claims recite the same limitations.
Claim 16 is rejected for the same reasons as claim 2, since these claims recite the same limitations.
Claim 17 is rejected for the same reasons as claim 3, since these claims recite the same limitations.
Claim 19 is rejected for the same reasons as claim 5, since these claims recite the same limitations.
Claim 20 is rejected for the same reasons as claim 7, since these claims recite the same limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 7, 8-10, 14, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhaskar et al. (Pub. No. US 20200293878, hereafter Bhaskar), in view of Ramesh et al. (Pub. No. US 20210304204 A1, hereafter Ramesh), further in view of Scholl et al. (Pub. No. US 20200065820, hereafter Scholl), and further in view of Boue et al. (Pub. No. US 20240134972, hereafter Boue).
Regarding claim 1, Bhaskar teaches a computer-implemented method comprising: obtaining, by the computer, a request for an operation and operation data inputs via a user interface of a maker client device associated with a maker identifier (Bhasker, [Par. 0021-0022], "The fields of each transaction may vary, and may include … identifying one or more parties to the transaction (e.g., name, birth date, account identifier or username, email address, mailing address, internet protocol (IP) address, etc.)" and [0022], "For example, the transaction system 106 may provide user interfaces, such as graphical user interfaces (GUIs) through which clients, using client devices 102, may submit a transaction request and data fields associated with the request." Examiner's note: the transaction request is considered as the request for an operation, and the data fields are considered as the operation data inputs, including the fields identifying one or more parties to the transaction, which are considered as the maker identifier.),
obtaining, by the computer, one or more operation data records associated with the operation indicated by the request from one or more data sources (Bhasker, [Par. 0022], "Client devices 102 generally represent devices that interact with the transaction system in order to request transactions. For example, the transaction system 106 may provide user interfaces, such as graphical user interfaces (GUIs) through which clients, using client devices 102, may submit a transaction request and data fields associated with the request. In some instances, data fields associated with a request may be determined independently by the transaction system 106 (e.g., by independently determining a time of day, by referencing profile information to retrieve data on a client associated with the request, etc.). Client devices 102 may include any number of different computing devices. For example, individual client devices 102 may correspond to a laptop or tablet computer, personal computer, wearable computer, personal digital assistant (PDA), hybrid PDA/mobile phone, or mobile phone." Examiner's note: the data related to the transaction request and the data fields associated with the submitted request are obtained from one or more data sources.),
extracting, by the computer, a feature vector for the operation based upon a plurality of operation features extracted using the operation data including the one or more operation data records and the operation data inputs for the operation (Bhasker, [Par.0024], “The vector transformation unit 126 can comprise computer code that operates to transform categorical field values (e.g., names, email addresses, etc.) into high-dimensionality numerical representations of those field values. Each high-dimensionality numerical representations may take the form of a set of numerical values, referred to generally herein as a vector. In one embodiment, categorical field values are transformed into numerical representations by use of embedding techniques, such as word-level or character-level embedded, as discussed above. The modeling unit 130 can represent code that operates to generate and train a machine learning model, such as a hierarchical neural network, wherein the high-dimensionality numerical representations are first passed through one or more auxiliary neural networks before being passed to a main network. The trained model may then be utilized by the risk detection unit 134, which can comprise computer code that operates to pass new field values for an attempted transaction into the trained model to result in a classification as to the likelihood that the transaction is fraudulent.” Examiner’s note, transforming field values into a vector is considered analogous to extracting, by the computer, a feature vector for the operation based upon a plurality of operation features extracted using the operation data including the one or more operation data records, the maker identifier, and the operation data inputs for the operation.),
determining, by the computer, an operation-type for the operation by applying a classifier of a machine-learning architecture on the feature vector for the operation (Bhasker, [Par.0024], [Fig.3, Par. 0016], “For example, where a name is represented as a 100-dimension vector, an auxiliary network may take the 100-dimensions of each name as 100 input values, and produce a 3 to 5 neuron output. These outputs effectively represent a lower-dimensionality representation of the categorical variable value, which can be passed into a subsequent neural network. The outputs of a main network is established as the desired result (e.g., a binary classification of whether a transaction is or is not fraud). The auxiliary and main network are then concurrently trained, enabling the outputs of the auxiliary network represent a low-dimensionality representation that is specific to the desired output (e.g., a binary classification as fraudulent or non-fraudulent or multi-class classification with types of fraud/abuse), rather than a generalized low-dimensionality representation that would be achieved by embedding (which relies on an established, rather than concurrently trained, model). Thus, the low-dimensionality representation of a categorical variable produced by an auxiliary neural network is expected to maintain semantic or contextual information relevant to a desired final result, without requiring the high-dimensionality representation to be fed into a main model (which would otherwise incur the costs associated with attempting to model one or more high-dimensionality representations in a single model, as noted above). Advantageously, utilizing the lower-dimensionality output of the auxiliary network with the main network allows a user to test the interactions and correlations of categorical variables with non-categorical variables using fewer computing resources in comparison to existing methods.” Examiner’s note, using the neural network to predict whether the transaction is fraud or not);
generating, by the computer, a risk score for the operation by executing a risk-scoring engine of the machine-learning architecture on the feature vector based on the maker identifier and an operation-type of the operation (Bhasker, [Fig. 3A ; Par. 0031, “The machine learning system 118 (e.g., via the risk detection unit 134) may then apply the previously learned model to the transaction information, to obtain a likelihood that the transaction is fraudulent. At (8), the machine learning system 118 transmits the final risk score to the transaction system 106”; and [Par.0034], “The outputs of the auxiliary network represent inputs, or features 307, to the main network.”),
determining, by the computer, one or more authorization thresholds for the operation based upon the risk score and the operation-type (Bhasker, [Par. 0021, 0031], "FIG. 2B is a block diagram depicting an illustrative generation and flow of data for utilizing the machine learning system 118 within a networked environment, according to some embodiments. The data flow may begin when (5) a user, through client devices 102, requests initiation of a transaction on transaction system 106. For example, a user may attempt to purchase an item from a commercial retailer's online website. To aid in a determination as to whether to allow the transaction, the transaction system 106 submits the transaction information (e.g., including the fields discussed above) to the machine learning system 118, at (6). The machine learning system 118 (e.g., via the risk detection unit 134) may then apply the previously learned model to the transaction information, to obtain a likelihood that the transaction is fraudulent. At (8), the machine learning system 118 transmits the final risk score to the transaction system 106, such that the transaction system 106 can determine whether or not to allow the transaction. Illustratively, the transaction system may establish a threshold likelihood, such that any attempted transaction above the threshold is rejected or held for further processing (e.g., human or automated verification)." Examiner's note: whether the transaction is held or processed is determined based on the established threshold and the transaction type (e.g., amount or type).),
the risk score generated using the maker identifier (Bhasker, Fig. 3A; [Par. 0021, 0031], "The fields of each transaction may vary, and may include … identifying one or more parties to the transaction (e.g., name, birth date, account identifier or username, email address, mailing address, internet protocol (IP) address, etc.)" and "The machine learning system 118 (e.g., via the risk detection unit 134) may then apply the previously learned model to the transaction information, to obtain a likelihood that the transaction is fraudulent. At (8), the machine learning system 118 transmits the final risk score to the transaction system 106"; and [Par. 0034], "The outputs of the auxiliary network represent inputs, or features 307, to the main network." Examiner's note: the risk score is generated based on the maker identifier (the identified transaction fields, e.g., name, account identifier).),
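For illustration only, and not as a characterization of Bhaskar's actual implementation or of the claimed invention, the following simplified Python sketch shows the general kind of pipeline the cited passages describe: categorical transaction fields (e.g., a party identifier) are embedded into vectors, an auxiliary network reduces them to low-dimensional features, a main network outputs a fraud-risk likelihood, and a threshold determines whether the transaction is allowed or held for review. All names, weights, and threshold values below are hypothetical, and the networks use untrained, randomly initialized parameters.

# Illustrative sketch only; hypothetical names and values, untrained random weights.
import numpy as np

rng = np.random.default_rng(0)

def embed(value: str, dim: int = 8) -> np.ndarray:
    # Map a categorical field value (e.g., a name or email) to a fixed-length vector.
    # Seeding from the value keeps the mapping deterministic within a single run.
    local = np.random.default_rng(abs(hash(value)) % (2**32))
    return local.normal(size=dim)

def auxiliary_network(vec: np.ndarray, out_dim: int = 3) -> np.ndarray:
    # Reduce a high-dimensional categorical embedding to a low-dimensional representation.
    W = rng.normal(size=(vec.size, out_dim))
    return np.tanh(vec @ W)

def main_network(features: np.ndarray) -> float:
    # Produce a fraud-risk likelihood in [0, 1] from the combined feature vector.
    w = rng.normal(size=features.size)
    return float(1.0 / (1.0 + np.exp(-(features @ w))))

# Hypothetical transaction fields submitted with a request.
transaction = {"maker_id": "user_123", "email": "a@example.com", "amount": 250.0}

categorical = np.concatenate([auxiliary_network(embed(transaction["maker_id"])),
                              auxiliary_network(embed(transaction["email"]))])
feature_vector = np.concatenate([categorical, [transaction["amount"] / 1000.0]])

risk_score = main_network(feature_vector)
THRESHOLD = 0.7  # hypothetical authorization threshold
decision = "hold for review" if risk_score > THRESHOLD else "allow"
print(risk_score, decision)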
However, Bhaskar does not teach training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers; generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request; determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type; transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score; in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier; transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier; and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device.
On the other hand, Ramesh teaches training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers (Ramesh, [Par. 0035], "The payment account may be accessed and/or used through a browser application and/or dedicated payment application to engage in transaction processing through transaction processing application 122 that generates transactions used for training a machine learning or other AI model for prohibited transaction identification. Transaction processing application 122 may process the payment and may provide a transaction history that is used for transaction data in transaction data sets used to train and utilize the model for prohibited transaction identification." and [0063], "At step 408, a machine learning model is then iteratively trained using the flagged transactions and agent review of the false positives identified in the flagged transactions. Iteratively training may allow for retraining, adjusting weights and/or values of nodes with trees and/or hidden layers, and otherwise adjust the machine learning model to make better or different predictions, such as to lower or remove false positives. Once the machine learning model is trained, the machine learning model may be provided and/or output to one or more entities for prediction of prohibited transactions and generation of narratives." Examiner's note: the transaction history data, which includes transaction data with false positives, is used to train the machine-learning model; therefore, the historical transaction data is associated with the maker identifier. The machine-learning model is iteratively trained to adjust its weights based on the flagged transaction data with false positives.),
transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds (Ramesh, [Par. 0011-0017], "A service provider server, which may provide a prohibited transaction detection platform, may train a machine learning model through iterative training on a training data set. In this regard, a machine learning technique, such as gradient boosting or random forest algorithms, may be used to detect flagged transactions within the training data set that indicate potential fraud. These may be then reviewed by an agent to determine whether the flags may be false positives where the transactions were flagged but do not indicate fraud to a sufficient level to require reporting to a regulatory body (e.g., an authority that handles money laundering offenses and transactions). Once the false positives have been identified and used to retrain the model iteratively, the model may then provide more accurate results for prohibited transaction detection. Thereafter further transactions may be processed using the model to identify and flag any transactions for potential money laundering or other fraud…. [0017], For example, the explanation graph may include factors that weighed in favor of the decision, and may rank those factors as well as an overall rank, threshold, or score comparison that led to the transaction being flagged." Examiner's note: based on the output of the trained machine-learning model, the flagged transaction is sent to an agent for further review, wherein the machine-learning model predicts the transaction status based on a particular threshold (e.g., amount of money) and a score/risk score.);
in response to a determination from the first checker client device indicating that the operation is a rejected operation: retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier (Ramesh, [Par. 0033, 0063], "[0033], For example, transactions in the training data set and/or other data sets may be flagged using the machine learning technique to identify prohibited transactions, where the agent may indicate that the flagged transactions were not actually prohibited (e.g., not indicative or including money laundering). Identification of these false positives may be used to retrain the model of machine learning engine 132 in a continuous and/or iterative process so that false positives may be reduced and/or eliminated and machine learning engine 132 may more accurately predict and detect money laundering or other prohibited transactions" and [0063], "At step 408, a machine learning model is then iteratively trained using the flagged transactions and agent review of the false positives identified in the flagged transactions. Iteratively training may allow for retraining, adjusting weights and/or values of nodes with trees and/or hidden layers, and otherwise adjust the machine learning model to make better or different predictions, such as to lower or remove false positives. Once the machine learning model is trained, the machine learning model may be provided and/or output to one or more entities for prediction of prohibited transactions and generation of narratives." Examiner's note: the machine-learning model is retrained based on the false positives identified by the agent; for example, the agent (the first checker client device) identifies that a flagged transaction was not actually prohibited (e.g., not indicative of or including money laundering), and the identified false positive is therefore considered as the error associated with the maker identifier.);
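For illustration only, and not as a characterization of Ramesh's actual implementation, the following simplified Python sketch shows the general idea of iterative retraining on reviewer feedback described in the cited passages: flagged transactions are labeled by a reviewing agent (false positive versus actually prohibited), and the model's parameters are adjusted on that feedback so that future predictions produce fewer false positives. The data, labels, and learning-rate values below are hypothetical.

# Illustrative sketch only; hypothetical data, labels, and learning rate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical flagged transactions: feature vectors and reviewer labels
# (1 = actually prohibited / rejected, 0 = false positive cleared by the reviewing agent).
X = np.array([[0.9, 0.2], [0.8, 0.7], [0.1, 0.4], [0.2, 0.9]])
y = np.array([1, 1, 0, 0])

w = np.zeros(X.shape[1])  # scoring-model parameters to be adjusted

# Iterative retraining: each pass adjusts the parameters using the reviewer feedback,
# analogous to retraining a model after an agent marks certain flags as false positives.
for _ in range(200):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # logistic-regression gradient
    w -= 0.5 * grad                             # gradient step on the feedback data

print("updated parameters:", w)
print("re-scored flags:", sigmoid(X @ w).round(2))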
Ramesh and Bhaskar are analogous art because they share the same field of endeavor of fraudulent-transaction prediction.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the determining, by the computer, one or more authorization thresholds for the operation based upon the risk score and the operation-type, the risk score generated using the maker identifier, as taught by Bhaskar, to include training, by a computer, parameters of a routing engine of a machine-learning architecture for routing operation data to one or more computing device according to a risk score using historical operation data associated with maker identifiers; transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score; and, in response to a determination from the first checker client device indicating that the operation is a rejected operation, retraining, by the computer, the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier, as taught by Ramesh. The modification would have been obvious because one of ordinary skill in the art would have been motivated to provide more accurate results (Ramesh, [Par. 0011], "A service provider server, which may provide a prohibited transaction detection platform, may train a machine learning model through iterative training on a training data set. In this regard, a machine learning technique, such as gradient boosting or random forest algorithms, may be used to detect flagged transactions within the training data set that indicate potential fraud. These may be then reviewed by an agent to determine whether the flags may be false positives where the transactions were flagged but do not indicate fraud to a sufficient level to require reporting to a regulatory body (e.g., an authority that handles money laundering offenses and transactions). Once the false positives have been identified and used to retrain the model iteratively, the model may then provide more accurate results for prohibited transaction detection. Thereafter further transactions may be processed using the model to identify and flag any transactions for potential money laundering or other fraud.").
However, neither Bhasker nor Ramesh teaches generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request; determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type; the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score; transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier; and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device.
On the other hand, Scholl teaches transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score generated using for the next operation received from the maker client device associated with the maker identifier (Scholl, [Par.0079-0082], “The SA component 26 may generate an authentication request message, such as message 610 (shown in FIG. 6), to verify whether the transaction should be declined, for example for fraud. At an operation 716, the SA component 26 may transmit the authentication request message 610 to the cardholder 22, for example, via the cardholder's mobile device 40 running a user mobile application, such as the secondary authentication application 504, another user contact address, another user device, or to one or more thereof, etc. …[0082], In the exemplary embodiment, in response to receipt of the authentication request message 610, the cardholder 22 may input additional data after providing the requested biometrics, PIN, alphanumeric password, etc., as described above. For example, the cardholder 22 may check a box indicating that the transaction is fraudulent. The cardholder input may be received by the secondary authentication application 504 via the mobile device 40. The secondary authentication application 504 may generate a response message, such as the authentication response message 612 (shown in FIG. 6), that may be transmitted to the SA component 26 at an operation 720. The response message may be transmitted, for example, with the authentication information provided by the cardholder. In suitable embodiments, the transmitted data may also include location information of the cardholder's mobile device 40.” Examiner’s note: the system verifies whether the transaction should be declined based on the authentication response message/second input data/additional input data received from the cardholder.);
and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device (Scholl, [Par.0079-0087], “The SA component 26 may generate an authentication request message, such as message 610 (shown in FIG. 6), to verify whether the transaction should be declined, for example for fraud. At an operation 716, the SA component 26 may transmit the authentication request message 610 to the cardholder 22, for example, via the cardholder's mobile device 40 running a user mobile application, such as the secondary authentication application 504, another user contact address, another user device, or to one or more thereof, etc. …[0082], In the exemplary embodiment, in response to receipt of the authentication request message 610, the cardholder 22 may input additional data after providing the requested biometrics, PIN, alphanumeric password, etc., as described above. For example, the cardholder 22 may check a box indicating that the transaction is fraudulent. The cardholder input may be received by the secondary authentication application 504 via the mobile device 40. The secondary authentication application 504 may generate a response message, such as the authentication response message 612 (shown in FIG. 6), that may be transmitted to the SA component 26 at an operation 720. The response message may be transmitted, for example, with the authentication information provided by the cardholder. In suitable embodiments, the transmitted data may also include location information of the cardholder's mobile device 40.” Examiner’s note: the system verifies whether the transaction should be declined based on the authentication response message/second input data/additional input data received from the cardholder via the cardholder’s mobile device, wherein the cardholder’s mobile device is considered as the second checker client device.).
Bhasker, Ramesh and Scholl are analogous art because they share the same field of endeavor of fraud prediction.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teaching of Bhasker and Ramesh, namely the determining, by the computer, one or more authorization thresholds for the operation based upon the risk score and the operation-type, the risk score generated using the maker identifier; the transmitting, by the computer, the operation data to a first checker client device corresponding to the one or more authorization thresholds based on the risk score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score; and, in response to an indication that the operation is a rejected operation, the retraining, by the computer, of the routing engine of the machine-learning architecture by adjusting one or more parameters of the routing engine according to the indication that the operation is the rejected operation having an error associated with the maker identifier, as set forth above, to include the transmitting, by the computer, next operation data of a next operation to a second checker client device based upon a second risk score generated using for the next operation received from the maker client device associated with the maker identifier; and executing, by the computer, the next operation using the next operation data from the maker client device, responsive to the computer receiving an indication of authorization for the next operation from the second checker client device, as taught by Scholl. The modification would have been obvious because one of ordinary skill in the art would have been motivated to decide whether to decline the transaction (Scholl, [Par.0079], “The SA component 26 may generate an authentication request message, such as message 610 (shown in FIG. 6), to verify whether the transaction should be declined, for example for fraud. At an operation 716, the SA component 26 may transmit the authentication request message 610 to the cardholder 22, for example, via the cardholder's mobile device 40 running a user mobile application, such as the secondary authentication application 504, another user contact address, another user device, or to one or more thereof, etc. The authentication request message 610 may be transmitted by a number of communication methods as described herein and may, in some instances, be transmitted by more than one communication method (e.g., a request message may be transmitted via SMS and pushed to the secondary authentication application 504).”).
However, Bhasker, Ramesh and Scholl do not teach generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request; determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type; the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score; and a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier.
On the other hand, Boue teaches generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation (BOUE, [Par.0036-0038], “[0036] The anomaly detector 216 is implemented on the processor 208 and includes an EVT mechanism 218, the data collector 220, a score generator 222, and an anomaly identifier 224. The EVT mechanism 218 is a specialized processing unit that executes a primary machine learning (ML) model 219a or algorithm to perform one or more calculations described herein to calculate a probability value, calculate a threshold, and assign an outlier score based on the calculated probability value and threshold.” Examiner’s note: the outlier score is considered as the anomaly score, which is detected by the machine learning model based on the feature values for a particular operation of a particular device, as can be seen at [Par.0054], “the outlier score may indicate that a particular device has failed or is susceptible to failing and the triggered action is to initiate repair or replacement of the IoT device 234. In examples where the system 200 is a virtual computing machine 236 for a payment system, the outlier score may indicate an order of an unusual size or from an unusual account and the triggered action is to flag the order as potentially fraudulent and either decline to process the order or investigate the order prior to fulfillment.”),
the anomaly score indicating a likelihood that the operation data represents an anomalous operation request (BOUE, [Par.0054], “In some examples, a confirmed anomalous sample triggers a task, or action, to be executed. Triggered tasks are executed by the task executor 232. The task executor 232 is implemented on the processor 208 and executes the triggered task based on the outlier score being above the threshold level. In examples where the system 200 is an engineering system for one or more IoT devices 234 that detects an anomaly in an IoT device 234, the outlier score may indicate that a particular device has failed or is susceptible to failing and the triggered action is to initiate repair or replacement of the IoT device 234. In examples where the system 200 is a virtual computing machine 236 for a payment system, the outlier score may indicate an order of an unusual size or from an unusual account and the triggered action is to flag the order as potentially fraudulent and either decline to process the order or investigate the order prior to fulfillment.”).
determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type (BOUE, [Par.0047], “The primary ML model 219a receives the schema 227 as feedback regarding the outlier score and/or potential anomaly in the sample. In some examples, receiving the schema 227 as feedback triggers an action by the primary ML model 219a. For example, where the schema 227 is labeled with a 1 to indicate the sample was correctly identified as an anomaly, the schema 227 provides positive feedback to reinforce the threshold that was determined for the risk factors, and no additional adjustment is performed. In examples where the schema 227 is labeled with a 0 to indicate the sample is not an anomaly and was therefore given a score by the score generator 222 that led to the incorrect identification as an anomaly by the anomaly identifier 224, the primary ML model 219a adjusts the risk factors in order to optimize and redetermine the set of value thresholds {z_1, z_2, . . . z_n) associated with the respective risk factors.” Examiner’s note: the claim does not define what the authorization thresholds are; therefore, determining whether to trigger the action/task based on the outlier score, the potential anomaly (anomaly or not an anomaly) in the sample (a particular incident/operation type), and the threshold value corresponds to determining the authorization thresholds (determining whether to execute the task/action) based on a comparison of the outlier score against the threshold value to determine whether the potential anomaly is correctly or incorrectly detected, as can be seen at [Par.0061-0063].).
the one or more authorization thresholds based on the risk score and the anomaly score by executing the routing engine trained for routing the operation data to the one or more computing devices based upon the risk score and the anomaly score generated using the maker identifier (BOUE, [Par.0061-0064], “In operation 312, the score generator 222 generates an outlier score for the sample, assigned as log(1/q). In operation 314, the anomaly identifier 224 compares the generated outlier score to the determined set of value thresholds {z_1, z_2, . . . z_n) to determine whether to classify the sample for which the outlier score is generated as an anomaly or not an anomaly. Where the outlier score is less than the threshold, the anomaly identifier 224 determines the sample is not an anomaly in operation 316. Where the outlier score is not less than the threshold, e.g., the outlier score is the same as or greater than the threshold, the anomaly identifier 224 identifies the sample as an anomaly in operation 318.[0062] In operation 320, the investigator analyzes the identified anomaly to confirm whether or not the identified sample is indeed an anomaly or not. As described herein, the investigator 226 investigates the identified potential anomalies to either confirm the identified potential anomaly is an anomaly or reject the potential anomaly as not an anomaly. The investigator 226 returns the schema 227 to the primary ML model 219a that confirms the sample is an anomaly or that determines the identification of the sample of the anomaly was a false positive. Where the sample is determined to be a false positive, the schema 227 is returned to the primary ML model 219a, which proceeds to operation 316 to determine the sample is not an anomaly. Where the sample is confirmed to be an anomaly, the schema 227 is returned to the primary ML model 219a, which proceeds to operation 322 to trigger an action.[0063] In operation 322, the task executor 232 executes an action based on the confirmation of the sample as an anomaly. As described herein, the action being performed is particular to the type of system 200 executing the operations of the method 300. In examples where the system 200 is an engineering system for one or more IoT devices 234 that detects an anomaly in an IoT device 234, the outlier score may indicate that a particular device has failed or is susceptible to failing and the triggered action is to initiate repair or replacement of the IoT device 234. In examples where the system 200 is a virtual computing machine 236 for a payment system, the outlier score may indicate an order of an unusual size or from an unusual account and the triggered action is to flag the order as potentially fraudulent and either decline to process the order or investigate the order prior to fulfillment. In examples where the system 200 is a virtual storage system, the outlier score may indicate data being stored in an unusual location and the triggered action is to flag the stored data as potentially fraudulent.”).
a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier (BOUE, [Par.0070-0071], “In operation 422, the investigator 226 generates the schema 227 with a label of 0, for example, {r_1_t, r_2_t, . . . r_n_t; 0}, to indicate the sample is not an anomaly and a false positive based on the investigator 226 determining the sample is not an anomaly and therefore was mischaracterized by the anomaly identifier 224. The schema 227 is sent to the primary ML model 219a as feedback for the ML model of the anomaly detector 216. In contrast, in operation 424, the investigator 226 generates the schema 227 with a label of 1, for example, {r_1_t, r_2_t, . . . r_n_t; 1}, to confirm the sample is an anomaly and was correctly characterized by the anomaly identifier 224. The schema 227 is sent to the primary ML model 219a as feedback for the ML model of the anomaly detector 216. [0071] In operation 426, the primary ML model 219a updates. In some examples, the primary ML model 219a updates continuously based on receiving one or more of a notification of a new incident stored in the incident database 225, a schema 227 with a label of 0 following operation 422, or a schema 227 with a label of 1 following operation 424. In some examples, the primary ML model 219a updates by adjusting the risk factors in order to optimize and redetermine the set of value thresholds {z_1, z_2, . . . z_n) associated with the respective risk factors. In some examples, the risk factors are adjusted based on an analysis performed based on a comparison of an adjustment mode to a value from a uniform distribution. Following the update to the primary ML model 219a, the method 400 returns to operation 402 and selects a new set of risk factors for a next iteration of the method 400.” Examiner’s note: the machine learning model updates the risk factors and redetermines the set of threshold values based on newly received incident data.).
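For illustration only, the scoring and feedback loop described in Boue at [0061]-[0071] can be sketched as follows, where each sample receives an outlier score of log(1/q) that is compared against a threshold and investigator feedback (the schema label of 0 or 1) adjusts that threshold; the single scalar threshold, the probability values, and the adjustment step are illustrative assumptions rather than teachings of Boue.

    import math
    import random

    random.seed(1)
    threshold = 3.0                      # illustrative stand-in for the value thresholds {z_1, ..., z_n}

    def outlier_score(q):
        # Boue assigns the sample a score of log(1/q), where q is an estimated probability.
        return math.log(1.0 / q)

    def investigate(q):
        # Stand-in investigator: returns 1 to confirm an anomaly, 0 to reject it as a false positive.
        return 1 if q < 0.01 else 0

    for step in range(5):
        q = random.uniform(1e-4, 0.2)    # assumed probability estimate for the incoming sample
        score = outlier_score(q)
        if score >= threshold:           # compare the score to the threshold (cf. operation 314)
            label = investigate(q)       # investigator feedback, cf. the schema label of 0 or 1
            if label == 0:
                threshold += 0.1         # illustrative adjustment after a false positive (cf. operation 426)
        print(f"step {step}: q={q:.4f} score={score:.2f} threshold={threshold:.2f}")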
Bhasker, Ramesh, Scholl and BOUE are analogous art because they share the same field of endeavor of detecting anomalous events.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the obtaining, by the computer, one or more operation data records associated with the operation indicated by the request from one or more data sources; extracting, by the computer, a feature vector for the operation based upon a plurality of operation features extracted using the operation data including the one or more operation data records and the operation data inputs for the operation; determining, by the computer, an operation-type for the operation by applying a classifier of a machine-learning architecture on the feature vector for the operation; generating, by the computer, a risk score for the operation by executing a risk scoring engine of the machine-learning architecture on the feature vector based on the maker identifier and an operation-type of the operation; and determining, by the computer, one or more authorization thresholds for the operation based upon the risk score and the operation-type, the risk score generated using the maker identifier, as taught by Bhasker, to include the generating, by the computer, an anomaly score by executing an anomaly detector of the machine-learning architecture on the feature vector based on the maker identifier and the operation-type of the operation, the anomaly score indicating a likelihood that the operation data represents an anomalous operation request; determining, by the computer, one or more authorization thresholds for the operation based upon the risk score, the anomaly score, and the operation-type; the one or more authorization thresholds based on the risk score and the anomaly score by executing a routing engine trained for routing the operation data to one or more computing devices based upon the risk score and the anomaly score; and a second checker client device based upon a second risk score and a second anomaly score generated using for the next operation received from the maker client device associated with the maker identifier, as taught by BOUE. The modification would have been obvious because one of ordinary skill in the art would have been motivated to maintain the low-latency and real-time requirements of the computing device (BOUE, [Par.0020], “Aspects of the present disclosure provide numerous technical solutions that improve the functioning of the computing device that executes the ML model. For example, the implementation of EVT into the anomaly detector that executes the ML model enables risk factors to be expressed as a mathematical probability, rather than an arbitrary score that cannot be directly interpreted as a probability. The ML model is continually updated and improved due to the feedback loop present between the ML model and the investigator, which produces feedback regarding potential anomalies identified, in order to intelligently optimize the threshold for anomalous samples. For example, risk factors and an initial calibration sample of data may be adjusted based on the feedback received from the investigator, which intelligently optimizes the threshold for anomalous samples while maintaining low latency and real-time requirements of the computing device.”).
Regarding claim 2, Bhasker teaches the computer-implemented method of claim 1, further comprising receiving, by the computer from a maker client device, one or more maker inputs indicating the one or more operation records for the operation (Bhasker, [Par.0021-0022], “The transaction system 106 illustratively represents a network-based transaction facilitator, which operates to service requests from clients (via client devices 102) to initiate transactions. The transactions may illustratively be purchases or acquisitions of physical goods, non-physical goods, services, etc. Many different types of network-based transaction facilitators are known within the art. Thus, the details of operation of the transaction system 106 may vary across embodiments, and are not discussed herein. However, for the purposes of discussion, it is assumed that the transaction system 106 maintains historical data correlating various fields related to a transaction with a final outcome of the transaction (e.g., as fraudulent or non-fraudulent). The fields of each transaction may vary, and may include fields such as a time of transaction, and amount of the transaction, fields identifying one or more parties to the transaction (e.g., name, birth date, account identifier or username, email address, mailing address, internet protocol (IP) address, etc.), the items to which the transaction pertains (e.g. characteristics of the items, such as the departure and arrival airports for a flight purchased, a brand of item purchased, etc.), payment information for the transaction (e.g., type of payment instrument or a credit card number used), or other constraints on the transaction (e.g., whether the transaction is refundable). Outcomes of each transaction may be determined by monitoring those transactions after they have completed, such as by monitoring “charge-backs” to transactions later reported as fraudulent by an impersonated individual. The historical transaction data is illustratively stored in a data store 110, which may be a hard disk drive (HDD), solid state drive (SSD), network attached storage (NAS), or any other persistent or substantially persistent data storage device.” Examiner’s note: the user may, in some embodiments, input fields identifying which profile or account they are using; this may be considered analogous to receiving, by the computer from a maker client device, one or more maker inputs indicating the one or more operation records for the operation.).
Regarding claim 3, Bhasker teaches the computer-implemented method of claim 1, further comprising training, by the computer, the classifier of the machine-learning architecture to determine the operation type by applying the classifier of the machine-learning architecture on a plurality of historic operation records, each historic operation record having a training label indicating the operation type (Bhasker, [Par.0025-0026, 0030], “The interactions begin at (1), where the transaction system 106 transmits to machine learning system 118 historical transaction data. In some embodiments, the historical transaction data may comprise raw data of past transactions that have been processed or submitted to the transaction system 106. For example, the historical data may be a list of all transactions made on the transaction system 106 over the course of a three-month period, as well as fields related to the transaction, such as such as a time of transaction, and amount of the transaction, fields identifying one or more parties to the transaction (e.g., name, birth date, account identifier or username, email address, mailing address, internet protocol (IP) address, etc.), the items to which the transaction pertains (e.g. characteristics of the items, such as the departure and arrival airports for a flight purchased, a brand of item purchased, etc.), payment information for the transaction (e.g., type of payment instrument or a credit card number used), or other constraints on the transaction (e.g., whether the transaction is refundable). The historical data is illustratively “tagged” or labeled with an outcome of the transaction with respect to a desired categorization. For example, each transaction can be labelled as “fraudulent” or “not fraudulent.” In some embodiments, the historical data may be stored and transmitted in the form of a text file, a tabulated spreadsheet, or other data storage format”).
Regarding claim 7, Bhasker teaches the computer-implemented method of claim 1, wherein the risk score indicates at least one of a probability of one or more errors in the one or more operation records or a level of risk associated with the operation (Bhasker, [Par.0031], “FIG. 2B is a block diagram depicting an illustrative generation and flow of data for utilizing the machine learning system 118 within a networked environment, according to some embodiments. The data flow may begin when (5) a user, through client devices 102, requests initiation of a transaction on transaction system 106. For example, a user may attempt to purchase an item from a commercial retailer's online website. To aid in a determination as to whether to allow the transaction, the transaction system 106 submits the transaction information (e.g., including the fields discussed above) to the machine learning system 118, at (6). The machine learning system 118 (e.g., via the risk detection unit 134) may then apply the previously learned model to the transaction information, to obtain a likelihood that the transaction is fraudulent. At (8), the machine learning system 118 transmits the final risk score to the transaction system 106, such that the transaction system 106 can determine whether or not to allow the transaction. Illustratively, the transaction system may establish a threshold likelihood, such that any attempted transaction above the threshold is rejected or held for further processing (e.g., human or automated verification)” Examiner’s note: the risk score indicates the probability of whether the transaction is fraudulent.).
Claim 8 is rejected for the same reasons as claim 1, since these claims recite the same limitations.
Claim 9 is rejected for the same reasons as claim 2, since these claims recite the same limitations.
Claim 10 is rejected for the same reasons as claim 3, since these claims recite the same limitations.
Claim 14 is rejected for the same reasons as claim 7, since these claims recite the same limitations.
Claim 15 is rejected for the same reasons as claim 1, since these claims recite the same limitations.
Additionally, Bhasker teaches a computer-readable medium comprising a non-transitory storage memory configured to store machine-readable instructions that when executed by a processor instruct the processor to (Bhasker, [Par.0053], “The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a similarity detection system, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An illustrative storage medium can be coupled to the similarity detection system such that the similarity detection system can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the similarity detection system.”).
Claim 16 is rejected for the same reasons as claim 2, since these claims recite the same limitations.
Claim 17 is rejected for the same reasons as claim 3, since these claims recite the same limitations.
Claim 20 is rejected for the same reasons as claim 7, since these claims recite the same limitations.
Claims 5, 12, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bhaskar et al. (Pub. No. US 20200293878, hereafter Bhaskar), in view of Ramesh et al. (Pub. No. US 20210304204 A1, hereafter Ramesh), further in view of Scholl et al. (Pub. No. US 20200065820, hereafter Scholl), further in view of Boue et al. (Pub. No. US 20240134972, hereafter Boue), and further in view of Chatterjee et al. (Pub. No. US 20240211544, hereinafter Chatterjee).
Regarding claim 5, Bhasker teaches the computer-implemented method of claim 1, wherein the computer applies a risk model of the machine-learning architecture on historical data to generate the risk score (Bhasker, [Par.0026-0031], “[0030], After generating the network structure, the modeling unit 130 may train the network utilizing at least a portion of the historical transaction data. General training of defined neural network structures is known in the art, and thus will not be described in detail herein. However, in brief, the modeling unit 130 may, for example, divide the historical data into multiple data sets (e.g., training, validation, and test sets) and process the data sets using the hierarchical neural network (the overall network, including auxiliary, main, and any intermediary networks) to determine weights applied at each node to input data. As an end result, a final model may be generated that takes as input fields from a proposed transaction, and results as an output the probability that the fields will be placed into a given category (e.g., fraudulent or non-fraudulent). And [0031], FIG. 2B is a block diagram depicting an illustrative generation and flow of data for utilizing the machine learning system 118 within a networked environment, according to some embodiments. The data flow may begin when (5) a user, through client devices 102, requests initiation of a transaction on transaction system 106. For example, a user may attempt to purchase an item from a commercial retailer's online website. To aid in a determination as to whether to allow the transaction, the transaction system 106 submits the transaction information (e.g., including the fields discussed above) to the machine learning system 118, at (6). The machine learning system 118 (e.g., via the risk detection unit 134) may then apply the previously learned model to the transaction information, to obtain a likelihood that the transaction is fraudulent. At (8), the machine learning system 118 transmits the final risk score to the transaction system 106, such that the transaction system 106 can determine whether or not to allow the transaction. Illustratively, the transaction system may establish a threshold likelihood, such that any attempted transaction above the threshold is rejected or held for further processing (e.g., human or automated verification)”).
However, Bhasker does not teach wherein the historical data includes error data associated with one or more operation features of the feature vector.
On the other hand, Chatterjee teaches wherein the historical data includes error data associated with one or more operation features of the feature vector (Chatterjee, [Par.0055], “in a number of embodiments, the sub-dataset for training the second machine learning module in block 521 can include historical data points that comprise label errors existing in the original feature values or introduced in the feature values imputed in block 510. In certain embodiments, the sub-dataset can include some (e.g., one third, a half, or two thirds, etc.) of the historical data points while the rest of the historical data points are used as validation data. In some embodiments, the warm-up stage can include multiple (e.g., 20, 30, etc.) iterations for training the second machine learning module in block 521.”).
Bhasker, Ramesh and Chatterjee are analogous art because they share the same field of endeavor of machine-learning model generation.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer applying a risk model of the machine-learning architecture on historical data to generate the risk score, as taught by Bhasker, to include wherein the historical data includes error data associated with one or more operation features of the feature vector, as taught by Chatterjee. The modification would have been obvious because one of ordinary skill in the art would have been motivated to improve the quality of the training dataset (Chatterjee, [Par.0038], “in some embodiments, the historical change data can include various issues, such as missing or incorrect feature values, class imbalance (e.g., significantly more normal change requests than risky change requests being submitted and/or approved), and/or gradual changes in data distribution, etc. Accordingly, determining the training dataset in block 410 further can include one or more procedures, processes, activities, and/or blocks to address some or all of these issues and improve the quality of the training dataset.”).
Claim 12 is rejected for the same reasons as claim 5, since these claims recite the same limitations.
Claim 19 is rejected for the same reasons as claim 5, since these claims recite the same limitations.
Claims 6, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bhaskar et al. (Pub. No. US 20200293878, hereafter Bhaskar), in view of Ramesh et al. (Pub. No. US 20210304204 A1, hereafter Ramesh), further in view of Scholl et al. (Pub. No. US 20200065820, hereafter Scholl), further in view of Boue et al. (Pub. No. US 20240134972, hereafter Boue), and further in view of Ameisen et al. (Pub. No. US 20240095741, hereinafter Ameisen).
Regarding claim 6, Bhasker as modified in view of Ameisen teaches wherein the computer applies a routing model of the machine-learning architecture on historical data and the risk score to determine the one or more authorization thresholds (Ameisen, [Par.0059], “In embodiments, to ensure a consistent block rate, to satisfy user expectations, to maintain a level of consistency of a fraud detection system, and to enable deployment of the new MLM providing improved fraud detection, processing logic determines a second threshold value using a set of prior transactions input into the second fraud detection MLM that results in a second block rate of the second fraud detection MLM within a predetermined margin of the first threshold (processing block 306). That is, in embodiments, processing logic determines a second, different threshold, that when applied to the fraud scores of transactions generated by the second fraud detection MLM results in a block rate within the predetermined margin. Such a second threshold value will therefore maintain a consistent block rate when applied with the second MLM as when the first threshold is applied with the first MLM.”).
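For illustration only, the threshold-calibration approach described in Ameisen at [0059] can be sketched as follows, where the second threshold is chosen so that the block rate of the second model on prior transactions matches the block rate produced by the first model and its threshold; the score distributions and the quantile-based selection are illustrative assumptions rather than teachings of Ameisen.

    import numpy as np

    rng = np.random.default_rng(0)
    old_scores = rng.beta(2, 8, size=10_000)     # fraud scores from the first MLM on prior transactions (synthetic)
    new_scores = rng.beta(3, 7, size=10_000)     # scores from the second MLM on the same transactions (synthetic)
    old_threshold = 0.6                          # first threshold applied with the first MLM

    old_block_rate = float(np.mean(old_scores >= old_threshold))
    # Pick the new threshold as the score quantile that reproduces the old block rate.
    new_threshold = float(np.quantile(new_scores, 1.0 - old_block_rate))
    new_block_rate = float(np.mean(new_scores >= new_threshold))
    print(f"old block rate={old_block_rate:.4f}, new threshold={new_threshold:.3f}, new block rate={new_block_rate:.4f}")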
Bhasker, Ramesh and Ameisen are analogous art because they share the same field of endeavor of machine-learning model generation.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the fraud prediction, as taught by Bhasker, to include the computer applying a routing model of the machine-learning architecture on historical data and the risk score to determine the one or more authorization thresholds, as taught by Ameisen. The modification would have been obvious because one of ordinary skill in the art would have been motivated to improve the fraud detection system (Ameisen, [Par.0060], “Since the second fraud detection threshold when used with the second MLM maintains a consistent block rate, the user's expectations are maintained, the fraud detection consistency is maintained, and the improved fraud detection MLM is deployed. Therefore, improved fraud detection is enabled that maintains user expectations to ensure a smooth transition to the new and improved MLM.”).
Claim 13 is rejected for the same reasons as claim 6, since these claims recite the same limitations.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant’s disclosure, is provided below.
Korup et al. (Pub. No. US 20220327550, hereinafter Korup) teaches a machine learning model to predict the potential for fraud in a transaction.
Koren et al. (Pub. No. US 20220327504, hereinafter Koren) teaches a machine learning model to predict the risk of fraud for a transaction.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EM N TRIEU whose telephone number is (571)272-5747. The examiner can normally be reached on Mon-Fri from 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached on (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.T./Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128