Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The following is a Final Office Action in response to communications received June 16, 2025. Claims 1-20 are pending and examined.
Response to Amendments and Arguments
As to the objection to the Drawings, Applicant’s amendments fully address the objection, which is accordingly withdrawn.
As to the rejection of Claims 1-20 under 35 U.S.C. § 101, Applicant’s arguments and amendments have been fully considered but are not persuasive. Applicant first argues that the claims do not fall within any of the sub-groupings for “certain methods of organizing human activity.” Examiner disagrees. The claims are directed toward improving a system for substantiating the fidelity of commercial transactions, which falls under the sub-grouping of commercial or legal interactions. Applicant also argues that the claims as a whole integrate the judicial exception into a practical application by applying, relying on, or using the judicial exception in a manner that imposes a meaningful limit on it. Examiner disagrees. The claims in the instant application include an abstract idea and, when considered as a whole, the claims (independent and dependent) do not integrate the exception into a practical application; they merely add the words “apply it” to the judicial exception, recite mere instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional elements do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Simply relying on a computer to perform routine tasks or calculations more quickly or more accurately is insufficient to render a claim patent eligible. See Alice, 134 S. Ct. at 2359 (“use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions” is not an inventive concept); Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Can. (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012) (a computer “employed only for its most basic function . . . 
does not impose meaningful limits on the scope of those claims”); cf. DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258–59 (Fed. Cir. 2014) (finding a computer-implemented method patent eligible where the claims recite a specific manipulation of a general-purpose computer such that the claims do not rely on a “computer network operating in its normal, expected manner”). It is for these reasons that the present claims do not rise to the level of improvements to computer functionality or system operation as in McRO. The rejection is thereby maintained.
As to the rejection of claims 1-20 under 35 U.S.C. § 103, Applicant's arguments are moot given the new grounds of rejection for the claims as amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
(Step 1) The claims recite a method, a system, and an article of manufacture, each of which falls within a statutory category. For the purposes of this analysis, representative claim 1 is addressed.
(Step 2A, prong 1) The abstract idea is shown in bold below and falls within certain methods of organizing human activity: a method of testing the effectiveness of a transaction monitoring system.
A computer-implemented method to test an effectiveness of a transaction monitoring system, the method comprising:
executing a reinforcement learning agent to perform a sequence of test transactions, wherein the transaction monitoring system is configured to detect transactions that are suspicious based on satisfying a scenario that defines a suspicious activity, and wherein the reinforcement learning agent generates the sequence of test transactions to cumulatively transfer an amount without detection by the scenario; recording the sequence of test transactions along with a set of responses made by the transaction monitoring system in response to each test transaction being performed, wherein the set of responses includes at least an alert status of detection by the scenario, and wherein the alert status indicates one of an alert for the suspicious activity is triggered or the alert for suspicious activity is not triggered; generating an alert-based metric that represents the effectiveness of the transaction monitoring system for resisting the suspicious activity based on identifying one or more alerts that are triggered among the alert statuses in the set of responses; [[and]] automatically adjust the scenario based at least in part on the alert-based metric; generating, for display in a graphical user interface, a visualization of the alert-based metric that represents the effectiveness of the transaction monitoring system for resisting the suspicious activity and an option to accept the adjusted scenario; and in response to selection of the option to accept, automatically deploying the adjusted scenario into the transaction monitoring system to adjust resistance to the suspicious activity.
(Step 2A, prong 2) The additional elements are considered as follows:
“A computer-implemented method”: This is merely “apply it.” The computer is claimed at a high level of generality; it receives the information, performs the abstract idea, and outputs the results.
“reinforcement learning agent” and “transaction monitoring system”: These are described in Applicant’s Specification at ¶[0030] as “a computer-implemented system for autonomously selecting and performing test transactions in response to states of a transaction environment… To execute the reinforcement learning agent, a computer reads and implements instructions that cause the reinforcement learning agent to select and perform the test transactions in accordance with the policy. In one embodiment, the transaction system and the transaction monitoring system are an environment that is configured to simulate an actual transaction system and transaction monitoring system”. These are merely “apply it,” as they are claimed at a high level of generality; they receive the information, perform the abstract idea, and output the results.
“non-transitory computer-readable medium having stored thereon computer-executable instructions that, when executed by a processor accessing memory of a computer”: This is merely “apply it.” The computer, processor, and memory are claimed at a high level of generality; they receive the information, perform the abstract idea, and output the results.
(Step 2B) The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional elements amount to no more than mere instructions to apply the abstract idea of testing the effectiveness of a transaction monitoring system using generic computer components. The claim elements, considered both separately and as an ordered combination, do not add significantly more than implementing the abstract idea of testing the effectiveness of a transaction monitoring system over a generic computer network with generic computing elements and generic hardware.
Dependent claims 2-8, 10-14, and 16-20 recite additional details that only further narrow the abstract idea and do not add any additional features, alone or in combination, that would provide a practical application or significantly more. For example, Claims 2-4 further narrow the limitation “generating the metric” of Claim 1. Claim 5 further narrows the limitations “recording the sequence of test transactions performed by the reinforcement learning agent” and “generating the metric” of Claim 1. Claim 6 further narrows the limitations “recording the sequence of test transactions performed by the reinforcement learning agent,” “generating the metric,” and “generating the visualization of the metric” of Claim 1. Claims 7 and 8 further narrow Claim 1. Claims 10-14 and 16-20 similarly narrow the respective independent Claims 9 and 15 and are rejected under the same reasoning.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zoldi et al. (Publication No.: US 2017/0270534 A1) in view of Kala et al. (Publication No.: US 2021/0081948 A1).
As to Claim 1, Zoldi teaches
a computer-implemented method to test an effectiveness of a transaction monitoring system, the method comprising:
recording the sequence of test transactions along with a set of responses made by the transaction monitoring system in response to each test transaction being performed, wherein the set of responses includes at least an alert status of detection by the scenario, and wherein the alert status indicates one of an alert for the suspicious activity is triggered or the alert for suspicious activity is not triggered (see ¶[0055] – “The feature construction module converts the input data (of multiple types, such as categorical, ordinal, text), and convert them into numerical features, which are stored and retrieved from profile data stores. Relevant entities to be profiled for AML include customers and accounts. A distinction is made between customers and accounts who have direct relationships with the financial institution (on-us) and those who do not (off-us). Once features are constructed, this numerical vector is converted to the AML Alert Score using the scoring and calibration module.”);
generating an alert-based metric that represents the effectiveness of the transaction monitoring system for resisting suspicious activity based on identifying one or more alerts that are triggered among the alert statuses in the set of responses (see ¶[0055]); [[and]]
(see ¶[0055], and ¶[0082] – “To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT), a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well.”).
Although Zoldi substantially teaches the invention of Claim 1, it does not explicitly teach that the machine learning agent is a reinforcement learning agent; executing a [machine] learning agent to perform a sequence of test transactions, wherein the transaction monitoring system is configured to detect transactions that are suspicious based on satisfying a scenario that defines a suspicious activity, and wherein the [machine] learning agent generates the sequence of test transactions to cumulatively transfer an amount without detection by the scenario; automatically adjusting the scenario based at least in part on the alert-based metric; generating, for display in a graphical user interface, a visualization of the alert-based metric that represents the effectiveness of the transaction monitoring system for resisting the suspicious activity and an option to accept the adjusted scenario; or, in response to selection of the option to accept, automatically deploying the adjusted scenario into the transaction monitoring system to adjust resistance to the suspicious activity.
Kala, however, does teach these limitations: that the machine learning agent is a reinforcement learning agent; executing a [machine] learning agent to perform a sequence of test transactions, wherein the transaction monitoring system is configured to detect transactions that are suspicious based on satisfying a scenario that defines a suspicious activity, and wherein the [machine] learning agent generates the sequence of test transactions to cumulatively transfer an amount without detection by the scenario; automatically adjusting the scenario based at least in part on the alert-based metric; generating, for display in a graphical user interface, a visualization of the alert-based metric that represents the effectiveness of the transaction monitoring system for resisting the suspicious activity and an option to accept the adjusted scenario; and, in response to selection of the option to accept, automatically deploying the adjusted scenario into the transaction monitoring system to adjust resistance to the suspicious activity (see ¶[0042] – “The GUI 200 may include a button 220 that may be selectable by the issuer to run a risk test using the selected fraud rules. In some embodiments, the risk test may include applying the selected one or more fraud rules to the test transaction data to determine one or more suspected fraudulent transactions in the test transaction data. In some embodiments, the risk test may determine a fraud detection rate 222 based on the proportion of fraudulent transactions are identified in the fraud test out of the number of known fraudulent transactions in the test transaction data. In some embodiments, the GUI 200 may also provide additional details based on percentages of false positive findings of fraudulent transactions (e.g., known legitimate transactions identified by the test as likely fraudulent). 
Based on the test results, the issuer may tweak the fraud rules, acceptable risk threshold, fraud rate thresholds, or other parameters in order to improve the fraud detection rate and minimize false positive findings.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the test-transaction generation and scenario adjustment taught in Kala into the machine learning fraud detection system of Zoldi, as both references pertain to detecting fraudulent activity and improving such detection.
As to Claim 2, Zoldi teaches the computer-implemented method of claim 1, wherein generating the metric further comprises
counting a number of alerts triggered by the sequence of test transactions under each of a set of scenarios in the transaction monitoring system, and
calculating a relative effectiveness of the scenario based on the numbers of alerts for the scenarios in the set of scenarios, wherein the metric is the relative effectiveness; and wherein generating the visualization of the alert-based metric further comprises including the proportion of the relative effectiveness of the scenario in the visualization along with proportions of relative effectiveness of other scenarios in the set (see ¶[0006] – “This document describes an automated system for detecting risky entity behavior using an efficient frequent behavior-sorted list. From these lists, fingerprints and distance measures can be constructed to enable comparison to known risky entities. The lists also facilitate efficient linking of entities to each other, such that risk information propagates through entity associations. These behavior sorted lists, in combination with other profiling techniques, which efficiently summarize information about the entity within a data store, can be used to create threat scores. These threat scores may be applied within the context of anti-money laundering (AML) and retail banking fraud detection systems. A particular instantiation of these scores elaborated herein is the AML Threat Score, which is trained to identify behavior for a banking customer that is suspicious and indicates high likelihood of money laundering activity”).
As to Claim 3, Zoldi teaches the computer-implemented method of claim 1, wherein generating the metric further comprises
counting a number of alerts triggered by the sequence of test transactions,
determining an amount of time taken by the [machine] learning agent to transfer the amount to a goal account, and
calculating a number of cumulative alerts over a given time period based on the number of alerts triggered and the amount of time, wherein the metric is the number of cumulative alerts over the given time period; and
wherein generating the visualization of the metric further comprises including the number of cumulative alerts in the visualization (see ¶[0075]-¶[0078] – “The key processes in the risk-linking are (1) the storing of the score in the profiles of both parties to a transactions, and (2) in the scoring and calibration module (see FIG. 3), using the risk-linking features to create a “risk-linked score”, which is merged from the raw score. The feature creation module (see FIG. 3) is responsible for decaying the risk-linked features (scores), and a number of decay strategies are possible: [0076] a) Event-averaged decay, where the scores are decayed based on the number of events that have occurred since the high-scoring event. [0077] b) Time-based decay, where the scores are decayed based on the time between the current event and the earlier high-scoring event. [0078] c) Time-to-live decay, where the scores are preserved (unaltered) for a certain number time-period.”).
Although Zoldi substantially teaches the invention of Claim 3, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
As to Claim 4, Zoldi teaches the computer-implemented method of claim 1, wherein generating the metric further comprises
determining a first alert that is an earliest alert triggered among the set of responses, and
determining a portion of an amount to be transferred to a goal account that is transferred without alert before the first alert, wherein the metric is the portion of the amount that is transferred before the first alert; and
wherein generating the visualization of the metric further comprises including the portion of the amount that is transferred before the first alert in the visualization (see ¶[0033]).
As to Claim 5, Zoldi teaches the computer-implemented method of claim 1,
wherein recording the sequence of test transactions performed by the [machine] learning agent further comprises executing the [machine] learning agent to generate multiple episodes of transactions;
wherein generating the metric further comprises determining a value for the metric for each of the multiple episodes, and calculating an average of the values of the metric; and
wherein generating the visualization of the metric for display further comprises including the average of the values for the metric in the visualization (see ¶[0065]-¶[0066]).
Although Zoldi substantially teaches the invention of Claim 5, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
As to Claim 6, Zoldi teaches the computer-implemented method of claim 1,
wherein recording the sequence of test transactions performed by the [machine] learning agent further comprises executing the [machine] learning agent to generate multiple episodes of transactions;
wherein generating the metric further comprises
determining a count of episodes among the multiple episodes in which no alert occurred and an amount was completely transferred to a goal account, and
calculating a ratio of episodes in which the amount is completely transferred to the destination account without alerts based on the count and a total number of the multiple episodes, wherein the metric is the ratio of episodes in which the amount is completely transferred without alerts; and
wherein generating the visualization of the metric further comprises including the ratio of episodes in which the amount is completely transferred without alerts in the visualization (see ¶[0065]-¶[0066]).
Although Zoldi substantially teaches the invention of Claim 6, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
As to Claim 7, Zoldi teaches the computer-implemented method of claim 1, further comprising:
accepting an input that re-configures the transaction monitoring system by adjusting the scenario of the system from a first set of thresholds to a second set of thresholds;
re-training the [machine] learning agent to perform an additional sequence of test transactions to cumulatively transfer the amount without detection by the adjusted scenario that applies the second set of thresholds;
recording the additional sequence of test transactions performed by the [machine] learning agent along with an additional set of responses made by the re-configured transaction monitoring system, wherein the additional set of responses includes at least alert statuses of detection by the adjusted scenario that uses the second set of thresholds;
generating an updated metric that represents the effectiveness of the re-configured transaction monitoring system for resisting transactions that attempt to evade the adjusted scenario that uses the second set of thresholds, wherein the updated metric is based on the additional sequence of test transactions and additional set of responses[[]]; and
including the updated metric in the visualization (see ¶[0084]-¶[0086]).
Although Zoldi substantially teaches the invention of Claim 7, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
As to Claim 8, Zoldi teaches the computer-implemented method of claim 1, further comprising:
accepting an input that adjusts the amount for transfer by the [machine] learning agent;
performing an additional sequence of test transactions to transfer the adjusted amount, wherein the [machine] learning agent selects the additional sequence of test transactions to cumulatively transfer the adjusted amount without detection by the scenario;
recording the additional sequence of test transactions performed by the [machine] learning agent along with an additional set of responses made by the transaction monitoring system;
generating an adjusted metric that represents the effectiveness of the transaction monitoring system for resisting transactions to transfer the adjusted amount, wherein the adjusted metric is based on the additional set of test transactions and the additional set of responses; and including the adjusted metric in the visualization (see ¶[0084]-¶[0086]).
Although Zoldi substantially teaches the invention of Claim 8, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Claim 9 is the system for performing the method of Claim 1 and is rejected under the same reasoning as Claim 1.
As to Claim 10, Zoldi teaches the computing system of claim 9, wherein the instructions to generate the time-based metric further cause the computing system to:
count an amount of time taken by the [machine] learning agent to transfer an amount to a goal account, and
count a number of intermediate accounts used by the [machine] learning agent to transfer the amount to the goal account, wherein the metric measures overall system strength as a tuple of the amount of time and the number of intermediate accounts; and
wherein the instructions to generate the visualization of the time-based metric further cause the computing system to: include the amount of time and the number of intermediate accounts in the visualization (see ¶[0033], and ¶[0075]-¶[0078]).
Although Zoldi substantially teaches the invention of Claim 10, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Claim 11 is the system for performing the method of Claim 3 and is rejected under the same reasoning as Claim 3.
As to Claim 12, Zoldi teaches the computing system of claim 9, wherein the instructions to generate the metric further causes the computing system to determine an amount of time taken by the [machine] learning agent to complete an episode of transactions, wherein the metric is the amount of time to complete the episode; and
wherein the instructions to generate the visualization of the metric for display further cause the computing system to include the amount of time in the visualization (see ¶[0065]-¶[0066]).
Although Zoldi substantially teaches the invention of Claim 12, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Claim 13 is the system for performing the method of Claim 7 and is rejected under the same reasoning as Claim 7.
As to Claim 14, Zoldi teaches the computing system of claim 9, wherein the instructions for generating the time-based metric further cause the computing system to determine one or more of: an amount of time taken by the [machine] learning agent to transfer an amount to a destination account, a number of intermediate accounts used by the [machine] learning agent to transfer the amount to the destination account, a relative strength of the rule among multiple rules, a number of cumulative alerts triggered over a given time period, a portion of the amount that is transferred to the destination before an alert is first triggered, or an amount of time taken by the [machine] learning agent to complete an episode of transactions (see ¶[0006], and ¶[0055]).
Although Zoldi substantially teaches the invention of Claim 14, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not in any individual element or function but in the combination itself; that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. The simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Claim 15 is the non-transitory computer-readable medium having stored thereon computer-executable instructions that, when executed by a processor accessing memory of a computer, cause the computer to perform the method of Claim 1, and is rejected under the same reasoning as Claim 1.
As to Claim 16, Zoldi teaches the non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer to:
train the [machine] learning agent to select the first sequence of test transactions to cumulatively transfer the amount without detection by the scenario, wherein the scenario is configured to apply a first set of thresholds in the first configuration;
accept an input that re-configures the transaction monitoring system from the first configuration to the second configuration by adjusting the scenario of the system from the first set of thresholds to a second set of thresholds; and
re-train the [machine] learning agent to select the second sequence of test transactions to cumulatively transfer the amount without detection by the scenario, wherein the scenario is re-configured to apply the second set of thresholds in the second configuration;
wherein the first metric represents the effectiveness of the transaction monitoring system when the scenario is configured to apply the first set of thresholds in the first configuration, and the second metric represents the effectiveness of the transaction monitoring system when the scenario is re-configured to apply the second set of thresholds in the second configuration (see ¶[0084]-¶[0086]).
Although Zoldi substantially teaches the invention of Claim 16, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. Thus, the simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Claim 17 recites a non-transitory computer-readable medium having stored thereon computer-executable instructions that, when executed by a processor accessing memory of a computer, cause the computer to perform the functions of the system of Claim 14, and is rejected under the same reasoning as Claim 14.
As to Claim 18, Zoldi teaches the non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer to:
in the first configuration, train the [machine] learning agent to select the first sequence of test transactions to cumulatively transfer the amount without detection by the scenario;
in the second configuration, train the [machine] learning agent to select the second sequence of test transactions to cumulatively transfer the amount without regard to detection by the scenario;
wherein the first metric represents the effectiveness of the transaction monitoring system against transactions selected to avoid detection by the scenario in the first configuration, and the second metric represents the effectiveness of the transaction monitoring system against naive selection of transactions in the second configuration (see ¶[0065]-¶[0066]).
Although Zoldi substantially teaches the invention of Claim 18, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. Thus, the simple substitution of one known element for another to produce a predictable result renders the claim obvious.
As to Claim 19, Zoldi teaches the non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer to:
identify a source, destination, amount, and order for test transactions in the first sequence of test transactions and the second sequence of test transactions; and
generate, for display in the graphical user interface, a visualization of a first graph of the first sequence of test transactions and a second graph of the second sequence of test transactions, wherein the graphs show the source, destination, amount, and order of the test transactions (see ¶[0025] and ¶[0065]-¶[0066]).
As to Claim 20, Zoldi teaches the non-transitory computer-readable medium of claim 15, wherein the instructions further cause the computer to train the [machine] learning agent to select the first sequence of test transactions to cumulatively transfer the amount to a goal account without detection by the scenario, wherein the first sequence of test transactions are recorded during the training (see ¶[0065]-¶[0066]).
Although Zoldi substantially teaches the invention of Claim 20, it does not explicitly teach that the machine learning agent is a reinforcement learning agent. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of the reinforcement learning agent of Kala for the machine learning agent of Zoldi. Thus, the simple substitution of one known element for another to produce a predictable result renders the claim obvious.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE S KANG whose telephone number is (571) 270-3611. The examiner can normally be reached Monday through Friday, 10am-2pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Gart, may be reached at (571) 273-3955. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IRENE KANG/
Examiner, Art Unit 3695
12/27/2025
/EDWARD CHANG/Primary Examiner, Art Unit 3696 12/27/2025