Prosecution Insights
Last updated: April 18, 2026
Application No. 18/357,549

DYNAMIC PROCESSING OF TRANSACTIONS BASED ON PREDICTED COMPUTATION COSTS

Status: Non-Final OA (§103)
Filed: Jul 24, 2023
Examiner: SWIFT, CHARLES M
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: PayPal, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% — above average (706 granted / 872 resolved; +26.0% vs TC avg)
Interview Lift: strong, +22.3% across resolved cases with interview
Avg Prosecution: 3y 2m typical timeline (52 applications currently pending)
Total Applications: 924 across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 872 resolved cases
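As a quick consistency check on the figures above (pure arithmetic on the reported numbers; variable names are illustrative), note that every statute-specific delta implies the same Tech Center baseline:

```python
# Sanity-check the examiner statistics reported above.
granted, resolved = 706, 872

career_allow_rate = granted / resolved      # 0.8096... -> reported as 81%
tc_avg_allow_rate = 0.81 - 0.260            # implied by "+26.0% vs TC avg"
assert round(career_allow_rate * 100) == 81
assert round(tc_avg_allow_rate * 100) == 55

# Statute-specific rates minus their reported deltas give the implied TC averages.
statute_rates = {"101": 10.0, "103": 55.7, "102": 17.0, "112": 6.1}
deltas = {"101": -30.0, "103": 15.7, "102": -23.0, "112": -33.9}
implied_tc_avg = {s: round(statute_rates[s] - deltas[s], 1) for s in statute_rates}
print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four deltas point at a single 40.0% baseline, which suggests the deltas were computed against one aggregate Tech Center rate rather than per-statute averages.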

Office Action

§103
DETAILED ACTION

This office action is in response to the application filed on 7/24/2023. Claims 1–20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–20 are rejected under 35 U.S.C. 103 as being unpatentable over Chaturvedi (US 20210406896), in view of Cherkasova et al. (US 20080221941, hereinafter Cherkasova), and further in view of Wang et al., "iBatch: Saving Ethereum Fees via Secure and Cost-Effective Batching of Smart-Contract Invocations", ESEC/FSE '21, August 23–28, 2021, Athens, Greece, ACM, pages 566–577 (hereinafter Wang).

As per claim 1, Chaturvedi discloses: A system, comprising: a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system (Chaturvedi figure 8) to perform operations comprising: receiving, from a user device, a request for processing an electronic transaction; (Chaturvedi [0114]: "the system may receive, through an application programming interface from a remote server, a first transaction request to process a transaction in an interaction between the remote server and a user device".)
predicting, using a machine learning model, an estimated frequency of future transactions to be processed by the system that have at least one common attribute with the electronic transaction; (Chaturvedi [0079]: "training datasets 122 may include merchant device behavior data used for training the machine learning-trained classifier 121 for identifying any transactional patterns between the merchant server 140 and the communication device 150, identifying any anomalies in the transaction frequencies, and/or processing future transactions by the transaction periodicity forecast module 120 or another transaction processing entity, where transactions may be processed by the machine learning-trained classifier 121 to identify different types of transaction periodicities (e.g., periodic, aperiodic, combination of periodic and aperiodic) based on transaction patterns involving the communication device 150, the merchant server 140 and/or the issuer host device 170 and predict a transaction periodicity that indicates the frequency at which certain transactions occur between the merchant server 140 and the issuer host device 170".)

in response to determining that the estimated frequency is above a threshold, perform an action for processing the electronic transaction; (Chaturvedi [0124]: "the system may determine, using the transaction processing server with the machine learning-trained classifier, whether the remote server invokes a number of recurrent transaction requests that exceeds a predetermined threshold based on the transactional history.
The system also may add, using a merchant whitelist engine of the transaction processing server, the remote server to a whitelist indicating a number of remote servers that invoke recurrent transactions when the remote server is determined to invoke the number of recurrent transaction requests that exceeds the predetermined threshold.")

Chaturvedi did not explicitly disclose: determining, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction; wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, wherein the action reduces the estimated computation cost for processing the electronic transaction; and processing the electronic transaction based on using the one or more computer nodes to perform the action.

However, Cherkasova teaches: determining, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction; (Cherkasova [0056]: "resource cost calculator 103 receives the determined workload profile 108, and in block 204, the resource cost calculator 103 determines a corresponding resource cost 105 for the transactions 120 in the workload profile 108.") wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action.
(Cherkasova [0038]: "by analyzing a representative workload 101 and the corresponding resource costs 105 and client behavior characteristics 121 (e.g., think time 304) of the computing system 115 for the representative workload 101 (or some portion thereof, such as those transactions included in workload profile 108), capacity analyzer 110 can determine the capacity of computing system 115, such as determining how many clients computing system 115 can support according to a desired quality of service target 113 if the such clients act consistent with those represented by representative workload 111. Such a capacity analysis can be output as analysis 112, and used by a planner to determine future planning as to computing resources to be added to computing system 115 to enable the service provider to support a growing workload (e.g., a growing number of clients to support).")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Cherkasova into that of Chaturvedi in order to determine, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction, wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action. Chaturvedi [0064] teaches that the cost of a transaction is part of the transaction information for that transaction. One of ordinary skill can easily see that the cost of the transaction can be calculated instead, without deviating from the general teaching of Chaturvedi.
Furthermore, capacity planning is commonly performed in the field to improve the overall processing efficiency of a system; Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.

Wang teaches: wherein the action reduces the estimated computation cost for processing the electronic transaction; (Wang page 571, right column – page 572, left column, section 4.1: "The degree of amortizing the cost by iBatch is dependent on the number and type of invocations put in a batch… Batching all invocations that arrive in a time window, say W seconds. In practice, the larger W it is, the more invocations will end up in a batch and hence the lower Gas each invocation is amortized…. Only batch when there are more than X candidate invocations in a batch time window. The intuition here is that if there are too few invocations, the degree of cost amortization may be too low and can be offset by the batching overhead to result in negative cost saving.")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Wang into that of Chaturvedi and Cherkasova so that the action reduces the estimated computation cost for processing the electronic transaction. Chaturvedi [0124] teaches forecasting the number of recurrent transaction invocations and adding a remote server to a whitelist if the number is greater than a threshold. One of ordinary skill can easily see that other forms of optimization, such as batching recurrent transaction invocations in order to save gas (resource) cost, can easily be applied here without deviating from the general teaching of Chaturvedi. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.
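The batching policy Wang describes (only batch when more than X candidate invocations arrive within a W-second window, since too few invocations make the amortized saving negative) can be sketched as follows. This is a toy illustration of the cited policy only; the class and parameter names are hypothetical and iBatch itself operates at the smart-contract layer:

```python
import time

class BatchWindow:
    """Collect invocations for up to `window_s` seconds, then flush them as one
    batch only if more than `min_batch` arrived; otherwise submit individually,
    since batching overhead would outweigh the amortized per-invocation saving."""

    def __init__(self, window_s=10.0, min_batch=3):
        self.window_s = window_s
        self.min_batch = min_batch
        self.pending = []
        self.window_start = None

    def submit(self, invocation, now=None):
        now = time.monotonic() if now is None else now
        if self.window_start is None:
            self.window_start = now
        self.pending.append(invocation)
        if now - self.window_start >= self.window_s:
            return self.flush()
        return []

    def flush(self):
        batch, self.pending, self.window_start = self.pending, [], None
        if len(batch) > self.min_batch:
            return [("batch", batch)]              # amortize fixed cost over the batch
        return [("single", inv) for inv in batch]  # too few: batching not worth it

w = BatchWindow(window_s=10, min_batch=3)
for inv, t in [("a", 0), ("b", 5), ("c", 9)]:
    w.submit(inv, now=t)
print(w.submit("d", now=10))  # [('batch', ['a', 'b', 'c', 'd'])]
```

A larger `window_s` corresponds to Wang's larger W (more invocations per batch, lower amortized cost per invocation), while `min_batch` plays the role of X.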
As per claim 2, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 1, wherein the electronic transaction is a first transaction, and wherein the operations further comprise: determining that the estimated frequency is above a second threshold, wherein the action comprises suspending the processing of the first transaction; subsequent to the suspending the processing of the first transaction, receiving a second request for processing a second transaction from a second user device; determining that the second transaction shares the at least one common attribute with the first transaction; configuring the one or more computer nodes to perform a batch processing based on the first transaction and the second transaction; and performing the batch processing using the one or more computer nodes based on the first transaction and the second transaction. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch.)

As per claim 3, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 1, wherein the operations further comprise: determining that the estimated frequency is below a second threshold, wherein the action comprises storing processed data associated with a processing of the electronic transaction in a cache memory of the system. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch; Chaturvedi [0027]: cache element.)
As per claim 4, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 3, wherein the electronic transaction is a first transaction, and wherein the operations further comprise: receiving a second request for processing a second transaction from a second user device; determining that the second transaction shares the at least one common attribute with the first transaction; configuring the one or more computer nodes to use the processed data associated with the first transaction to process the second transaction; and processing, using the one or more computer nodes, the second transaction based on the processed data retrieved from the cache memory. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch.)

As per claim 5, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 4, wherein the processed data is obtained based on the one or more computer nodes (i) retrieving first external data from one or more external data sources based on first attributes associated with the first transaction and (ii) processing the first external data, and wherein the configuring the one or more computer nodes to use the processed data to process the second transaction comprises configuring the one or more computer nodes to abort retrieving second external data from the one or more external data sources based on second attributes associated with the second transaction. (Wang page 571, right column – page 572, left column, section 4.1: external calls.)
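Claims 3–5 describe caching processed data keyed by a shared transaction attribute, so that a later transaction with the same attribute reuses the cached result and aborts the external-data retrieval. A minimal sketch of that flow (all function and field names here are hypothetical, not drawn from the application or the cited references):

```python
def process_transaction(txn, cache, fetch_external, compute):
    """Process `txn`, reusing cached processed data when an earlier
    transaction shared the same attribute key (the claims 3-5 flow)."""
    key = txn["common_attribute"]
    if key in cache:
        # Claim 5: abort the external retrieval and reuse cached processed data.
        return cache[key]
    raw = fetch_external(txn)   # retrieve external data for this transaction
    processed = compute(raw)    # process the retrieved external data
    cache[key] = processed      # claim 3: store processed data in cache memory
    return processed

cache, calls = {}, []
fetch = lambda t: calls.append(t["id"]) or {"data": t["id"]}
compute = lambda raw: f"processed:{raw['data']}"

first = process_transaction({"id": 1, "common_attribute": "merchant-A"}, cache, fetch, compute)
second = process_transaction({"id": 2, "common_attribute": "merchant-A"}, cache, fetch, compute)
print(first, second, calls)  # processed:1 processed:1 [1]
```

The single entry in `calls` shows the second transaction never hit the external data source, which is the computation-cost saving the claims attribute to the cache.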
As per claim 6, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 4, wherein the operations further comprise: associating a flag with the at least one common attribute, wherein the determining that the second transaction shares the at least one common attribute with the first transaction is based on the flag. (Chaturvedi [0080]: "classifiers for the data may be designated (e.g., "recurrent transaction") and/or the data sets may be annotated or labeled with particular transactions flagged as periodic (or recurrent).")

As per claim 7, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 3, wherein the operations further comprise: detecting a trigger associated with the electronic transaction; and in response to detecting the trigger, removing the processed data in the common cache layer. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch; Chaturvedi [0027]: cache element.)

As per claim 8, the combination of Chaturvedi, Cherkasova and Wang further teaches: The system of claim 7, wherein the trigger is related to at least one of an actual frequency of transactions sharing the at least one common attribute with the transaction, a data type associated with the processed data, a volume of cached data stored in the cache memory, or a priority of the processed data. (Wang page 571, right column – page 572, left column, section 4.1: "The degree of amortizing the cost by iBatch is dependent on the number and type of invocations put in a batch… Batching all invocations that arrive in a time window, say W seconds. In practice, the larger W it is, the more invocations will end up in a batch and hence the lower Gas each invocation is amortized…. Only batch when there are more than X candidate invocations in a batch time window.
The intuition here is that if there are too few invocations, the degree of cost amortization may be too low and can be offset by the batching overhead to result in negative cost saving.")

As per claim 9, Chaturvedi discloses: A method comprising: receiving, by a computer system and from a user device, a request for processing an electronic transaction; (Chaturvedi [0114]: "the system may receive, through an application programming interface from a remote server, a first transaction request to process a transaction in an interaction between the remote server and a user device".)

predicting, using a machine learning model, an estimated frequency of future transactions to be processed by the computer system that have at least one common attribute with the electronic transaction; (Chaturvedi [0079]: "training datasets 122 may include merchant device behavior data used for training the machine learning-trained classifier 121 for identifying any transactional patterns between the merchant server 140 and the communication device 150, identifying any anomalies in the transaction frequencies, and/or processing future transactions by the transaction periodicity forecast module 120 or another transaction processing entity, where transactions may be processed by the machine learning-trained classifier 121 to identify different types of transaction periodicities (e.g., periodic, aperiodic, combination of periodic and aperiodic) based on transaction patterns involving the communication device 150, the merchant server 140 and/or the issuer host device 170 and predict a transaction periodicity that indicates the frequency at which certain transactions occur between the merchant server 140 and the issuer host device 170".)
in response to determining that the estimated frequency is above a first threshold, perform an action for processing the electronic transaction, (Chaturvedi [0124]: "the system may determine, using the transaction processing server with the machine learning-trained classifier, whether the remote server invokes a number of recurrent transaction requests that exceeds a predetermined threshold based on the transactional history. The system also may add, using a merchant whitelist engine of the transaction processing server, the remote server to a whitelist indicating a number of remote servers that invoke recurrent transactions when the remote server is determined to invoke the number of recurrent transaction requests that exceeds the predetermined threshold.")

Chaturvedi did not explicitly disclose: wherein the performance of the action comprises, configuring, by the computer system, one or more computer nodes to perform the action for processing the electronic transaction, wherein the action improves a computer resource usage efficiency for processing the electronic transaction; and processing, by the computer system, the electronic transaction based on using the one or more computer nodes to perform the action.

However, Cherkasova teaches: wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action.
(Cherkasova [0038]: "by analyzing a representative workload 101 and the corresponding resource costs 105 and client behavior characteristics 121 (e.g., think time 304) of the computing system 115 for the representative workload 101 (or some portion thereof, such as those transactions included in workload profile 108), capacity analyzer 110 can determine the capacity of computing system 115, such as determining how many clients computing system 115 can support according to a desired quality of service target 113 if the such clients act consistent with those represented by representative workload 111. Such a capacity analysis can be output as analysis 112, and used by a planner to determine future planning as to computing resources to be added to computing system 115 to enable the service provider to support a growing workload (e.g., a growing number of clients to support).")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Cherkasova into that of Chaturvedi so that the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action. Chaturvedi [0064] teaches that the cost of a transaction is part of the transaction information for that transaction. One of ordinary skill can easily see that the cost of the transaction can be calculated instead, without deviating from the general teaching of Chaturvedi. Furthermore, capacity planning is commonly performed in the field to improve the overall processing efficiency of a system; Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.
Wang teaches: wherein the action improves a computer resource usage efficiency for processing the electronic transaction; (Wang page 571, right column – page 572, left column, section 4.1: "The degree of amortizing the cost by iBatch is dependent on the number and type of invocations put in a batch… Batching all invocations that arrive in a time window, say W seconds. In practice, the larger W it is, the more invocations will end up in a batch and hence the lower Gas each invocation is amortized…. Only batch when there are more than X candidate invocations in a batch time window. The intuition here is that if there are too few invocations, the degree of cost amortization may be too low and can be offset by the batching overhead to result in negative cost saving.")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Wang into that of Chaturvedi and Cherkasova so that the action improves a computer resource usage efficiency for processing the electronic transaction. Chaturvedi [0124] teaches forecasting the number of recurrent transaction invocations and adding a remote server to a whitelist if the number is greater than a threshold. One of ordinary skill can easily see that other forms of optimization, such as batching recurrent transaction invocations in order to save gas (resource) cost, can easily be applied here without deviating from the general teaching of Chaturvedi. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.
As per claim 10, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 9, further comprising: selecting, from a plurality of different model sets, a first model set for processing the transaction based on the attributes associated with the electronic transaction; and configuring the one or more computer nodes to process the electronic transaction using the first model set. (Cherkasova [0038].)

As per claim 11, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 10, further comprising: determining that a risk associated with the transaction is below a threshold, wherein the first model set is selected from the plurality of different model sets based on the risk associated with the transaction being below the threshold, and wherein the first model set requires less computation resources than a second model set in the plurality of different model set. (Cherkasova [0028]: similar workload.)

As per claim 12, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 9, further comprising: prior to receiving the request, monitoring computer resource usages associated with processing a plurality of transactions; and training the machine learning model using the monitored computer resource usages. (Cherkasova [0070]: monitoring window and training.)

As per claim 13, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 9, wherein the action comprises suspending the processing of the electronic transaction. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch.)
As per claim 14, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 13, wherein the electronic transaction is a first transaction, and wherein the method further comprises: subsequent to the suspending the processing of the first transaction, receiving a second request for processing a second transaction from a second user device; determining that the second transaction shares the at least one common attribute with the first transaction; and configuring the one or more computer nodes to perform a batch processing based on the first transaction and the second transaction. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until the number of invocations that have arrived exceeds a predetermined threshold prior to executing the batch.)

As per claim 15, the combination of Chaturvedi, Cherkasova and Wang further teaches: The method of claim 9, wherein the action comprises storing processed data associated with a processing of the electronic transaction in a cache memory of the computer system. (Chaturvedi [0027]: cache element.)

As per claim 16, Chaturvedi discloses: A non-transitory machine-readable medium (Chaturvedi [0113]) having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising: receiving, from a user device, a request for processing an electronic payment transaction; (Chaturvedi [0114]: "the system may receive, through an application programming interface from a remote server, a first transaction request to process a transaction in an interaction between the remote server and a user device".)
predicting, using a machine learning model, an estimated frequency of future transactions to be processed by a service provider associated with the machine that have at least one common attribute with the electronic transaction; (Chaturvedi [0079]: "training datasets 122 may include merchant device behavior data used for training the machine learning-trained classifier 121 for identifying any transactional patterns between the merchant server 140 and the communication device 150, identifying any anomalies in the transaction frequencies, and/or processing future transactions by the transaction periodicity forecast module 120 or another transaction processing entity, where transactions may be processed by the machine learning-trained classifier 121 to identify different types of transaction periodicities (e.g., periodic, aperiodic, combination of periodic and aperiodic) based on transaction patterns involving the communication device 150, the merchant server 140 and/or the issuer host device 170 and predict a transaction periodicity that indicates the frequency at which certain transactions occur between the merchant server 140 and the issuer host device 170".)

in response to determining that the estimated computation resource usage is above a threshold, configuring one or more computer nodes to perform an action for processing the electronic transaction based on the estimated frequency of future transactions, (Chaturvedi [0124]: "the system may determine, using the transaction processing server with the machine learning-trained classifier, whether the remote server invokes a number of recurrent transaction requests that exceeds a predetermined threshold based on the transactional history.
The system also may add, using a merchant whitelist engine of the transaction processing server, the remote server to a whitelist indicating a number of remote servers that invoke recurrent transactions when the remote server is determined to invoke the number of recurrent transaction requests that exceeds the predetermined threshold.")

Chaturvedi did not explicitly disclose: determining, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction; wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, wherein the action reduces the estimated computation cost for processing the electronic transaction; and processing the electronic transaction based on using the one or more computer nodes to perform the action.

However, Cherkasova teaches: determining, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction; (Cherkasova [0056]: "resource cost calculator 103 receives the determined workload profile 108, and in block 204, the resource cost calculator 103 determines a corresponding resource cost 105 for the transactions 120 in the workload profile 108.") wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action.
(Cherkasova [0038]: "by analyzing a representative workload 101 and the corresponding resource costs 105 and client behavior characteristics 121 (e.g., think time 304) of the computing system 115 for the representative workload 101 (or some portion thereof, such as those transactions included in workload profile 108), capacity analyzer 110 can determine the capacity of computing system 115, such as determining how many clients computing system 115 can support according to a desired quality of service target 113 if the such clients act consistent with those represented by representative workload 111. Such a capacity analysis can be output as analysis 112, and used by a planner to determine future planning as to computing resources to be added to computing system 115 to enable the service provider to support a growing workload (e.g., a growing number of clients to support).")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Cherkasova into that of Chaturvedi in order to determine, based on at least one of (i) attributes associated with the electronic transaction or (ii) attributes associated with the user device, an estimated computation cost for processing the transaction, wherein the performance of the action comprises configuring one or more compute nodes to perform the action for processing the electronic transaction, and processing the electronic transaction based on using the one or more computer nodes to perform the action. Chaturvedi [0064] teaches that the cost of a transaction is part of the transaction information for that transaction. One of ordinary skill can easily see that the cost of the transaction can be calculated instead, without deviating from the general teaching of Chaturvedi.
Furthermore, capacity planning is commonly performed in the field to improve the overall processing efficiency of a system; Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.

Wang teaches: wherein the action reduces the estimated computation cost for processing the electronic transaction; (Wang page 571, right column – page 572, left column, section 4.1: "The degree of amortizing the cost by iBatch is dependent on the number and type of invocations put in a batch… Batching all invocations that arrive in a time window, say W seconds. In practice, the larger W it is, the more invocations will end up in a batch and hence the lower Gas each invocation is amortized…. Only batch when there are more than X candidate invocations in a batch time window. The intuition here is that if there are too few invocations, the degree of cost amortization may be too low and can be offset by the batching overhead to result in negative cost saving.")

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Wang into that of Chaturvedi and Cherkasova so that the action reduces the estimated computation cost for processing the electronic transaction. Chaturvedi [0124] teaches forecasting the number of recurrent transaction invocations and adding a remote server to a whitelist if the number is greater than a threshold. One of ordinary skill can easily see that other forms of optimization, such as batching recurrent transaction invocations in order to save gas (resource) cost, can easily be applied here without deviating from the general teaching of Chaturvedi. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claims are therefore rejected under 35 U.S.C. § 103.
As per claim 17, the combination of Chaturvedi, Cherkasova and Wang further teaches: The non-transitory machine-readable medium of claim 16, wherein the action comprises suspending the processing of the electronic transaction. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until a predetermined number of invocations, greater than a determined threshold, has arrived prior to executing the batch.)

As per claim 18, the combination of Chaturvedi, Cherkasova and Wang further teaches: The non-transitory machine-readable medium of claim 17, wherein the electronic transaction is a first transaction, and wherein the operations further comprise: subsequent to the suspending the processing of the first transaction, receiving a second request for processing a second transaction from a second user device; determining that the second transaction shares the at least one common attribute with the first transaction; configuring the one or more computer nodes to perform a batch processing based on the first transaction and the second transaction; and performing the batch processing using the one or more computer nodes. (Wang page 571, right column – page 572, left column, section 4.1: batching involves waiting until a predetermined number of invocations, greater than a determined threshold, has arrived prior to executing the batch.)

As per claim 19, the combination of Chaturvedi, Cherkasova and Wang further teaches: The non-transitory machine-readable medium of claim 16, wherein the action comprises storing processed data associated with a processing of the electronic transaction in a cache memory of the service provider. (Chaturvedi [0027]: cache element.)
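The caching arrangement recited in claims 19 and 20 — store the processed data for a first transaction, and when a later transaction shares a common attribute, abort the external-data retrieval and reuse the cached result — might be sketched as below. The attribute key (merchant plus currency), the cache shape, and the `fetch_external` function are invented for illustration and do not appear in the claims or the cited references:

```python
def process_transaction(tx: dict, cache: dict, fetch_external) -> dict:
    """Process a transaction, reusing cached processed data when a prior
    transaction shared the same attribute.

    `fetch_external` stands in for retrieval of external data from one
    or more external data sources; it is only called on a cache miss.
    """
    key = (tx["merchant"], tx["currency"])  # the shared "common attribute"
    if key in cache:
        # Later transaction: skip the external retrieval and reuse the
        # processed data stored for the first transaction.
        return cache[key]
    external = fetch_external(tx)           # first transaction: full retrieval
    processed = {"tx_id": tx["id"], "risk": external["risk"]}
    cache[key] = processed
    return processed

# Example: the second transaction shares (merchant, currency) with the
# first, so the external fetch runs only once.
calls = []
def fake_fetch(tx):
    calls.append(tx["id"])
    return {"risk": "low"}

cache = {}
first = process_transaction(
    {"id": 1, "merchant": "m1", "currency": "USD"}, cache, fake_fetch)
second = process_transaction(
    {"id": 2, "merchant": "m1", "currency": "USD"}, cache, fake_fetch)
```

Here the second call never reaches `fake_fetch`; it returns the first transaction's processed data, which is the reuse the examiner maps to Wang's discussion of external calls.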
As per claim 20, the combination of Chaturvedi, Cherkasova and Wang further teaches: The non-transitory machine-readable medium of claim 19, wherein the electronic transaction is a first transaction, wherein the processing the first transaction comprises retrieving first external data from one or more external data sources based on first attributes associated with the first transaction, and wherein the operations further comprise: receiving a second request for processing a second transaction from a second user device; determining that the second transaction shares the at least one common attribute with the first transaction; and configuring the one or more computer nodes to (i) abort retrieving second external data from the one or more external data sources based on second attributes associated with the second transaction and (ii) use the processed data stored in the cache memory and associated with the first transaction to process the second transaction. (Wang page 571, right column – page 572, left column, section 4.1: external calls.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mohammed et al (US 20240273462) teaches “Historical data associated with operation of a plurality of computing entities is received. Inventory information is tracked based on the historical data, including discovering when new computing entities are added to the plurality of computing entities. Transaction information is tracked based on the historical data. Transaction information comprises information associated with transactions of the plurality of computing entities and with a volume of transactions. Cost information associated with the transactions of each computing entity, is tracked. Utilization information, comprising information relating to utilization of an infrastructure of each respective computing entity, is also tracked.
A database is built comprising at least one of inventory, transaction, and cost information. An output is generated providing a report of information on one or more computing entities in the plurality of computing entities. The report of information is based on information contained in the database of information.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT, whose telephone number is (571) 270-7756. The examiner can normally be reached Monday – Friday, 9:30 AM – 7:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES M SWIFT/
Primary Examiner, Art Unit 2196

Prosecution Timeline

Jul 24, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §103
Mar 16, 2026
Examiner Interview Summary
Mar 16, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585499
SYSTEMS AND METHODS FOR MICROSERVICES BASED FUNCTIONALITIES
2y 5m to grant Granted Mar 24, 2026
Patent 12566635
SYSTEMS AND METHODS FOR DYNAMIC ALLOCATION OF COMPUTE RESOURCES VIA A MACHINE LEARNING-INFORMED FEEDBACK SEQUENCE
2y 5m to grant Granted Mar 03, 2026
Patent 12561183
PARALLEL DATA PROCESSING IN EMBEDDED SYSTEMS
2y 5m to grant Granted Feb 24, 2026
Patent 12554529
DESIGN OPERATION EXECUTION FOR CONNECTION SERVICE INTEGRATION
2y 5m to grant Granted Feb 17, 2026
Patent 12547443
METHOD AND SYSTEM FOR AUTOMATICALLY PROVIDING A PROCESS COMPLETION INFORMATION OF AN APPLICATION PROCESS
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+22.3%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 872 resolved cases by this examiner. Grant probability derived from career allow rate.
