Prosecution Insights
Last updated: April 19, 2026
Application No. 17/936,789

MULTI-TENANT SOLVER EXECUTION SERVICE

Non-Final OA (§102, §103)
Filed: Sep 29, 2022
Examiner: YAARY, MICHAEL D
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 2m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87%, above average (872 granted / 1001 resolved; +32.1% vs TC avg)
Interview Lift: +8.0% (moderate), over resolved cases with an interview
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 18
Total Applications: 1019 (career history, across all art units)
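The headline figures above follow from simple arithmetic on the quoted counts. A minimal sketch, assuming the dashboard divides grants by resolved cases and applies the interview lift additively (function names are hypothetical, not the tool's code):

```python
# Recompute the examiner stats quoted above: 872 granted out of 1001
# resolved cases, with an additive +8.0% interview lift. The additive
# model and the rounding are assumptions about how the tool works.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate, as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Grant probability after an additive interview lift, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate(872, 1001)
print(round(base))                       # 87
print(round(with_interview(base, 8.0)))  # 95
```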

Statute-Specific Performance

§101: 24.5% (-15.5% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1001 resolved cases.
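As a sanity check on the per-statute figures, subtracting each "vs TC avg" delta from the examiner's rate should recover the Tech Center average; on the numbers above, every statute implies the same 40.0% baseline. A hypothetical recomputation, not the tool's code:

```python
# Back out the implied Tech Center average from each statute's examiner
# rate and its "vs TC avg" delta, using the figures quoted above.
examiner_rate = {"101": 24.5, "102": 21.6, "103": 33.9, "112": 9.0}
delta_vs_tc   = {"101": -15.5, "102": -18.4, "103": -6.1, "112": -31.0}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies a 40.0% TC average
```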

Office Action

Rejections under §102 and §103
DETAILED ACTION

1. Claims 1-20 are pending in the application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

4. Claim(s) 1, 4-6, and 19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cheng et al. (hereafter Cheng) (US Pub. 2021/0224665). Cheng was cited in the IDS filed 01/30/2024.

5.
As to claims 1, 6, and 19, Cheng discloses a method, comprising: performing, by a solver execution service implemented by one or more computing devices (abstract, dynamically scheduling machine learning inference jobs): receiving configuration data for an execution of an optimization solver to determine a solution to an optimization problem, wherein the optimization problem is stored as a model at a first storage location ([0007], receiving or determining, with at least one processor, a plurality of performance profiles associated with a plurality of system resources, each performance profile being associated with a machine learning model; receiving, with at least one processor, a request for system resources for an inference job associated with the machine learning model); provisioning a compute resource to perform at least a portion of the execution, wherein the compute resource is configured according to the configuration data to implement an execution environment for the optimization solver ([0104], determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job); executing an instance of the optimization solver in the execution environment to process the model and determine the solution to the optimization problem (assigning, with at least one processor, the system resource to the inference job for processing the inference job); and writing the solution to a second storage location ([0007] and [0107], receiving result data).

6. As to claim 4, Cheng discloses wherein: the solver execution service is a multitenant service that performs a plurality of solver executions for a plurality of clients in parallel; and individual ones of the solver executions are performed in isolated execution environments ([0007]).

7.
As to claim 5, Cheng discloses wherein the solver execution service provisions a distributed execution environment for the execution that includes a plurality of compute resource instances, and the optimization solver is executed on individual ones of the compute resource instances to solve different portions of the optimization problem ([0084]-[0085]).

Claim Rejections - 35 USC § 103

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. Claim(s) 2, 3, and 7-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Kim (EP 3933585). Kim was cited in the IDS filed 01/30/2024.

11.
As to claims 2 and 7, Cheng discloses the solver execution service is implemented by an infrastructure provider network ([0073], environment 100 includes transaction processing network 101, which may include merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or issuer system 110, user device 112, and/or communication network 114).

12. Cheng does not disclose the compute resource is provisioned by a serverless compute service implemented by the infrastructure provider network; and the compute resource is a virtual machine instance or a container instance configured with software to execute the optimization solver. However, Kim discloses the compute resource is provisioned by a serverless compute service implemented by the infrastructure provider network; and the compute resource is a virtual machine instance or a container instance configured with software to execute the optimization solver ([0017], artificial intelligence service providing method based on a serverless platform including identifying a container in which an artificial intelligence model is to be loaded).

13. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Cheng with a serverless compute service implemented by the infrastructure provider network, as taught by Kim, for the benefit of avoiding low execution speed and high cost (Kim [0012]).

14. As to claim 3, the combination of Cheng and Kim discloses wherein the configuration data specifies one or more of: a number of processors or processor cores of a virtual machine to use for the execution; a type of processor of the virtual machine; an amount of memory of the virtual machine; a first storage location of the model in the storage service; and a second storage location in the storage service to write the solution (Kim [0017]).

15.
As to claim 8, the combination of Cheng and Kim discloses selecting the compute resource based at least in part on one or more properties of the model (Cheng [0007]).

16. As to claim 9, the combination of Cheng and Kim discloses the first storage location is a client storage location allocated to a client of the solver execution service; the second storage location is the same client storage location; the model is stored as a first object in the client storage location; and the solution is stored as a second object in the client storage location (Cheng [0074]).

17. Claim(s) 10-18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Stojanovic et al. (hereafter Stojanovic) (US Pub. 2010/0161289).

18. As to claims 10 and 20, Cheng does not disclose the solver execution service implements an application programming interface (API) configured to receive client requests; the configuration data is received via the API in a first request; the provisioning of the compute resource, the executing of the optimization solver, and the writing of the solution are performed as part of a first solver job initiated based upon the first request; and the solver execution service returns, in accordance with the API, a response indicating a job identifier of the first solver job. However, Stojanovic discloses the solver execution service implements an application programming interface (API) configured to receive client requests ([0020] and [0078], API); the configuration data is received via the API in a first request; the provisioning of the compute resource, the executing of the optimization solver, and the writing of the solution are performed as part of a first solver job initiated based upon the first request; and the solver execution service returns, in accordance with the API, a response indicating a job identifier of the first solver job ([0021] and [0089]-[0090]).

19.
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Cheng by implementing an API as taught by Stojanovic, for the benefit of providing a number of solver-related services for creating a model, analyzing a model, selecting a well-suited solver based on analysis of the model, decomposing a model into multiple sub-models and providing the sub-models to multiple solvers as parallel threads, and data binding, as well as other services (Stojanovic [0008]).

20. As to claim 11, the combination of Cheng and Stojanovic discloses the solver execution service: receiving, via the API, a second request specifying a trigger for executing a second solver job; and initiating execution of the second solver job in response to a detection that the trigger is satisfied (Stojanovic [0021] and [0089]-[0090]).

21. As to claim 12, the combination of Cheng and Stojanovic discloses the solver execution service: storing a plurality of triggers for a plurality of solver jobs, including (a) a first trigger that specifies a schedule for initiating an associated solver job, and (b) a second trigger that specifies to initiate another solver job when a model associated with the other solver job is changed (Stojanovic figs. 4, 6, and 7, selected solver and solver scheduler).

22. As to claim 13, the combination of Cheng and Stojanovic discloses the solver execution service: monitoring executions of a plurality of solver jobs and tracking status information about the solver jobs in a job management database; and responsive to a second request received via the API, returning status information about one or more of the solver jobs in the job management database (Stojanovic figs. 4, 6, and 7).

23.
As to claim 14, the combination of Cheng and Stojanovic discloses the solver execution service: responsive to a second request received via the API, stopping a second solver job in the solver execution service before completion of the second solver job (Stojanovic [0021] and [0089]-[0090]).

24. As to claim 15, the combination of Cheng and Stojanovic discloses the solver execution service: determining in the configuration data a resource tag for the first solver job; and tagging the compute resource with the resource tag, wherein the resource tag is used to associate the first solver job to events generated by the first solver job (Stojanovic [0079]-[0081]).

25. As to claim 16, the combination of Cheng and Stojanovic discloses the solver execution service: logging analytics data about a plurality of solver jobs including solver parameters, resource parameters, and performance data associated with individual ones of the solver jobs; and using the analytics data to generate recommended configuration data for another solver job (Stojanovic [0021] and [0081]).

26. As to claim 17, the combination of Cheng and Stojanovic discloses the solver execution service: tracking usage data indicating usage of the solver execution service by a client account; determining, based at least in part on the usage data, that the client account has exceeded a usage limit, and in response: throttling a next solver job associated with the client account (Stojanovic [0021] and [0081]).

27. As to claim 18, the combination of Cheng and Stojanovic discloses the solver execution service executes the optimization solver under a license; and the method further comprises assessing a licensing fee to a client account based at least in part on the license.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL D YAARY whose telephone number is (571)270-1249. The examiner can normally be reached Mon-Fri 9-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571)272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL D. YAARY/
Primary Examiner, Art Unit 2151
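For readers mapping the rejection back to the claims: the claim 1 flow that the examiner reads onto Cheng (receive configuration data, provision a configured compute resource, execute the solver on a stored model, write the solution to a second storage location) can be sketched as follows. All names and the dict-backed "storage" are hypothetical illustrations of the claim language, not the applicant's or Cheng's implementation.

```python
# Hypothetical sketch of the claim 1 method: configuration data names a
# model's storage location and an output location; a compute resource is
# provisioned per that configuration; the solver runs and the solution
# is written to the second location.
from dataclasses import dataclass

@dataclass
class SolverConfig:
    model_location: str      # first storage location (input model)
    solution_location: str   # second storage location (output)
    vcpus: int               # resource shape for the execution environment
    memory_mb: int

def solve(model: dict, env: dict) -> dict:
    # Stand-in for the optimization solver instance.
    return {"status": "optimal", "objective": sum(model.get("coeffs", []))}

def run_solver_job(config: SolverConfig, storage: dict) -> None:
    model = storage[config.model_location]                        # read the stored model
    env = {"vcpus": config.vcpus, "memory_mb": config.memory_mb}  # "provisioned" environment
    solution = solve(model, env)                                  # execute the solver
    storage[config.solution_location] = solution                  # write the solution

storage = {"models/problem-1": {"coeffs": [1.0, 2.0, 3.0]}}
run_solver_job(SolverConfig("models/problem-1", "solutions/problem-1", 4, 8192), storage)
print(storage["solutions/problem-1"])  # {'status': 'optimal', 'objective': 6.0}
```

The sketch makes the anticipation question concrete: Cheng's cited paragraphs map inference-job scheduling onto each of these four steps, which is where an applicant would probe for differences.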

Prosecution Timeline

Sep 29, 2022
Application Filed
Jan 29, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591537: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591411: SYSTEM AND METHOD TO ACCELERATE GRAPH FEATURE EXTRACTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585434: COMPUTING DEVICE AND METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585430: FLOATING-POINT CONVERSION WITH DENORMALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585725: NON-RECTANGULAR MATRIX COMPUTATIONS AND DATA PATTERN PROCESSING USING TENSOR CORES (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 95% (+8.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 1001 resolved cases by this examiner. Grant probability derived from career allow rate.
