Prosecution Insights
Last updated: April 19, 2026
Application No. 17/811,339

QUANTUM JOB SUBMISSION AND OPTIMIZATION FOR END-TO-END ALGORITHMS

Status: Non-Final OA (§103, §112)
Filed: Jul 08, 2022
Examiner: XU, ZUJIA
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (114 granted / 169 resolved), +12.5% vs TC avg (above average)
Interview Lift: +81.5% (resolved cases with vs. without interview; strong)
Typical Timeline: 3y 6m avg prosecution; 33 applications currently pending
Career History: 202 total applications, across all art units

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      16.0%   -24.0%
§103      46.2%   +6.2%
§102       2.0%   -38.0%
§112      31.0%   -9.0%

Based on career data from 169 resolved cases; Tech Center averages are estimates.
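As a quick arithmetic check on the career figures above (a minimal sketch; which cases count as "resolved" follows the analytics tool's own definition and is not computed here):

```python
granted, resolved = 114, 169     # counts reported in the card above
rate = 100 * granted / resolved  # career allow rate
print(f"{rate:.1f}%")            # 67.5%, shown rounded as 68% in the summary
```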

Office Action

Rejections: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the Request for Continued Examination and the Applicant Amendment and Arguments filed on 17 November 2025. Claims 1-20 are pending in this application.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 November 2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1 and 11 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification fails to disclose how to apply "a machine learning model trained on quantum job metadata … to determine priority order based on the predicted runtime characteristics and determined grace period." More specifically, claim 1 (lines 7-10) and claim 11 (lines 9-12) recite "wherein prioritizing the quantum jobs comprises applying a machine learning model trained on quantum job metadata to predict runtime characteristics of the quantum jobs and to determine priority order based on the predicted runtime characteristics and determined grace period."

Paragraph [0031] of the specification discloses "quantum job metadata 224 such that the prioritization engine 206, which may be part of the orchestration engine 202, can evaluate the quantum jobs in the job queue 210 and reprioritize the quantum jobs in the job queue 210. In one example, the ML 204 may be used to predict runtime characteristics of each of the quantum jobs 212, 214, and 216 in the job queue 210 based on features such as number of shots, depth, number of qubits, entanglement, gates, or the like or combinations thereof." Paragraph [0032] of the specification discloses "The prioritization engine 206 may order or prioritize the quantum jobs in the job queue 210 based on the runtime characteristics and/or user intents in some example embodiments. For example, user intents may include budget, execution deadline, accuracy, confidence, cost, user-defined priority, or the like. Using user intents, the characteristics of the quantum jobs in the job queue (including future quantum jobs), wait times (e.g., grace periods), and the like may be used to prioritize the quantum jobs in the job queue 21."

This embodiment relates to the prioritization engine determining the priority order, whereas the limitation in claims 1 and 11 relates to applying a machine learning model … to determine priority order based on the predicted runtime characteristics and determined grace period.
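For context only, the disputed limitation can be sketched as follows. This is a minimal, hypothetical illustration, not the applicant's disclosed implementation: the metadata features (shots, depth, qubits) come from the quoted paragraph [0031], but the stand-in model, its coefficients, and the way the grace period enters the ordering are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class QuantumJob:
    name: str
    shots: int   # number of shots (a feature named in spec [0031])
    depth: int   # circuit depth
    qubits: int  # number of qubits

def predict_runtime(job: QuantumJob) -> float:
    # Hypothetical stand-in for the trained ML model: a fixed linear
    # scoring of the metadata features (coefficients are invented).
    return 0.001 * job.shots * job.depth + 0.05 * job.qubits

def prioritize(jobs: list[QuantumJob], grace_period: float) -> list[QuantumJob]:
    # Jobs predicted to finish within the grace period run first;
    # within each group, shorter predicted runtime means higher priority.
    return sorted(jobs, key=lambda j: (predict_runtime(j) > grace_period,
                                       predict_runtime(j)))

jobs = [QuantumJob("vqe-iter", shots=4000, depth=30, qubits=20),
        QuantumJob("bell-test", shots=100, depth=2, qubits=2),
        QuantumJob("qaoa", shots=2000, depth=50, qubits=12)]
order = [j.name for j in prioritize(jobs, grace_period=60.0)]
print(order)  # ['bell-test', 'qaoa', 'vqe-iter']
```

The sketch only makes concrete what the claim language requires in combination: a model consuming job metadata to predict runtime, and an ordering that depends on both the prediction and the grace period.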
Thus, the specification fails to disclose how to apply "a machine learning model trained on quantum job metadata … to determine priority order based on the predicted runtime characteristics and determined grace period." Claims 2-10 and 12-20 depend from claims 1 and 11 and do not overcome the deficiencies thereof; therefore, they are rejected for the same reasons as claims 1 and 11 above.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi et al. (US Pub. 2024/0378085 A1) in view of HAAH (US Pub. 2022/0414509 A1), and further in view of Bishop, III et al. (US Pub. 2022/0398129 A1), Burleson et al. (US Pub. 2019/0213509 A1) and Vasileiadis et al. (US Patent 11,663,051 B2). Ravi, HAAH and Burleson were cited in the previous Office Action.

As per claim 1, Ravi teaches the invention substantially as claimed, including:

A method, comprising: placing quantum jobs in a job queue, wherein the quantum jobs are associated with applications (Ravi, Fig. 3, 120 job queue, 122T to 122P quantum jobs; [0048] lines 1-3, Each individual request 140 and their associated umbrella job 310 may generate one or more jobs 122 that are added, by the QaO server 110, to the job queue 120; [0049] lines 8-12, Requests 140 requiring multiple jobs 122 are referred to herein as "complex requests." For example, some quantum applications are iterative in nature, requiring multiple jobs 122 to be performed as the algorithm approaches a solution; also see [0029] lines 1-28, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs, and thus may target execution on premium quantum computing devices 132. Some job requests 140 may include classical programs, and thus may target execution on premium classical computing devices 134);

wherein each of the applications includes a computing job and a quantum job (Ravi, Fig. 1, 140 requests; [0029] lines 1-28, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs, and thus may target execution on premium quantum computing devices 132. Some job requests 140 may include classical programs, and thus may target execution on premium classical computing devices 134 … With requests 140 that are classically based ("classically-based requests," e.g., requesting execution on one of the premium classical computing devices 134));

prioritizing the quantum jobs in the job queue based on a determined wait time associated with execution of a computer job of one of the applications [not ready for execution] (Ravi, Fig. 1, 140 requests (which include the computer portion of one of the applications) and jobs 122 placed in the queue 120, 134 premium classical computing device; Fig. 3, request 140, and jobs 122 in queue 120; [0029] lines 1-28, as quoted above; [0056] lines 1-27, the scheduling engine 114 may track and maintain an estimated time to execution ("ETE") for the various jobs 122 on the queue 120 … the queuing time may be broken down among the different possible optimizations (e.g., based on heuristics or some analysis of the job 122, the circuits, or the device 132) … the scheduling engine 114 may promote (as prioritizing) a different independent job of the user ahead of another job (e.g., if an ongoing optimization task is currently running for the overtaken job, if the overtaken job is not ready for execution); [0021] These optimizations are performed while the jobs are queued up awaiting execution. In an example embodiment, the QIP system provides job scheduling services that maximize execution fidelity at low system load, minimizes wait times at high system load, and otherwise provides a balanced approach that accounts for users' quality of service ("QoS") terms (e.g., maximum wait times) while accounting for the effects of QC device recalibration and optimizing calibration schedules for improved fidelity and lower wait times). [Examiner's note: the jobs being prioritized include quantum jobs, and the prioritization is based on a determined wait time for another job (the overtaken job) that is not ready for execution, where that other job may include either the computer portion or the quantum portion of one of the applications; because the different independent jobs (whether quantum jobs or computer-portion jobs) are prioritized in this way, Ravi teaches prioritizing the quantum jobs in the job queue based at least on execution of a computer portion of one of the applications that is not ready for execution.]

a corresponding quantum job for execution in a quantum processing unit (Ravi, [0029] lines 1-28, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs, and thus may target execution on premium quantum computing devices 132); and

executing, during the determined wait time of a first application, a highest priority quantum job associated with a different application from the job queue during the determined wait time of the computer job of the one application (Ravi, Fig. 3, 202 compute job from job queue 120 to 132 premium quantum computing device for execution; [0067] lines 6-10, by ordering the terms/jobs 122 in a priority order, which can be garnered from understanding the quantum problem at hand, earlier jobs 122 can be made to be more likely to contribute to forming the appropriate solution; [0091] lines 14-23, Each of these jobs 404 represents a link to one of the jobs 122 currently on the associated job queue (or "physical job queue") 120. The virtual queue 402 is distributed along the machine's actual physical job queue(s) 120 (e.g., based on traditional priority schemes like fairshare, based on hardware targeting, or the like). The physical job queue 120 itself can be agnostic to the existence of the user virtual queues 402. Virtual-queue based prioritization algorithms at the physical layer can also be implemented (as the highest priority quantum job from the job queue being executed); also see [0056] If the waiting time is about to end, the newest saved snapshot can be used and all pending or ongoing optimizations in the pipeline can be terminated. In some embodiments (e.g., with virtual queue management), the scheduling engine 114 may promote a different independent job of the user ahead of another job (e.g., if an ongoing optimization task is currently running for the overtaken job, if the overtaken job is not ready for execution (i.e., a determined wait time, still waiting)); [0008] queued time amounts to wasted time. Quantum jobs which are submitted to a quantum machine sit idle in the machine's queue until they reach the head of the queue; [0077] multi-threading, jobs from two different processes or applications (e.g., sets of jobs which may be independent of one another)).

Ravi fails to specifically teach that the wait time is a grace period, wherein the grace period is a time required by the computer job to generate an output that serves as input for a corresponding quantum job.
However, HAAH teaches that the wait time is a grace period, wherein the grace period is a time required by the computer job to generate an output that serves as input for a corresponding quantum job (HAAH, [0014] lines 5-11, a quantum processor of a quantum computing system waits for classical feedback regarding a set of Clifford gate operations. To execute a given quantum algorithm, the quantum processor waits for tens to hundreds of these instances of classical feedback and each instance of classical feedback typically takes more time than an elementary quantum operation; also see [0042] lines 9-11, Each instance of classical feedback takes a non-trivial amount of time to perform, prolonging overall execution of the overall magic state distillation process (as the grace period is a time required by the computer job to generate a required output, i.e., the wait time for the classical feedback)).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi with HAAH because HAAH's teaching of reducing the wait time between the quantum process and the classical process would have provided Ravi's system with the advantage and capability to reduce the number of instances of classical feedback to a single instance, improving the waiting time (see HAAH, [0014] "reduce the number of instances of classical feedback involved to a single instance of classical feedback.").

Ravi and HAAH fail to specifically teach wherein prioritizing the quantum jobs comprises applying a machine learning model trained on quantum job metadata to predict runtime characteristics of the quantum jobs and to determine priority order based on the predicted runtime characteristics and determined grace period.
However, Bishop teaches wherein prioritizing the quantum jobs comprises applying a machine learning model trained on quantum job metadata to predict runtime characteristics of the quantum jobs and to determine priority order based on the predicted runtime characteristics (Bishop, Fig. 1, 116 application prioritization system, 140 feedback-based ML model, 124, 126, 162, 142 application priority, application tasks, 112a-c; [0025] lines 8-11, monitoring usage of the computing infrastructure 102, identifying usage trends (as job metadata), and predicting the infrastructure demand 162 based on the trends (as predicting runtime characteristics; note: quantum jobs were taught by Ravi); [0027] lines 1-15, The application prioritization system 116 determines, by applying a feedback-based ML model 140 to at least a portion of the application data 124, the query 132, the computing task rules 126, and/or the infrastructure demand 162, application priorities 142. The portion of the application data 124 to which the feedback-based ML model 140 is applied may not be pre-defined (e.g., by a user or administrator) … the feedback-based ML model 140 generally employs a combination of one or more machine learning models and linear regression in an iterative fashion to determine appropriate application prioritizations 142 for generating a response 146 to the received query 132. For example, the feedback-based ML model 140 may be applied to the application data 124, the query 132, and the computing task rules 126 to iteratively determine factors and corresponding weights for the first computing application 112a and the second computing application 112b; Abstract, determines, using a feedback-based machine learning model, a first priority of the first computing application and a second priority of the second computing application and an explanation of the first and second priorities).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi and HAAH with Bishop because Bishop's teaching of utilizing an ML model to determine priority order based on predicted runtime characteristics would have provided Ravi and HAAH's system with the advantage and capability to easily determine the importance of the different jobs and rank them by importance level, improving system efficiency and performance.

Although Ravi, HAAH and Bishop teach the grace period and prioritizing the quantum jobs in the job queue, Ravi, HAAH and Bishop fail to specifically teach that the prioritizing is based on the determined grace period associated with execution, that determining the priority order is also based on the determined grace period, and that the executing occurs during the determined grace period. However, Burleson teaches prioritizing based on a determined grace period associated with execution and executing during the determined grace period (Burleson, [0142] lines 1-17, task scheduling 904, goals 906, and/or congestion avoidance 908 are second priority variables. For example, variables associated with downtime of users assigned to dependent tasks, such as order-filler and loader downtime. The impact prediction system can prioritize order-filling task assignments and breaks to minimize downtime by associates based on supply chain dependencies. For example, a goals variable can specify that corrective action should be taken to prevent a predicted incident associated with a lift driver's task if an order-filler is waiting for the lift driver to replenish a slot to avoid leaving the order-filler idle while the lift driver's task is delayed. Likewise, the variables can specify that an order-filler's task can be delayed by a predicted incident if a loader can be assigned to other tasks during the delay time (as prioritizing the other tasks based on the grace period, i.e., during the delay/wait time, so that the task is prioritized and processed during the grace period) such that completion of tasks in the dependency chain does not negatively impact laws or scheduling; note: execution of the highest priority quantum job was taught by Ravi), and determining the priority order also based on the determined grace period (Burleson, Fig. 5, 114 impact prediction component (as model), 504 set of weighted impact variables; Fig. 9, set of impact variables 900, 904, 906 and 908 (as including priority order/variables); [0142] lines 1-17, as quoted above).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH and Bishop with Burleson because Burleson's teaching of prioritizing and performing a different task during the delay time (i.e., while another task is still running, to replenish a slot) would have provided Ravi, HAAH and Bishop's system with the advantage and capability to efficiently utilize the different resources for processing different tasks, avoiding resources sitting idle during the waiting time and thereby improving system efficiency and performance.

Ravi, HAAH, Bishop and Burleson fail to specifically teach wherein the grace period is dynamically determined. However, Vasileiadis teaches wherein the grace period is dynamically determined (Vasileiadis, Claim 1, dynamically optimizing, by the scheduler, execution of the workflow containing dependencies between one or more subject nodes and one or more observer nodes by determining a wait time between successive executions of the workflow for the one or more observer nodes based on the execution time information, wherein the wait time is determined by a machine learning operation initiated by the model builder to model each one of the one or more subject nodes and the one or more observer nodes according to the execution time information stored in the bookkeeping ledger, wherein the optimizer uses an output of the machine learning operation to compute the wait time, report the wait time to the scheduler, and convert the workflow from a sequential workflow to a pipelined workflow based on the wait time, and wherein the wait time is indicative of an effective delta time (EDT) representative of a time period necessary for the one or more subject nodes to produce a number of novel output frames of the one or more tasks which are simultaneously able to be processed).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop and Burleson with Vasileiadis because Vasileiadis's teaching of dynamically optimizing the workflow based on the wait time (the wait time is dynamically determined, since the optimization is dynamic) would have provided Ravi, HAAH, Bishop and Burleson's system with the advantage and capability to optimize the workflow to satisfy the workflow dependencies with respect to the wait time, improving processing performance and efficiency.

As per claim 11, it is the non-transitory storage medium claim corresponding to claim 1 above. Therefore, it is rejected for the same reasons as claim 1 above. In addition, Ravi further teaches one or more hardware processors to perform operations (Ravi, Claim 15, lines 1-3, A non-transitory computer-readable medium storing instructions that, when executed by at least one classical processor, causes the at least one classical processor to).

Claims 2-4 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson and Vasileiadis, as applied to claims 1 and 11 respectively above, and further in view of Doi (US Patent 12,032,888 B2), RADHA et al. (US Pub. 2024/0394414 A1) and Cain, III et al. (US Pub. 2016/0357676 A1). Doi, RADHA and Cain were cited in the previous Office Action.

As per claim 2, Ravi, HAAH, Bishop, Burleson and Vasileiadis teach the invention according to claim 1 above. Ravi further teaches placing the quantum jobs in the job queue as placeholders, wherein each of the placeholders includes quantum job metadata, the quantum job metadata including one or more of a quantum circuit (Ravi, [0029] lines 1-5, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs, and thus may target execution on premium quantum computing devices 132; [0010] lines 11-13, adding a first job entry to the first job queue for the request, the first job entry (as placeholder) includes a quantum circuit for a first job (as quantum job metadata including one or more of a quantum circuit)).

Ravi, HAAH, Bishop, Burleson and Vasileiadis fail to specifically teach the quantum job metadata further including number of shots, number of qubits, quantum depth, and application end-to-end execution time. However, Doi teaches the quantum job metadata further including number of shots and number of qubits (Doi, Fig. 6, 602, 604, 606, 608, 610 queue; Col 11, lines 16-23, storage component 202 can store: 2×2 complex matrix parameters 602a, 602b, 602c, 602N of shots 1, 2, 3, N in queue 602; pointer to statevector parameters 604a, 604b, 604c, 604N in queue 604; control mask parameters 606a, 606b, 606c, 606N in queue 606; target qubit parameters 608a, 608b, 608c, 608N in queue 608; and/or second target qubit parameters 610a, 610b, 610c, 610N in queue 610; Col 11, lines 35-41, store parameters in queues (e.g., as illustrated in diagram 600 of FIG. 6); and 4) execute kernel (e.g., batched kernel 306 or batched general quantum gate kernel 406a, 406b, and/or 406c) with N*2.sup.q−1 threads on second processor 204 (e.g., a GPU), where N denotes the total number of shots, q denotes the max number of qubits of quantum circuit).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson and Vasileiadis with Doi because Doi's teaching of quantum job metadata that includes the total number of shots and the max number of qubits of the quantum circuit would have provided their system with the advantage and capability to easily determine the number of shots and number of qubits for the quantum jobs in order to reduce computational cost, improving system performance and efficiency (see Doi, Col 13, lines 54-57, "batched quantum circuits simulation system 102 can thereby reduce computational cost of such a GPU used to perform the simulation of the batched quantum gates and/or further improve the performance and/or efficiency of such a GPU").

Ravi, HAAH, Bishop, Burleson, Vasileiadis and Doi fail to specifically teach the quantum job metadata further including quantum depth and application end-to-end execution time. However, RADHA teaches quantum job metadata further including quantum depth (RADHA, [0183] lines 10-16, The parameters of the quantum component include the quantum computing device ID or device name (which could be identifiable by location or a number, or some other indicator), the number of qubits of the quantum computing device (e.g., 20 qubits), the number of shots (e.g., 100 shots), and the depth of the quantum circuit (e.g., 100) (as quantum job metadata further including quantum depth)).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson, Vasileiadis and Doi with RADHA because RADHA's teaching of quantum component parameters that include the depth of the quantum circuit would have provided their system with the advantage and capability to easily determine the depth of the quantum circuit related to the jobs, improving system performance and efficiency.

Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi and RADHA fail to specifically teach the quantum job metadata further including application end-to-end execution time. However, Cain teaches the quantum job metadata further including application end-to-end execution time (Cain, Fig. 2, 201 OS task queue, task runtime (as application/task end-to-end execution time); [0014] lines 7-9, each entry in the OS task queue 201 includes a task (or software thread) identifier 1 to N. Each of the tasks 1 to N has a respective task runtime value). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined their teaching with Cain because Cain's teaching of a task runtime associated with each entry of the task queue would have provided the combined system with the advantage and capability to easily determine and track the execution time needed for each task, improving system efficiency and performance.

As per claim 3, Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain teach the invention according to claim 2 above.
Ravi further teaches estimating an execution time for each of the quantum jobs (Ravi, [0034] lines 5-7, for execution of quantum jobs 122, the execution engine 118 sends such jobs 122; [0056] lines 1-3, the scheduling engine 114 may track and maintain an estimated time to execution (“ETE”) for the various jobs 122 on the queue 120). As per claim 4, Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain teach the invention according to claim 2 above. Ravi further teaches wherein the quantum jobs include first quantum jobs associated with a first application, wherein a number of the first quantum jobs is estimated by the first application and wherein the first quantum jobs are performed at different times (Ravi, [0049] lines 7-20, requests 140 may require multiple jobs 122 to complete the request 140. Requests 140 requiring multiple jobs 122 are referred to herein as “complex requests.” For example, some quantum applications are iterative in nature, requiring multiple jobs 122 to be performed (as first quantum jobs associated with a first application) as the algorithm approaches a solution (e.g., Variational Quantum Eigensolver (VQE), which uses O (1/∈.sup.2) iterations of depth-O (1) circuits, where ∈ is the target precision). Some quantum applications are composite in nature, requiring multiple jobs 122 to provide a complete result. For example, at each iteration of VQE, an ansatz may be made up of multiple terms, where each term is a quantum circuit. Since each of these terms is a separate quantum circuit, solving them is performed individually. 
Thus, an ansatz of N terms may cause the QaO server 110 to create N jobs 122 on the queue 120 per iteration of the VQE (as quantum jobs include first quantum jobs associated with a first application (i.e., quantum application is iterative in nature, requiring multiple jobs), wherein a number of the first quantum jobs is estimated by the first application and wherein the first quantum jobs are performed at different times (i.e., estimated/determined/created N jobs on the queue per iteration of the VQE) (see specification support [0014] “such as VQE (Variational Quantum Eigensolver) algorithms, the number of quantum jobs may be estimated or predicted”). In addition, HAAH further teaches wherein the first quantum jobs are associated with first computer jobs (HAAH, [0014] lines 5-10, a quantum processor of a quantum computing system waits for classical feedback regarding a set of Clifford gate operations. To execute a given quantum algorithm, the quantum processor waits for tens to hundreds of these instances of classical feedback and each instance of classical feedback typically takes more time than an elementary quantum operation (as quantum jobs are associated with first computer jobs)). As per claims 12-14, they are non-transitory storage medium claims of claims 2-4 respectively above. Therefore, they are rejected for the same reasons as claims 2-4 respectively above. Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain, as applied to claims 4 and 14 respectively above, and further in view of PALOP et al. (US Pub. 2023/0140809 A1). PALOP was cited in the previous Office Action. As per claim 5, Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain teach the invention according to claim 4 above. Ravi teaches first computer jobs (Ravi, Fig. 
1, 140 requests; [0029] lines 1-28, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs, and thus may target execution on premium quantum computing devices 132. Some job requests 140 may include classical programs, and thus may target execution on premium classical computing devices 134…With requests 140 that are classically based (“classically-based requests,” e.g., requesting execution on one of the premium classical computing devices 134)). Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain fail to specifically teach wherein a grace period is determined for each pair of the first quantum jobs. However, PALOP teaches wherein a grace period is determined for each pair of the first quantum jobs (PALOP, [0046] lines 6-7, given time delay between a given pairing/set of tasks; [0047] lines 1-4, predicting time delays (as determining grace period) resulting from contention between tasks). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain with PALOP because PALOP’s teaching of determining/predicting the time delays between pairs of tasks would have provided Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA and Cain’s system with the advantage and capability to allow the system to easily determine the waiting times between the different pairs of tasks in order to predict the effect of contention on any given pairing of tasks given only their characteristic performance profile, which improves system performance and efficiency (see PALOP, [0049] “predict the effect of contention on any given pairing of tasks given only their characteristic performance profile”). 
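PALOP's pairwise contention-delay idea can be illustrated with a short sketch. The delay model here is a hypothetical placeholder, not PALOP's actual predictor, and the profile fields `resource_demand` and `runtime` are illustrative:

```python
from itertools import combinations

def grace_periods(profiles):
    """For each unordered pair of jobs, predict a contention delay
    ("grace period") from the jobs' characteristic performance profiles.
    Illustrative model: the shared resource demand (the smaller of the
    two demands) scaled by the slower job's runtime."""
    periods = {}
    for a, b in combinations(sorted(profiles), 2):
        shared = min(profiles[a]["resource_demand"], profiles[b]["resource_demand"])
        periods[(a, b)] = shared * max(profiles[a]["runtime"], profiles[b]["runtime"])
    return periods
```

A scheduler could then delay the second job of each pair by its grace period before dispatch, keeping the pairwise structure the claim recites.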
As per claim 6, Ravi, HAAH, Bishop, Burleson, Vasileiadis, Doi, RADHA, Cain and PALOP teach the invention according to claim 5 above. Ravi further teaches prioritizing the quantum jobs such that a second quantum job associated with a second application is performed (Ravi, Fig. 1, 140 requests and jobs 122 placed in the queue 120; Fig. 3, request 140, and jobs 122 in queue 120; [0029] lines 1-18, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs…; [0049] lines 8-12, Requests 140 requiring multiple jobs 122 are referred to herein as “complex requests.” For example, some quantum applications are iterative in nature, requiring multiple jobs 122 (as including a second quantum job associated with a second application) to be performed as the algorithm approaches a solution; [0056] lines 1-27, the scheduling engine 114 may track and maintain an estimated time to execution (“ETE”) for the various jobs 122 on the queue 120… the queuing time may be broken down among the different possible optimizations (e.g., based on heuristics or some analysis of the job 122, the circuits, or the device 132)…the scheduling engine 114 may promote (as prioritizing) a different independent job (as including a second quantum job associated with a second application) of the user ahead of another job (e.g., if an ongoing optimization task is currently running for the overtaken job, if the overtaken job is not ready for execution (as prioritizing the quantum jobs such that a second quantum job associated with a second application is performed); see Fig. 3, 202 compute job from job queue 120 to 132 premium quantum computing device for execution)). 
In addition, Burleson teaches when prioritizing and that second quantum job is performed, it is during one of the grace periods (Burleson, [0142] lines 1-17, task scheduling 904, goals 906, and/or congestion avoidance 908 are second priority variables. For example, variables associated with downtime of users assigned to dependent tasks, such as order-filler and loader downtime. The impact prediction system can prioritize order-filling task assignments and breaks to minimize downtime by associates based on supply chain dependencies. For example, a goals variable can specify that corrective action should be taken to prevent a predicted incident associated with a lift driver's task if an order-filler is waiting for the lift driver to replenish a slot to avoid leaving the order-filler idle while the lift driver's task is delayed. Likewise, the variables can specify that an order-filler's task can be delayed by a predicted incident if a loader can be assigned to other tasks during the delay time (as processing during one of the grace periods) such that completion of tasks in the dependency chain do not negatively impact laws, scheduling).

As per claim 15, it is a non-transitory storage medium claim of claim 5 above. Therefore, it is rejected for the same reason as claim 5 above.

As per claim 16, it is a non-transitory storage medium claim of claim 6 above. Therefore, it is rejected for the same reason as claim 6 above.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson and Vasileiadis, as applied to claims 1 and 11 respectively above, and further in view of BALAKRISHNAN et al. (US Pub. 2018/0375784 A1). BALAKRISHNAN was cited in the previous Office Action.

As per claim 7, Ravi, HAAH, Bishop, Burleson and Vasileiadis teach the invention according to claim 1 above. Ravi teaches the quantum jobs queue (Ravi, Fig. 1, 140 requests and jobs 122 placed in the queue 120, Fig. 
3, request 140, and jobs 122 in queue 120; [0029] lines 1-18, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs). Although Ravi, HAAH, Bishop, Burleson and Vasileiadis teach the quantum jobs queue, Ravi, HAAH, Bishop, Burleson and Vasileiadis fail to specifically teach prioritizing the quantum queue using a heuristic function. However, BALAKRISHNAN teaches prioritizing the quantum queue using a heuristic function (BALAKRISHNAN, [0028] lines 23-25, various other techniques, algorithms and heuristics for queue prioritization are contemplated within the present disclosure). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson and Vasileiadis with BALAKRISHNAN because BALAKRISHNAN’s teaching of utilizing a heuristic function for queue prioritization would have provided Ravi, HAAH, Bishop, Burleson and Vasileiadis’s system with the advantage and capability to allow the system to prioritize purging the throttling queue to reduce the risk of running out of system resources in order to minimize the constraints placed on processing requests from the throttling queue, improving system performance (see BALAKRISHNAN [0026] “prioritize purging the throttling queue to reduce the risk of running out of system resources…minimizing the constraints placed on processing requests from the throttling queue can improve system performance”).

As per claim 17, it is a non-transitory storage medium claim of claim 7 above. Therefore, it is rejected for the same reason as claim 7 above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson, Vasileiadis and BALAKRISHNAN, as applied to claims 7 and 17 respectively above, and further in view of Aliferis et al. (US Pub. 
2011/0307437 A1). Aliferis was cited in the previous Office Action.

As per claim 8, Ravi, HAAH, Bishop, Burleson, Vasileiadis and BALAKRISHNAN teach the invention according to claim 7 above. Ravi, HAAH, Bishop, Burleson, Vasileiadis and BALAKRISHNAN fail to specifically teach wherein the heuristic function is one of a greedy search, linear programming, or a heuristic search. However, Aliferis teaches wherein the heuristic function is one of a greedy search, linear programming, or a heuristic search (Aliferis, Claim 1, lines 15-16, apply a user-provided inclusion heuristic function to prioritize variables in the priority queue; [0058] lines 19-22, Greedy Search with backtracking, etc.) as well as a plurality of previously unknown methods by instantiating appropriate components (i.e., cost function and heuristic function); [0108] line 1, Random heuristic search informed by standard heuristic values). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson, Vasileiadis and BALAKRISHNAN with Aliferis because Aliferis’s teaching of a heuristic function that is one of a greedy search and/or heuristic search would have provided Ravi, HAAH, Bishop, Burleson, Vasileiadis and BALAKRISHNAN’s system with the advantage and capability to allow the system to derive more efficient heuristics, which improves system performance and efficiency (see Aliferis, [0110] “derive more efficient heuristics”).

As per claim 18, it is a non-transitory storage medium claim of claim 8 above. Therefore, it is rejected for the same reason as claim 8 above.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson, Vasileiadis, BALAKRISHNAN and Aliferis, as applied to claims 8 and 18 respectively above, and further in view of Nott (US Pub. 2021/0294644 A1). Nott was cited in the previous Office Action. 
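The heuristic-function queue prioritization at issue in claims 7-8 can be sketched as a greedy pop from a heap keyed by a pluggable scoring function. Field names and the example heuristic below are illustrative, not from any of the cited references:

```python
import heapq

def prioritize(jobs, heuristic):
    """Greedily order a job queue using a pluggable heuristic function
    (lower score = higher priority). The enumeration index breaks ties
    so the dict payloads are never compared directly."""
    heap = [(heuristic(job), i, job) for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    ordered = []
    while heap:
        _, _, job = heapq.heappop(heap)
        ordered.append(job)
    return ordered

# Example heuristic: shortest estimated time to execution ("ETE") first.
queue = [{"id": "a", "ete": 30.0}, {"id": "b", "ete": 5.0}, {"id": "c", "ete": 12.0}]
ordered = prioritize(queue, lambda job: job["ete"])  # b, then c, then a
```

Swapping the lambda for a different scoring function (deadline slack, user priority, predicted fidelity) changes the policy without touching the queue mechanics, which is the point of making the heuristic a parameter.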
As per claim 9, Ravi, HAAH, Bishop, Burleson, Vasileiadis, BALAKRISHNAN and Aliferis teach the invention according to claim 8 above. Ravi teaches prioritizing the quantum jobs (Ravi, Fig. 1, 140 requests and jobs 122 placed in the queue 120; Fig. 3, request 140, and jobs 122 in queue 120; [0029] lines 1-18, job requests 140 represent requests for the Cloud Processing System 100 to perform execution of some computational workload. Some job requests 140 may include quantum programs…; [0049] lines 8-12, Requests 140 requiring multiple jobs 122 are referred to herein as “complex requests.” For example, some quantum applications are iterative in nature, requiring multiple jobs 122 to be performed as the algorithm approaches a solution; [0056] lines 1-27, the scheduling engine 114 may track and maintain an estimated time to execution (“ETE”) for the various jobs 122 on the queue 120… the queuing time may be broken down among the different possible optimizations (e.g., based on heuristics or some analysis of the job 122, the circuits, or the device 132)…the scheduling engine 114 may promote (as prioritizing) a different independent job (as including a second quantum job associated with a second application) of the user ahead of another job (e.g., if an ongoing optimization task is currently running for the overtaken job, if the overtaken job is not ready for execution (as prioritizing the quantum jobs); see Fig. 3, 202 compute job from job queue 120 to 132 premium quantum computing device for execution)). Ravi, HAAH, Bishop, Burleson, Vasileiadis, BALAKRISHNAN and Aliferis fail to specifically teach when prioritizing, it is based on at least one of user intents, quantum job metadata, end-to-end application execution time, quantum job runtime characteristics, user-defined priority, or execution deadlines. 
However, Nott teaches when prioritizing, it is based on at least one of user intents, quantum job metadata, end-to-end application execution time, quantum job runtime characteristics, user-defined priority, or execution deadlines (Nott, [0048] lines 6-13, A condition for a process may be a computing device idle mode, a time-of-day, an event pattern, a duration since the last run or execution, or the like. In addition, process 1, process 2, . . . process N may be prioritized based on user-defined priorities (as user-defined priority). A process of process scheduler 302 may be added, as a request or process request, to priority queue 304 for subsequent execution). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson, Vasileiadis, BALAKRISHNAN and Aliferis with Nott because Nott’s teaching of prioritizing based on user-defined priorities would have provided Ravi, HAAH, Bishop, Burleson, Vasileiadis, BALAKRISHNAN and Aliferis’s system with the advantage and capability to allow the user to customize the priorities for the different processes/jobs, which improves the user experience and system efficiency.

As per claim 19, it is a non-transitory storage medium claim of claim 9 above. Therefore, it is rejected for the same reason as claim 9 above.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ravi, HAAH, Bishop, Burleson and Vasileiadis, as applied to claims 1 and 11 respectively above, and further in view of Solomon (US Pub. 2009/0070550 A1). Solomon was cited in the previous Office Action.

As per claim 10, Ravi, HAAH, Bishop, Burleson and Vasileiadis teach the invention according to claim 1 above. 
Ravi further teaches refining the quantum jobs and/or a number of the quantum jobs in the job queue and prioritizing the quantum jobs after refinement (Ravi, [0021] lines 3-8, a cloud-based quantum information processing (“QIP”) system seeks to improve the quality of results for quantum problems at hand, by means of optimizations to the quantum circuit(s) for the specified problem. These optimizations are performed while the jobs are queued up awaiting execution; [0022] lines 2-12, (1) intra-job optimizations and (2) inter-job optimizations. Intra-job optimizations target optimizing a particular quantum circuit (e.g., submitted as a “job” to be executed on a quantum computing device) by improving the quality of the circuit in various ways (e.g., to increase the probability of execution success, fidelity, or the like) after that job is placed on the queue but before execution. Inter-job optimizations target scenarios where a quantum problem requires the execution of multiple quantum circuits (e.g., multiple jobs) and optimizations are performed between job executions; [0067] lines 6-8, ordering the terms/jobs 122 in a priority order, which can be garnered from understanding the quantum problem at hand; [0056] lines 1-27, the scheduling engine 114 may track and maintain an estimated time to execution (“ETE”) for the various jobs 122 on the queue 120… the queuing time may be broken down among the different possible optimizations (e.g., based on heuristics or some analysis of the job 122, the circuits, or the device 132)… the scheduling engine 114 may perform optimizations incrementally more aggressively at each optimization trial. 
For example, after each trial, the optimization engine 116 may capture and save a snapshot of the state….the scheduling engine 114 may promote (as prioritizing) a different independent job of the user ahead of another job (e.g., if an ongoing optimization task is currently running for the overtaken job, if the overtaken job is not ready for execution) (as prioritizing the quantum jobs after refinement, i.e., prioritizing a different independent job (after optimization) ahead of another job (i.e., if an ongoing optimization task is currently running for the overtaken job)); wherein refinement factors include number of quantum circuits (Ravi, [0078] lines 1-15, multiple circuits into a single quantum job can be thought of as one form of achieving multi-threading. The execution time of a single job 122 typically scales linearly with the number of circuits in that job's batch. For example, the more circuits included in the batch, the longer the quantum execution time is needed (e.g., since the jobs in the batch are executed individually one after the other). Thus, one way to control the time between jobs (e.g., the same goal as multi-threading) is via controlling the number of circuits in the job's batch; [0090] lines 4-21, the complexity of classical-quantum algorithms like VQA grows enormously. For example, even a small H2O molecule has nearly 100 quantum circuits/parameters. It is expected that as the complexity of these algorithms grows, considerable resources would be required for both the quantum as well as the classical components. For instance, for a QAOA algorithm, high complexity would mean more qubits and a deeper circuit from a quantum perspective, as well as a more arduous optimization scheme and higher compute requirements to tune the QAOA parameters. In such scenarios, it would be expected that both the classical as well as quantum optimizations will be performed on scarce resources on the cloud. 
Thus, there will be queues to access both the classical as well as quantum resources. In such a scenario there is room for in-queue optimization for both the classical as well as the quantum resources. Both sets of optimizations discussed earlier are suited to this hybrid model (as refinement factors include number of quantum circuits)), number of shots (Ravi, [0022] lines 13-18, inter-job optimizations focus on the ability to add, remove, or modify subsequent jobs based on analysis of the earlier executing jobs. This can improve quality or fidelity of the solution, reduce overall execution time for the problem, or otherwise improve quantum machine throughput; [0024] lines 15-17, each circuit in a job may be rapidly re-executed for a particular number of “shots.”; [0100] lines 3-10, the features of the execution time prediction model include batch size, the number of shots, circuit depth, circuit width, total number of quantum gates, and machine overheads (e.g., size and memory slots required) (as refinement factors include number of shots). The QaO server 110 computes an execution time for each job 122 on a given queue 120 using the above execution time prediction model to determine how long the queuing time is for that particular queue 120), number of qubits (Ravi, [0025] lines 15-20, Even if QCs have the same number of qubits, their qubit error values may differ (e.g., errors in CX gate execution). Thus, the QIP system considers such machine characteristics and their impact on applications by analyzing how different QC characteristics affect application fidelity and schedule jobs to various QCs accordingly; [0036] lines 30-40, Such dynamic characteristics are dynamic because they may evolve over time. Such characteristics of qubits and gates may be recalibrated at some coarse granularity (e.g., once per day) and such calibrations may be non-uniform (e.g., one day's qubit fidelity may be very different from the next day's qubit fidelity). 
Accordingly, targeting of particular quantum computing devices 132 allows the QaO server 110 to target optimizations particular to a recent state of the dynamic characteristics of that targeted quantum computing device (as refinement factors include number of qubits)); and depth (Ravi, [0054] lines 1-13, In-queue compilations and other optimizations may be applicable to both gate-based and pulse-based jobs. Pulse compilations are typically longer than gate compilations and are more susceptible to “staleness,” and thus may particularly benefit from in-queue compilations and optimizations. Further, the scheduling of quantum circuits to particular QCs 132 is also useful to both gate-based and pulse-based jobs as both approaches can benefit from machine selection, such as described in FIG. 5. For example, for gate-based jobs, the QaO server 110 may use considerations such as circuit depth or number of two-qubit gates when determining which particular QC 132 may be best for a particular job 122; [0064] lines 7-8, To solve such problems, VQE uses O(1/ε²) iterations of depth-O(1) circuits; [0022] lines 13-18, inter-job optimizations focus on the ability to add, remove, or modify subsequent jobs based on analysis of the earlier executing jobs. This can improve quality or fidelity of the solution, reduce overall execution time for the problem, or otherwise improve quantum machine throughput; [0100] lines 3-5, the features of the execution time prediction model include batch size, the number of shots, circuit depth, circuit width, total number of quantum gates (as refinement factors include depth)). Although Ravi, HAAH, Bishop, Burleson and Vasileiadis teach prioritizing the quantum jobs after refinement, Ravi, HAAH, Bishop, Burleson and Vasileiadis fail to specifically teach prioritizing is reprioritizing the quantum jobs after refinement. 
However, Solomon teaches reprioritizing the quantum jobs after refinement (Solomon, [0068] lines 3-14, constantly reprioritizes tasks to auto-restructure operational processes in order to optimize task solutions. The system continuously routes multiple tasks to various nodes for the most efficient processing of optimization solutions. Specifically, the system sorts, and resorts, problems to various nodes so as to obtain solutions. The system is constantly satisfying different optimality objectives and reassigning problems to various nodes for problem solving. At the same time, the reconfigurable nodes constantly evolve their hardware configurations in order to optimize these solutions in the most efficient ways available (as reprioritizing the jobs after refinement (i.e., constantly optimizing))). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Ravi, HAAH, Bishop, Burleson and Vasileiadis with Solomon because Solomon’s teaching of constantly reprioritizing tasks and constantly satisfying different optimality objectives (constantly optimizing, after refinement) would have provided Ravi, HAAH, Bishop, Burleson and Vasileiadis’s system with the advantage and capability to allow the system to constantly evolve its hardware configurations in order to optimize these solutions in the most efficient ways available, which improves system performance and efficiency (see Solomon, [0068] “constantly evolve their hardware configurations in order to optimize these solutions in the most efficient ways available”).

As per claim 20, it is a non-transitory storage medium claim of claim 10 above. Therefore, it is rejected for the same reason as claim 10 above. 
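Ravi's execution-time prediction features (batch size, shots, circuit depth, circuit width, gate count) and Solomon's continual reprioritization combine naturally into a short sketch. The linear model and its weights are illustrative assumptions, not taken from either reference:

```python
# Feature set named in Ravi [0100] for the execution time prediction model.
FEATURES = ["batch_size", "shots", "depth", "width", "gates"]

def predict_execution_time(job, weights):
    """Linear execution-time estimate over the named features; in
    practice the weights would be fitted from observed runtimes."""
    return weights.get("overhead", 0.0) + sum(weights[f] * job[f] for f in FEATURES)

def reprioritize(queue, weights):
    """Re-sort the queue shortest-predicted-time-first; calling this
    after each refinement pass (changed shot counts, pruned circuits,
    reduced depth) mirrors continual reprioritization."""
    return sorted(queue, key=lambda job: predict_execution_time(job, weights))
```

Because refinement changes exactly the features the model reads, re-running `reprioritize` after each refinement pass keeps the queue order consistent with the refined jobs.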
Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU whose telephone number is (571) 272-0954. The examiner can normally be reached M-F 9:30-5:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZUJIA XU/
Examiner, Art Unit 2195

Prosecution Timeline

Jul 08, 2022
Application Filed
Mar 01, 2025
Non-Final Rejection — §103, §112
Jun 05, 2025
Response Filed
Sep 03, 2025
Final Rejection — §103, §112
Nov 17, 2025
Applicant Interview (Telephonic)
Nov 17, 2025
Request for Continued Examination
Nov 17, 2025
Examiner Interview Summary
Nov 24, 2025
Response after Non-Final Action
Jan 08, 2026
Non-Final Rejection — §103, §112
Apr 01, 2026
Interview Requested
Apr 07, 2026
Examiner Interview Summary
Apr 07, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602249
Hardware Resource Allocation System for Allocating Resources to Threads
2y 5m to grant Granted Apr 14, 2026
Patent 12541397
THREAD MANAGEMENT
2y 5m to grant Granted Feb 03, 2026
Patent 12504983
SUPERVISORY DEVICE WITH DEPLOYED INDEPENDENT APPLICATION CONTAINERS FOR AUTOMATION CONTROL PROGRAMS
2y 5m to grant Granted Dec 23, 2025
Patent 12498971
COMPUTING TASK SCHEDULING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 16, 2025
Patent 12436805
COMPUTER SYSTEM WITH PROCESSING CIRCUIT THAT WRITES DATA TO BE PROCESSED BY PROGRAM CODE EXECUTED ON PROCESSOR INTO EMBEDDED MEMORY INSIDE PROCESSOR
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+81.5%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
