Prosecution Insights
Last updated: April 19, 2026
Application No. 18/055,319

ADAPTIVE LEARNING FOR QUANTUM CIRCUITS

Final Rejection (§103)

Filed: Nov 14, 2022
Examiner: MAUNI, HUMAIRA ZAHIN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Quantum Computing Inc.
OA Round: 2 (Final)

Grant Probability: 38% (At Risk)
OA Rounds: 3-4
To Grant: 4y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (6 granted / 16 resolved; -17.5% vs TC avg)
Interview Lift: +66.7% (resolved cases with interview)
Avg Prosecution: 4y 6m (39 currently pending)
Total Applications: 55 (across all art units)

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Tech Center averages are estimates • Based on career data from 16 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments filed 11/25/2025 have been entered. Claims 1-20 remain pending in the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15, 16, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon-Akzam et al. (Pub. No.: US 2021/0097422 A1), hereafter Verdon, in view of Elfving et al. (EP 4273760 A1), hereafter Elfving.

Regarding claim 15, Verdon discloses: One or more tangible, non-transitory, machine-readable media storing instructions that, when executed by one or more processors, effectuate operations comprising (Verdon, ¶[0148-0149]).
executing a quantum circuit comprising a plurality of quantum logic gates to determine a first quantum circuit output based on a set of input parameters for an optimization operation (Verdon, Fig. 4, Fig. 6, ¶[0017], ¶[0089] teaches executing a quantum circuit comprising a plurality of quantum logic gates to determine a first output based on a set of input variational parameters for an optimization task), updating a learning rate for at least one subsequent execution of the quantum circuit based on: determining a gradient based on the set of input parameters (Verdon, Fig. 3 and ¶[0128] teaches an iterative process where the learning rate step size is updated based on evaluating different hyperparameters in equation (23) in each iteration as candidate learning rate hyperparameters, by determining an initial gradient for the current loss, i.e. cost output, at the current iteration), providing the quantum circuit with an updated set of input parameters to determine a second quantum circuit output (Verdon, ¶[0128] teaches the cost function and gradient update to be performed iteratively, thus an additional quantum circuit output is computed in the subsequent iteration by providing the quantum circuit with the gradient-updated set of input variational parameters), wherein the updated set of input parameters is determined based on a candidate learning rate parameter, the gradient, and the set of input parameters (Verdon, ¶[0128] and equation (23) teaches an updated set of input variational parameters based on candidate learning rate parameters in equation (23) of the gradient update, the gradient, and the initial input variational parameters), determining a comparison value based on the first quantum circuit output and the second quantum circuit output (Verdon, Fig. 3, ¶[0128] and ¶[0118] teaches a value of the loss function as a comparison value based on the output loss functions of the current and previous iteration as the current quantum circuit output and the additional quantum circuit output), updating the candidate learning rate parameter such that the comparison value satisfies a threshold (Verdon, Fig. 3, ¶[0095] and ¶[0128] teaches updating iteration-dependent learning parameters in equation (23) such that the comparison value satisfies a threshold through convergence), and determining an updated set of input parameters for the at least one subsequent execution of the quantum circuit based on the updated learning rate (Verdon, Fig. 3, Fig. 5, ¶[0064] and ¶[0128] teaches updating input variational parameters based on the updated learning rate step size via gradient descent).

Verdon does not disclose: providing, at a set of classical processors coupled with the quantum circuit, an output of the at least one subsequent execution of the quantum circuit, the output representing a solution of the optimization operation.

Elfving discloses: providing, at a set of classical processors coupled with the quantum circuit, an output of the at least one subsequent execution of the quantum circuit, the output representing a solution of the optimization operation (Elfving, Fig. 7A and 7B, ¶[0226-0230] teaches classical computer 702 obtaining provided outputs of the quantum circuit representing a solution of optimization operations from previous iterations until exit conditions are reached).

Verdon and Elfving are analogous art because they are from the same field of endeavor, quantum computing and gradient update.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon to include providing, at a set of classical processors coupled with the quantum circuit, an output of the at least one subsequent execution of the quantum circuit, the output representing a solution of the optimization operation, based on the teachings of Elfving. One of ordinary skill in the art would have been motivated to make this modification in order to provide better solutions to the optimization problem, as suggested by Elfving (page 14, ¶[0094]).

Regarding claim 16, Verdon, in view of Elfving, discloses the machine-readable media of claim 15 (and thus the rejection of claim 15 is incorporated). Verdon further discloses: wherein the quantum circuit comprises at least one of a rotation operator, a controlled NOT gate, and a phase shift gate (Verdon, ¶[0068] teaches the circuit to comprise unitary rotation operators, a CX gate as a controlled NOT gate, and an S gate as a phase shift gate).

Regarding claim 18, Verdon, in view of Elfving, discloses the machine-readable media of claim 15 (and thus the rejection of claim 15 is incorporated). Verdon further discloses: retrieving a previous learning rate parameter of a previous optimization operation associated with the quantum circuit (Verdon, ¶[0091] and ¶[0119] teaches retrieving values of the parameters obtained from initial input data or a previous iteration as retrieving a previous learning rate parameter of a previous optimization operation associated with the quantum circuit), setting the candidate learning rate parameter to be equal to the previous learning rate parameter (Verdon, ¶[0091] and ¶[0119] teaches generating the current state from parameters obtained from a previous iteration as setting the learning rate parameter to be equal to the previous learning rate parameter).
Regarding claim 19, Verdon, in view of Elfving, discloses the machine-readable media of claim 15 (and thus the rejection of claim 15 is incorporated). Verdon further discloses: determining that the comparison value satisfies the threshold (Verdon, ¶[0118] teaches determining whether the comparison value satisfies the threshold), in response to determining that the comparison value satisfies the threshold, increasing the candidate learning rate parameter, wherein increasing the candidate learning rate parameter updates the comparison value, and wherein the updated comparison value satisfies the threshold (Verdon, ¶[0095], ¶[0128], and ¶[0119] teaches updating an iteration-dependent gradient descent step size such that the comparison value satisfies a threshold through convergence).

Regarding claim 20, Verdon, in view of Elfving, discloses the machine-readable media of claim 15 (and thus the rejection of claim 15 is incorporated). Verdon further discloses: executing the quantum circuit comprises executing the quantum circuit using a quantum processor (Verdon, ¶[0144]), updating the candidate learning rate parameter comprises updating the candidate learning rate parameter using a classical processor (Verdon, Fig. 1 and ¶[0088] teaches updating the candidate learning rate parameter step size by executing a quantum neural network on a classical processor to obtain the current learning rate parameter step size).

Claims 1-2, 4-5, and 6-14 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon-Akzam et al. (Pub. No.: US 2021/0097422 A1), hereafter Verdon, in view of Hauru et al. ("Riemannian optimization of isometric tensor networks"), hereafter Hauru, in further view of Elfving et al. (EP 4273760 A1), hereafter Elfving.

Regarding claim 1, Verdon discloses: A system for adaptively updating a learning rate to increase a rate of convergence for an optimization operation performed by a quantum circuit, the system comprising (Verdon, Fig. 1, Fig. 3, ¶[0093], ¶[0089], and ¶[0128] teaches a system for quantum natural gradient descent which adaptively updates the learning rate, i.e. step size, in equation (23) to increase a rate of convergence for an optimization operation, i.e. an optimization task, performed by the quantum circuit), a set of processors comprising a quantum processor; and memory storing computer program instructions that, when executed by the set of processors, cause the set of processors to effectuate operations comprising (Verdon, Fig. 1, ¶[0072] and ¶[0144]), determining the quantum circuit comprising a plurality of quantum logic gates implementing a corresponding plurality of unitary operators (Verdon, Fig. 4, Fig. 6, ¶[0009] and ¶[0062] teaches unitary operators physically corresponding to a quantum circuit including multiple logic gates), executing the quantum circuit using the quantum processor to determine a current quantum circuit cost function output based on a set of input parameters for the optimization operation (Verdon, Fig. 3, ¶[0144] and ¶[0093-0095] teaches executing a quantum circuit using a quantum processor to determine a loss function, i.e. quantum circuit cost function output, based on a set of initial values of variational parameters as a set of input parameters for the optimization task operation), updating a learning rate for the optimization operation based on evaluating candidate learning rate hyperparameters by: determining a gradient based on the current quantum circuit cost function output (Verdon, Fig. 3 and ¶[0128] teaches an iterative process where the learning rate step size is updated based on evaluating different hyperparameters in equation (23) in each iteration as candidate learning rate hyperparameters, by determining an initial gradient for the current loss, i.e. cost output, at the current iteration), determining a gradient-updated set of input parameters based on the gradient and the set of input parameters (Verdon, Fig. 3 and ¶[0128] teaches updating respective parameters as a gradient-updated set of input variational parameters based on the gradient and the initial input variational parameters), computing an additional quantum circuit cost function output by providing the quantum circuit with the gradient-updated set of input parameters (Verdon, ¶[0128] teaches the cost function and gradient update to be performed iteratively, thus an additional quantum circuit cost function output is computed in the subsequent iteration by providing the quantum circuit with the gradient-updated set of input variational parameters), determining a comparison value based on a candidate learning rate hyperparameter, the current quantum circuit cost function output, and the additional quantum circuit cost function output (Verdon, Fig. 3, ¶[0128] and ¶[0118] teaches a value of the loss function as a comparison value based on candidate learning rate hyperparameters in equation (23), and the loss functions of the current and previous iteration as the current quantum circuit cost function output and the additional quantum circuit cost function output), determining whether the comparison value satisfies the threshold (Verdon, Fig. 3, ¶[0118] teaches determining whether the comparison value satisfies the threshold), in response to a determination that the comparison value does not satisfy the threshold, updating the candidate learning rate hyperparameter and a learning rate step size associated with the candidate learning rate hyperparameter such that the comparison value satisfies the threshold (Verdon, Fig. 3, ¶[0095], ¶[0128], and ¶[0132] teaches updating an iteration-dependent gradient descent step size with the other learning rate hyperparameters in equation (23) such that the comparison value satisfies a threshold through convergence), updating the set of input parameters based on the updated learning rate (Verdon, Fig. 3 and ¶[0128] teaches updating input variational parameters based on the updated learning rate step size in equation (23) via gradient descent), … execution uses respective input parameters determined based on the learning rate being updated from a prior additional execution (Verdon, ¶[0093] and ¶[0128] teaches iteratively performing executions in process 300 using updated parameters based on the learning rate being updated from a prior additional execution).

While Verdon teaches a predetermined threshold (Verdon, ¶[0118]), they do not explicitly teach determining this threshold based on a gradient. Hauru teaches: determining a threshold based on a gradient (Hauru, page 7, equation 11 and paragraph below equation 11, lines 3-5, “Throughout our simulations, we use the linesearch algorithm described in Refs. 40,41, which also takes into account that the descent property of Eq. (11) (also known as the Armijo rule) …” teaches determining a threshold based on the gradient).

Verdon and Hauru are analogous art because they are from the same field of endeavor, quantum computing and gradient update. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon to include determining a threshold based on the gradient, based on the teachings of Hauru. One of ordinary skill in the art would have been motivated to make this modification in order to guarantee convergence, as suggested by Hauru (page 7, paragraph 3, lines 3-4).
While Verdon discloses execution uses respective input parameters determined based on the learning rate being updated from a prior additional execution, they do not explicitly recite performing one or more additional executions of the quantum circuit, a first of the one or more additional executions of the quantum circuit using the updated set of input parameters…, and subsequent to the one or more additional executions, obtaining, at one or more classical processors of the set of processors that are coupled with the quantum circuit, an output of the quantum circuit representing a solution of the optimization operation.

Elfving discloses: performing one or more additional executions of the quantum circuit, a first of the one or more additional executions of the quantum circuit using the updated set of input parameters, wherein each additional execution uses respective input parameters determined based on the learning rate being updated … (Elfving, Fig. 7A and 7B, ¶[0226-0230] teaches performing repetitive, additional executions of quantum circuit 704 using an updated set of input parameters determined based on an updated learning rate), subsequent to the one or more additional executions, obtaining, at one or more classical processors of the set of processors that are coupled with the quantum circuit, an output of the quantum circuit representing a solution of the optimization operation (Elfving, Fig. 7A and 7B, ¶[0226-0230] teaches classical computer 702 obtaining outputs of the quantum circuit representing a solution of optimization operations from previous iterations until exit conditions are reached).

Verdon, Hauru, and Elfving are analogous art because they are from the same field of endeavor, quantum computing and gradient update. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon, in view of Hauru, to include performing one or more additional executions of the quantum circuit, a first of the one or more additional executions of the quantum circuit using the updated set of input parameters…, and subsequent to the one or more additional executions, obtaining, at one or more classical processors of the set of processors that are coupled with the quantum circuit, an output of the quantum circuit representing a solution of the optimization operation, based on the teachings of Elfving. One of ordinary skill in the art would have been motivated to make this modification in order to provide better solutions to the optimization problem, as suggested by Elfving (page 14, ¶[0094]).

Regarding claim 2, Verdon, in view of Hauru, in further view of Elfving, discloses the system of claim 1 (and thus the rejection of claim 1 is incorporated), determining the threshold. Verdon further discloses: determining a set of shifted input parameters by adding a constant value to a parameter of the set of input parameters (Verdon, ¶[0100] teaches a magnitude shift of the input parameter as adding a constant value to a parameter of the set of input parameters), determining the gradient based on the set of shifted input parameters (Verdon, ¶[0100] teaches parameter-shift gradient estimation as determining the gradient based on the set of shifted input parameters).

Regarding claim 4, Verdon, in view of Hauru, in further view of Elfving, discloses the system of claim 1 (and thus the rejection of claim 1 is incorporated), wherein the candidate learning rate hyperparameter is a current learning rate hyperparameter.
Verdon further discloses: obtaining a history of previous learning rate hyperparameters (Verdon, ¶[0091] and ¶[0119] teaches obtaining values of the parameters obtained from initial input data or a previous iteration as obtaining a history of previous learning rate hyperparameters), predicting the current learning rate hyperparameter based on the history of previous learning rate parameters (Verdon, ¶[0091] and ¶[0119] teaches predicting the current learning rate hyperparameter based on the history of previous learning rate parameters).

Regarding claim 5, Verdon, in view of Hauru, in further view of Elfving, discloses the system of claim 4 (and thus the rejection of claim 4 is incorporated). Verdon further discloses: wherein predicting the current learning rate hyperparameter comprises executing a neural network on a classical processor to obtain the current learning rate hyperparameter based on the history of previous learning rate hyperparameters (Verdon, Fig. 1, ¶[0088], and ¶[0119] teaches executing a quantum neural network on a classical processor to obtain the current learning rate hyperparameter based on the history of previous learning rate hyperparameters through loss function convergence).

Regarding claim 6, Verdon discloses: A method comprising: determining a quantum circuit comprising a plurality of quantum logic gates (Verdon, Fig. 4, Fig. 6, ¶[0009] and ¶[0062] teaches determining a quantum circuit comprising quantum logic gates), executing the quantum circuit to determine a first quantum circuit cost function output based on a set of input parameters for an optimization operation (Verdon, Fig. 3, ¶[0144], ¶[0089] and ¶[0093-0095] teaches executing a quantum circuit using a quantum processor to determine a loss function, i.e. quantum circuit cost function output, based on a set of initial values of variational parameters as a set of input parameters for an optimization task), updating a learning rate for the optimization operation based on evaluating candidate learning rate parameters by: determining a gradient based on the set of input parameters (Verdon, Fig. 3 and ¶[0128] teaches an iterative process where the learning rate step size is updated based on evaluating different hyperparameters in equation (23) in each iteration as candidate learning rate hyperparameters, by determining an initial gradient for the current loss, i.e. cost output, at the current iteration), computing a second quantum circuit cost function output by providing the quantum circuit with an updated set of input parameters (Verdon, ¶[0128] teaches the cost function and gradient update to be performed iteratively, thus an additional quantum circuit cost function output is computed in the subsequent iteration by providing the quantum circuit with the gradient-updated set of input variational parameters), wherein the updated set of input parameters is updated based on the gradient and a candidate learning rate parameter (Verdon, Fig. 3 and ¶[0128] teaches updating input parameters based on the gradient and the candidate learning rate parameters of equation (23) at each iteration), determining a comparison value based on the first quantum circuit cost function output and the second quantum circuit cost function output (Verdon, Fig. 3, ¶[0128] and ¶[0118] teaches a value of the loss function as a comparison value based on the loss functions of the current and previous iteration as the current quantum circuit cost function output and the additional quantum circuit cost function output), in response to a determination that the comparison value does not satisfy a threshold, updating the candidate learning rate parameter such that the comparison value satisfies the threshold (Verdon, Fig. 3, ¶[0095] and ¶[0128] teaches updating candidate learning rate parameters in equation (23) at each iteration such that the comparison value satisfies a threshold through convergence), updating the set of input parameters for the plurality of quantum logic gates based on the updated learning rates (Verdon, Fig. 3, and ¶[0128] teaches updating input variational parameters based on the updated learning rate step size), … execution uses respective input parameters based on the learning rate being updated from a prior additional execution (Verdon, ¶[0093] and ¶[0128] teaches iteratively performing executions in process 300 using updated parameters based on the learning rate being updated from a prior additional execution).

While Verdon teaches a predetermined threshold (Verdon, ¶[0118]), they do not explicitly teach determining this threshold based on the gradient. Hauru teaches: determining a threshold based on the gradient (Hauru, page 7, equation 11 and paragraph below equation 11, lines 3-5, “Throughout our simulations, we use the linesearch algorithm described in Refs. 40,41, which also takes into account that the descent property of Eq. (11) (also known as the Armijo rule) …” teaches determining a threshold based on the gradient).

Verdon and Hauru are analogous art because they are from the same field of endeavor, quantum computing and gradient update. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon to include determining a threshold based on the gradient, based on the teachings of Hauru. One of ordinary skill in the art would have been motivated to make this modification in order to guarantee convergence, as suggested by Hauru (page 7, paragraph 3, lines 3-4).
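Several of the rejected claims turn on how the gradient is determined from circuit outputs. The parameter-shift gradient estimation the rejection cites for claim 2 (Verdon ¶[0100]) — shifting an input parameter by a constant value and differencing the resulting outputs — can be illustrated with the generic two-point shift rule for a rotation-generated parameter. This is a sketch under that assumption; `expectation` is a hypothetical stand-in for the circuit's measured output, not code from Verdon.

```python
import numpy as np

def expectation(theta):
    # Stand-in for a circuit expectation value: for a single rotation
    # gate measuring <Z>, the output is cos(theta), whose exact
    # derivative is -sin(theta).
    return np.cos(theta)

def parameter_shift_grad(theta, shift=np.pi / 2):
    # Two-point parameter-shift rule: add/subtract a constant shift to
    # the input parameter and difference the two circuit outputs.
    # Exact (not a finite-difference approximation) for
    # rotation-generated parameters.
    return (expectation(theta + shift) - expectation(theta - shift)) / 2.0

theta = 0.3
# Matches the analytic derivative -sin(theta) to machine precision.
assert np.isclose(parameter_shift_grad(theta), -np.sin(theta))
```

Unlike the small-epsilon finite differences suggested by claim 8's "set of differentials," the shift here is a large constant (π/2), which is why the rule is well suited to noisy circuit outputs.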
While Verdon discloses execution uses respective input parameters based on the learning rate being updated from a prior additional execution, they do not explicitly recite performing one or more additional executions of the quantum circuit using the updated set of input parameters, wherein each additional execution uses respective input parameters based on the learning rate being updated … and subsequent to the one or more additional executions, obtaining, at one or more classical processors of the set of processors that are coupled with the quantum circuit, an output representing a solution of the optimization operation.

Elfving discloses: performing one or more additional executions of the quantum circuit using the updated set of input parameters, wherein each additional execution uses respective input parameters based on the learning rate being updated … (Elfving, Fig. 7A and 7B, ¶[0226-0230] teaches performing repetitive, additional executions of quantum circuit 704 using an updated set of input parameters determined based on an updated learning rate), subsequent to the one or more additional executions, obtaining, at one or more classical processors of the set of processors that are coupled with the quantum circuit, an output representing a solution of the optimization operation (Elfving, Fig. 7A and 7B, ¶[0226-0230] teaches classical computer 702 obtaining outputs of the quantum circuit representing a solution of optimization operations from previous iterations until exit conditions are reached).

Regarding claim 7, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated), updating the set of input parameters. Verdon further discloses: determining a product based on the updated learning rate and the gradient (Verdon, ¶[0128] and equation (23) teaches determining a product based on the updated learning rate parameter and the gradient), updating the set of input parameters based on the product (Verdon, ¶[0128] teaches updating the set of input parameters based on the product).

Regarding claim 8, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated), executing the quantum circuit. Verdon further discloses: providing the quantum circuit with a first set of input parameters to obtain the first quantum circuit cost function output (Verdon, Fig. 3, ¶[0144] and ¶[0093-0095] teaches providing a set of initial values of variational parameters as a set of input parameters to a quantum circuit to obtain a loss function, i.e. quantum circuit cost function output), wherein determining the gradient comprises: determining a set of differentials in a parameter space of the parameters; and determining the gradient based on the set of differentials (Verdon, Fig. 3 and ¶[0110] teaches determining partial derivatives as a set of differentials in a parameter space of the parameters and determining the gradient based on the set of differentials).

Regarding claim 9, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated), executing the quantum circuit. Verdon further discloses: executing the quantum circuit using a quantum processor (Verdon, ¶[0144]).

Regarding claim 10, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated), determining the gradient. Verdon further discloses: wherein determining the gradient comprises determining diagonal terms and off-diagonal terms of a Fubini-Study metric tensor (Verdon, ¶[0046] and ¶[0137] teaches determining diagonal and off-diagonal terms of a Fubini-Study metric tensor).

Regarding claim 11, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated). Verdon further discloses: wherein the quantum circuit is a first quantum circuit (Verdon, ¶[0062] and ¶[0064] teaches a first quantum circuit within respective quantum circuits), wherein determining the gradient comprises executing a second quantum circuit to determine the gradient (Verdon, ¶[0091] teaches executing a second quantum circuit to determine the gradient), wherein the first quantum circuit may be executed independently of the second quantum circuit (Verdon, ¶[0151] teaches parallel processing as independent execution of respective circuits).

Regarding claim 12, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated). While Verdon, in view of Hauru, discloses a quantum circuit and executing a quantum circuit in claim 6, they do not disclose: the quantum circuit comprises a first layer and a second layer in a sequence; and executing the quantum circuit comprises providing, as an input to the second layer, an output of the first layer. Elfving discloses: the quantum circuit comprises a first layer and a second layer in a sequence (Elfving, Fig. 6A-6C and ¶[0297] teaches a quantum circuit comprising a first layer and a second layer in a sequence), executing the quantum circuit comprises providing, as an input to the second layer, an output of the first layer (Elfving, Fig. 6A-6C and ¶[0297] teaches providing, as an input to the second layer, an output of the first layer).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon, in view of Hauru, to include the quantum circuit comprises a first layer and a second layer in a sequence; and executing the quantum circuit comprises providing, as an input to the second layer, an output of the first layer, based on the teachings of Elfving. One of ordinary skill in the art would have been motivated to make this modification in order to provide better solutions to the optimization problem, as suggested by Elfving (¶[0094], lines 1-2). Regarding claim 13, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated). Verdon further discloses: wherein the plurality of quantum logic gates comprises a rotation operator (Verdon, ¶[0083] teaches quantum logic gates to comprise a rotation operator). Regarding claim 14, Verdon, in view of Hauru, in further view of Elfving, discloses the method of claim 6 (and thus the rejection of claim 6 is incorporated). Verdon further discloses: wherein determining the comparison value comprises determining a difference between the first quantum circuit cost function output and the second quantum circuit cost function output (Verdon, Fig. 3 and ¶[0107] teaches subtracting a difference between the first quantum circuit cost function output and the second quantum circuit cost function output). Claims 17 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon-Akzam et al. (Pub. No.: US 2021/0097422 A1), hereafter Verdon, in view of Elfving et al. (EP 4273760 A1), hereafter Elfving, in further view of MCKIERNAN et al. (WO 2020/168158 A1), hereafter MCKIERNAN. Regarding claim 17, Verdon, in view of Elfving, discloses the machine-readable media of claim 15 (and thus the rejection of claim 15 is incorporated). 
Verdon discloses a quantum circuit, but does not disclose that the quantum circuit is configured based on an unconstrained binary optimization function. MCKIERNAN discloses: wherein the quantum circuit is configured based on an unconstrained binary optimization function (MCKIERNAN, ¶[0067], ¶[0037] and ¶[0074] teaches a quantum circuit configured based on an unconstrained binary optimization function). Verdon, Elfving, and MCKIERNAN are analogous art because they are from the same field of endeavor, quantum computing. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon, in view of Elfving, to include wherein the quantum circuit is configured based on an unconstrained binary optimization function, based on the teachings of MCKIERNAN. One of ordinary skill in the art would have been motivated to make this modification in order to improve the speed, efficiency and accuracy with which quantum resources are used to solve optimization problems, as suggested by MCKIERNAN (¶[0019]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon-Akzam et al. (Pub. No.: US 2021/0097422 A1), hereafter Verdon, in view of Hauru et al. ("Riemannian optimization of isometric tensor networks"), hereafter Hauru, in further view of Elfving et al. (EP 4273760 A1), hereafter Elfving, in further view of Youssry et al. ("Efficient online quantum state estimation using a matrix exponentiated gradient method"), hereafter Youssry.

Regarding claim 3, Verdon, in view of Hauru, in further view of Elfving, discloses the system of claim 1 (and thus the rejection of claim 1 is incorporated), including determining the threshold. Verdon further discloses: determining whether the comparison value satisfies the threshold comprises determining whether the comparison value is less than the threshold (Verdon, ¶[0095] teaches determining whether the comparison value is within, i.e. less than, the threshold). While Verdon, in view of Hauru, discloses determining a threshold in claim 1, they do not disclose …determining a power law output based on the gradient. Youssry discloses: determining a power law output based on the gradient (Youssry, page 12, theorem 4 and equation 108 teaches determining a power law output based on the gradient). Verdon, Hauru, Elfving, and Youssry are analogous art because they are from the same field of endeavor, quantum computing and gradient updates. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Verdon, in view of Hauru, in further view of Elfving, to include determining a power law output based on the gradient, based on the teachings of Youssry. One of ordinary skill in the art would have been motivated to make this modification in order to improve the performance, as suggested by Youssry (page 18, paragraph 1, line 4).

Response to Arguments

Applicant's arguments filed 11/25/2025 have been fully considered with regard to the 35 U.S.C. 101 rejection, and they are persuasive. The rejections are withdrawn. Applicant's arguments filed 11/25/2025 have been fully considered with regard to the 35 U.S.C. 102/103 rejection, but they are not persuasive. The applicant asserts on page 12 of the remarks: “Although Verdun discloses these different examples of gradients, Verdun fails to disclose updating a learning rate, including a learning rate step size. A person of ordinary skill in the art would understand the distinctions between gradients and learning rates and would not have taken Verdun's disclosure related to gradients as immediately and obviously applicable to learning rates”.
The examiner respectfully disagrees, as Verdon's use of natural gradient descent in ¶[0128] explicitly recites “updating the respective parameters via gradient descent”, which results in the “iteration dependent” learning rate step size being updated at each subsequent iteration. The applicant further asserts on page 12 of the remarks: “Moreover, Verdun discloses how a gradient is applied (see, e.g., Verdun at [0118]), Verdun fails to disclose any application of the gradients involving ‘evaluating candidate learning rate hyperparameters’”. The examiner respectfully disagrees, as Verdon states “When performing example process 300 or 500 above, the system can optimize the corresponding loss functions by computing gradients and updating the respective parameters via gradient descent”. The iterative update of hyperparameters disclosed in equation (23) of Verdon is directed to “evaluating candidate learning rate hyperparameters”, in order to compute the gradients of process 300 in Fig. 3. Regarding the arguments directed to newly amended limitations that were not previously examined, applicant's arguments are rendered moot. The examiner refers to the rejection under 35 USC § 103 in the current office action for more details.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Stokes et al. (“Quantum Natural Gradient”) teaches updating learning rates and quantum computing. Knight et al. (“Natural Gradient Deep Q-learning”) teaches updating learning rates and quantum computing. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703) 756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MATT ELL, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141
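The §103 dispute in the Response to Arguments turns on whether disclosing gradient descent also teaches an iteration-dependent learning rate, and claim 3 adds a "power law output based on the gradient". As neutral background only (this is not the method of Verdon or Youssry; the function name, schedule, and toy objective are our own), gradient descent with a power-law step-size schedule eta_t = eta0 * (t + 1)^(-alpha) looks like:

```python
import numpy as np

def gradient_descent_power_law(grad, theta0, eta0=0.2, alpha=0.5, steps=200):
    """Gradient descent whose learning rate follows a power law:
    eta_t = eta0 * (t + 1)**(-alpha). The step size is therefore
    iteration-dependent and shrinks as optimization proceeds."""
    theta = np.asarray(theta0, dtype=float)
    for t in range(steps):
        eta_t = eta0 * (t + 1) ** (-alpha)   # power-law schedule
        theta = theta - eta_t * grad(theta)  # standard update rule
    return theta

# Toy objective: f(theta) = ||theta - target||^2, so grad f = 2 (theta - target).
target = np.array([1.0, -2.0])
theta_star = gradient_descent_power_law(lambda th: 2 * (th - target),
                                        theta0=[0.0, 0.0])
print(np.round(theta_star, 3))  # converges toward target
```

The point of contention is visible in the code: the gradient (`grad(theta)`) and the learning rate (`eta_t`) are distinct quantities, and only the latter is the "step size" the applicant argues is not taught.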

Prosecution Timeline

Nov 14, 2022
Application Filed
Aug 21, 2025
Non-Final Rejection — §103
Nov 25, 2025
Response Filed
Feb 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585969
GENERATING CONFIDENCE SCORES FOR MACHINE LEARNING MODEL PREDICTIONS
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
99%
With Interview (+66.7%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
