Prosecution Insights
Last updated: April 19, 2026
Application No. 17/302,402

SYSTEM AND METHOD OF QUANTUM ENHANCED ACCELERATED NEURAL NETWORK TRAINING

Non-Final OA: §102, §103, §112
Filed: May 01, 2021
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Equal1 Labs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Estimated Time to Grant: 4y 8m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: strong, +33.4% in resolved cases with interview
Typical Timeline: 4y 8m average prosecution; 47 currently pending
Career History: 115 total applications across all art units
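The headline figures above can be cross-checked with simple arithmetic. A minimal sketch follows; the exact formulas used by the analytics provider are an assumption (allow rate taken as granted/resolved, and each "vs" delta as a plain difference of percentages), so small discrepancies from display rounding are expected:

```python
# Cross-check of the examiner statistics shown above.
# Assumption: "Career Allow Rate" = granted / resolved, and the
# "vs TC avg" delta = examiner rate - Tech Center average.
granted, resolved = 30, 68

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")

# The displayed -10.9% delta implies a Tech Center average of:
tc_avg = allow_rate - (-10.9)
print(f"Implied TC average: {tc_avg:.1f}%")

# The +33.4% interview lift against a 78% with-interview rate
# implies a without-interview baseline of roughly:
baseline = 78.0 - 33.4
print(f"Implied without-interview rate: {baseline:.1f}%")
```

The computed 44.1% allow rate matches the displayed 44%, which supports the assumed definitions.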

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 68 resolved cases
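The per-statute deltas are internally consistent with a single Tech Center baseline. A quick sketch (assuming, as above, that each delta is simply the examiner's rate minus the TC average) recovers it:

```python
# Statute-specific performance figures and their displayed deltas
# vs the Tech Center average, as listed above.
stats = {
    "101": (16.1, -23.9),
    "103": (56.7, +16.7),
    "102": (12.7, -27.3),
    "112": (12.9, -27.1),
}

# Assumption: delta = examiner rate - TC average, so the implied
# TC average is rate - delta. All four statutes recover ~40.0%.
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC average = {tc_avg}%")
```

All four lines yielding the same ~40.0% baseline suggests the chart uses one overall Tech Center average rather than per-statute averages.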

Office Action

§102 §103 §112
DETAILED ACTION

1. This Office action is in response to the amendment to Application No. 17/302,402 filed on 12/05/2025. Claims 1-25 are presented for examination and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.

Response to Arguments

4. The Examiner is withdrawing the rejections in the previous Office action because Applicant's amendment necessitated the new grounds of rejection presented in this Office action. It is noted that the arguments have been considered but are moot in light of the newly added primary reference.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

5. Claims 3, 14, and 19-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites "said plurality of activation tensors", which lacks antecedent basis because claim 1 only recites "a plurality of activation function tensors". It is not clear which activation tensors "said plurality of activation tensors" refers to. For the purpose of examination, the Examiner has interpreted "said plurality of activation tensors" to mean "said plurality of activation function tensors".

Claim 14 recites "assigned to each activation function". It is unclear which activation function is referred to because claim 10, from which it depends, recites "activation function tensors". For the purpose of examination, the Examiner has interpreted "assigned to each activation function" to be "assigned to each activation function tensor".

Claim 19 recites "wherein each activation function". It is unclear which activation function is referred to because claim 17, from which it depends, recites "activation function tensors". For the purpose of examination, the Examiner has interpreted "wherein each activation function" to be "wherein each activation function tensor".

Claims 20-22, which are not specifically mentioned, are rejected due to their dependency.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6.
Claims 1 and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)).

Regarding claim 1, Verdon teaches a method of quantum enhanced accelerated training of a classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2 (pg. 3, right col., last para.); Specifically, we train classical recurrent neural networks to find approximately optimal parameters within a small number of queries of the cost function for the Quantum Approximate Optimization Algorithm (QAOA) (abstract); The Examiner notes the Applicant's disclosure "implement quantum approximate optimization algorithms", instant specification [0039]), said method comprising:

receiving a plurality of activation function tensors corresponding to features across layers in the classic neural network (θt-2 → RNN, θt-1 → RNN, θt → RNN are received across layers in the RNN, Fig. 2, pg. 3; The Examiner notes θt-2, θt-1, θt are activation function output vectors or tensors, which reads on "activation function outputs vectors or tensors 106 from the features f1 through fn" in Applicant's Fig. 2 (instant specification [0142]));

mapping said plurality of activation function tensors of the classic neural network to energy levels representing a quantum state in a quantum system (θt-1 → QNN, θt → QNN, Fig. 2, pg. 3; The goal of the QAOA (Quantum Approximate Optimization Algorithm) is to prepare low-energy states of a cost Hamiltonian ĤC, which is usually a Hamiltonian which is diagonal in the computational basis (pg. 5, right col., section A). According to the instant specification: "Note that the quantum system may comprise any suitable system that … can perform energy optimization (i.e. energy minimization) to find the minimum energy state … or implement quantum approximate optimization algorithms" [0139]), wherein said quantum system is used to accelerate training of the classic neural network (We see that backpropagating from the meta-loss node to the RNN's necessitates gradients to pass through the QNN (pg. 3, Fig. 2); This access to gradients during training is not strictly necessary, but can speed up training, pg. 4, left col., last para., last sentence to first sentence, right col.);

manipulating said quantum system into a state that represents the complete or partial state of the classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2; there, an LSTM recurrent neural network is used to recursively propose updates to the QNN parameters, pg. 3, right col., last para. to pg. 4, left col., first para.);

allowing said quantum system to transition to its optimum state and detecting minimum energy states at one or more observation points in said quantum system (What is needed instead is a loss function that encourages exploration of the landscape in order to find a better optimum. The loss function we chose for our experiments is the observed improvement at each time step, summed over the history of the optimization, pg. 5, left col., first para.) after said quantum system converges to a minimum total energy (In this case, the QNN's variational parameters are the control parameters of the dynamics, and by appropriately tuning these parameters via quantum classical optimization, one can cause the wavefunction to effectively descend the energetic landscape towards lower-energy regions, pg. 4, left col., third para.);

reading the output state of said quantum system (As mentioned previously, for our QNN's of interest, the cost function to be optimized is the expectation value of a certain Hamiltonian operator Ĥ, with respect to a parameterized class of states … output by a family of parametrized quantum circuits (pg. 3, left col., second to the last para.); For both testing and training, we squashed the readout of the cost function by a quantity which bounds the operator norm of the Hamiltonian, pg. 7, right col., last para.) to infer optimum neural network parameters for the classic neural network (Neural network training and inference was done in TensorFlow, pg. 7, right col., second to the last para.); and

determining an update to said neural network parameters in accordance with said one or more minimum energy states detected (The objective of quantum-classical meta-learning is to train our RNN to learn an efficient parameter update scheme for a family of cost functions of interest (pg. 4, right col., first para.); In a sense, these QAOA-like QNN's are simply variational methods to descend the energy landscape, pg. 10, left col., last sentence to right col., first sentence.), thereby greatly reducing the time required to train the classic neural network fully or as part of a transfer learning methodology (This opens up the possibility of training on small, classically simulatable problem instances … thereby significantly reducing the number of required quantum-classical optimization iterations, abstract).

Regarding claim 8, Verdon teaches the method according to claim 1, and Verdon teaches wherein said one or more neural network parameters comprises weights and/or biases (In our case the output of the RNN optimizer after a fixed number of iterations is used to initialize the parameters of the QNN's near a typical optimal set of parameters, pg. 5, left col., last para.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Lee et al. (US20210011748, filed 07/08/2019).

Regarding claim 2, Verdon teaches the method according to claim 1. Lee teaches wherein said quantum system comprises an array of quantum dots (A quantum computing portion may include a quantum processor which may include a quantum computing system, for example but not limited to, … quantum dots [0022]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Lee for the benefit of simulating the ground state of molecules or other quantum systems of interest by combining quantum imaginary time evolution and a Restricted Boltzmann Machine (Lee [0026]), which is a type of neural network (Lee [0025]).

8. Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Kharkov et al. ("Revealing quantum chaos with machine learning," Physical Review B 101, 064406 (2020), published 5 February 2020).

Regarding claim 4, Verdon teaches the method according to claim 1. Verdon does not explicitly teach further comprising utilizing a first helper neural network to reduce the number of signals input to said quantum system before quantum operations are performed. Kharkov teaches further comprising utilizing a first helper neural network (encoder neural network, Fig. 9) to reduce the number of signals input to said quantum system before quantum operations are performed (To reduce the size of the images, we perform a coarse graining (downsampling) to images with dimensions 36 × 36, pg. 064406-5, right col., last para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Kharkov for the benefit of studying quantum systems, such as quantum billiards, which have been realized in various experimental setups including graphene quantum dots (pg. 064406-2, left col., second to the last para.), using machine-learning-based analysis of experimental data (Kharkov, introduction).

Regarding claim 5, Verdon teaches the method according to claim 1. Verdon does not explicitly teach further comprising expanding detected results output from said quantum system into a larger set of said neural network parameter updates utilizing a second helper neural network. Kharkov teaches further comprising expanding detected results output from said quantum system (In this paper, we realize machine-learning algorithms to perform a classification between regular and chaotic states in single-particle and many-body systems. The input data contains a probability density function (PDF) representing configurations of excited states and the output is provided by two neurons, which distinguish between integrable and chaotic classes, see Fig. 1, pg. 064406-1, right col., first para.) into a larger set of said neural network parameter updates utilizing a second helper neural network (decoder neural network, Fig. 9; Using a pretrained VAE, we generate a set of points in the latent space corresponding to the scarred chaotic wave functions, see Fig. 3(c). The anomalous cluster representing scarred wave functions falls outside of the chaotic cluster and has a large overlap with a regular cluster, pg. 064406-4, left col., first para.). The same motivation to combine as for dependent claim 4 applies here.

9. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Kauffman et al. (US20190302107).

Regarding claim 6, Verdon teaches the method according to claim 1. Verdon does not explicitly teach wherein only a certain subset of energy levels are observable in quantum detectors during said detecting. Kauffman teaches wherein only a certain subset of energy levels are observable (The energy eigenvalues of the system are random and follow a Poissonian distribution. The nearest neighbor level spacing distribution is exponential p(s)=exp(−s), where sn=(En+1−En)/Δ(En) is the level spacing measured in the units of local mean level spacing Δ(E) at energy E [0077]) in quantum detectors during said detecting (The detection element of this phase may include a beam splitter to couple out a small fraction of the optical beam and then measure the phase using a homodyne detector [0158]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Kauffman for the benefit of utilizing a plurality of quantum processors that are coupled by classical means to form large hybrid quantum-classical architectures that can solve problems approximately, involving more qubits than a single fully quantum processor (Kauffman [0153]).

10. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Liu et al. ("Repetitive readout enhanced by machine learning." Machine Learning: Science and Technology 1.1 (2020): 015003, 4 February 2020).

Regarding claim 7, Verdon teaches the method according to claim 1. Verdon does not explicitly teach wherein said detecting comprises achieving quantum readout by performing multiple quantum non-demolition read-outs in sequence to yield nearly complete information. Liu teaches wherein said detecting comprises achieving quantum readout by performing multiple quantum non-demolition read-outs in sequence to yield nearly complete information (One solution is to use the repetitive quantum-non-demolition readout technique, where the qubit is correlated with an ancilla, which is subsequently read out (abstract); The input data is the time trace of single photon detector clicks through the repetitive readout process (Figure 1(c)), pg. 3, last para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Liu for the benefit of repetitive readout enhanced by machine learning (Liu, abstract).

11. Claims 3, 9, 10, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter "Romero NPL").

Regarding claim 3, Verdon teaches the method according to claim 1, and Verdon already teaches a plurality of activation tensors (θt-2 → RNN, θt-1 → RNN, θt → RNN, Fig. 2, pg. 3; The Examiner notes θt-2, θt-1, θt are activation function output vectors or tensors, which reads on "activation function outputs vectors or tensors 106 from the features f1 through fn" in Applicant's Fig. 2 (instant specification [0142])). Verdon does not explicitly teach further comprising compressing said plurality of activation tensors using an energy based model. Romero NPL teaches further comprising compressing said plurality of activation tensors (The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input, abstract) using an energy based model (Table 1 shows the average error in the fidelities and the energies obtained after a cycle of compression and decompression through the optimal quantum autoencoder, pg. 7, second para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Romero NPL because circuit models are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.).
Regarding claim 9, Verdon teaches the method according to claim 1. Verdon does not explicitly teach wherein detecting comprises performing quantum tomography to measure higher energy state levels by shifting higher energy levels down to said minimum energy state. Romero NPL teaches wherein detecting comprises performing quantum tomography to measure higher energy state levels by shifting higher energy levels down to said minimum energy state (For instance, in VQE, a series of measurements corresponds to some electronic energy which is then minimized. (Alternatively, the electronic energy may be maximized, such as by minimizing the negative of the electronic energy.) In general, one could use fidelity or state tomography to measure the quality [0041]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Romero NPL because circuit models are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.).

Regarding claim 10, Verdon teaches a quantum optimizer apparatus for accelerating training of a classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2 (pg. 3, left col., last para.); Specifically, we train classical recurrent neural networks to find approximately optimal parameters within a small number of queries of the cost function for the Quantum Approximate Optimization Algorithm (QAOA) (abstract); The Examiner notes the Applicant's disclosure "implement quantum approximate optimization algorithms", instant specification [0039]), comprising:

a quantum system (QNN, Fig. 2, pg. 3) operative to accelerate training of the classic neural network (RNN, Fig. 2, pg. 3);

a classic processor (CPU, Fig. 1, pg. 2) coupled to said quantum system (QPU, Fig. 1, pg. 2) and operative to:

receive neural network activation function tensors corresponding to features across layers in the classic neural network (θt-2 → RNN, θt-1 → RNN, θt → RNN are received across layers in the RNN, Fig. 2, pg. 3; The Examiner notes θt-2, θt-1, θt are activation function output vectors or tensors, which reads on "activation function outputs vectors or tensors 106 from the features f1 through fn" in Applicant's Fig. 2 (instant specification [0142]));

map activation energy represented by said energy based model to quantum states in said quantum system (θt-1 → QNN, θt → QNN, Fig. 2, pg. 3; The goal of the QAOA (Quantum Approximate Optimization Algorithm) is to prepare low-energy states of a cost Hamiltonian ĤC, which is usually a Hamiltonian which is diagonal in the computational basis (pg. 5, right col., section A). According to the instant specification: "Note that the quantum system may comprise any suitable system that … can perform energy optimization (i.e. energy minimization) to find the minimum energy state … or implement quantum approximate optimization algorithms" [0139]);

manipulate said quantum system into a state that represents the complete or partial state of the classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2; there, an LSTM recurrent neural network is used to recursively propose updates to the QNN parameters, pg. 3, right col., last para. to pg. 4, left col., first para.);

allow said quantum system to transition to its optimum state and detect an energy state at one or more observation ports in said quantum system (What is needed instead is a loss function that encourages exploration of the landscape in order to find a better optimum. The loss function we chose for our experiments is the observed improvement at each time step, summed over the history of the optimization, pg. 5, left col., first para.) after said quantum system collapses to a minimum total energy (In this case, the QNN's variational parameters are the control parameters of the dynamics, and by appropriately tuning these parameters via quantum classical optimization, one can cause the wavefunction to effectively descend the energetic landscape towards lower-energy regions, pg. 4, left col., third para.);

read the output state of said quantum system (As mentioned previously, for our QNN's of interest, the cost function to be optimized is the expectation value of a certain Hamiltonian operator Ĥ, with respect to a parameterized class of states … output by a family of parametrized quantum circuits (pg. 3, left col., second to the last para.); For both testing and training, we squashed the readout of the cost function by a quantity which bounds the operator norm of the Hamiltonian, pg. 7, right col., last para.) to infer optimum neural network parameters for the classic neural network (Neural network training and inference was done in TensorFlow, pg. 7, right col., second to the last para.); and

determine updates to one or more neural network parameters in accordance with one or more energy states detected (The objective of quantum-classical meta-learning is to train our RNN to learn an efficient parameter update scheme for a family of cost functions of interest (pg. 4, right col., first para.); In a sense, these QAOA-like QNN's are simply variational methods to descend the energy landscape, pg. 10, left col., last sentence to right col., first sentence.), thereby greatly reducing the time required to train the classic neural network fully or as part of a transfer learning methodology (This opens up the possibility of training on small, classically simulatable problem instances … thereby significantly reducing the number of required quantum-classical optimization iterations, abstract).
Verdon does not explicitly teach compress said activation function tensor utilizing an energy based model. Romero NPL teaches compress said activation function tensor (The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input, abstract) utilizing an energy based model (Table 1 shows the average error in the fidelities and the energies obtained after a cycle of compression and decompression through the optimal quantum autoencoder, pg. 7, second para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Romero NPL because circuit models are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.).

Regarding claim 13, Verdon teaches the apparatus according to claim 10. Verdon does not explicitly teach wherein said activation function tensor is compressed utilizing a helper neural network. Romero NPL teaches wherein said activation function tensor is compressed (The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input, abstract) utilizing a helper neural network (Table 1 shows the average error in the fidelities and the energies obtained after a cycle of compression and decompression through the optimal quantum autoencoder, pg. 7, second para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Romero NPL because circuit models are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.).

12. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter "Romero NPL") and further in view of Lee et al. (US20210011748, filed 07/08/2019).

Regarding claim 11, Modified Verdon teaches the apparatus according to claim 10. Lee teaches wherein said quantum system comprises a plurality of quantum dots (A quantum computing portion may include a quantum processor which may include a quantum computing system, for example but not limited to, … quantum dots [0022]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Lee for the benefit of simulating the ground state of molecules or other quantum systems of interest by combining quantum imaginary time evolution and a Restricted Boltzmann Machine (Lee [0026]), which is a type of neural network (Lee [0025]).

13. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter "Romero NPL") and further in view of Carolan et al. (US20200372334, filed 03/23/2020).

Regarding claim 12, Verdon teaches the apparatus according to claim 10. Verdon does not explicitly teach wherein said quantum system comprises a quantum dot array including said plurality of quantum dots organized in a plurality of rows with at least one observation port on either end of each row. Carolan teaches wherein said quantum system comprises a quantum dot array including said plurality of quantum dots organized in a plurality of rows with at least one observation port on either end of each row (The arrays of single-photon nonlinearities can include arrays of … quantum dots [0010]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Carolan for the benefit of a QONN (Quantum Optical Neural Network) able to perform a range of quantum information processing tasks, including newly developed protocols for quantum optical state compression (Carolan, abstract).

14. Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter "Romero NPL") and further in view of Lampert et al. (US20190392352).

Regarding claim 14, Verdon teaches the apparatus according to claim 10. Verdon does not explicitly teach wherein a different imposer frequency is assigned to each activation function, and wherein imposer signal durations correspond to activation levels.
Lampert teaches wherein a different imposer frequency is assigned to each activation function (and thereby control the formation of quantum dots 142 under each of the gates 106 and 108. Additionally, the relative potential energy profiles under different ones of the gates 106 and 108 allow the quantum dot qubit device 100 to tune the potential interaction between quantum dots 142 under adjacent gates [0045]), and wherein imposer signal durations correspond to activation levels (For these reasons, typical frequencies of qubits are in 1-10 GHz, e.g. in 4-10 GHz [0028]; quantum dots 142 [0044]; quantum dots 142 in the fin 104-1 into electrical signals [0050]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Lampert for the benefit of control logic configured to implement machine learning and other predictive methodologies to gradually adapt the signals applied to quantum dot qubit devices (Lampert [0018]) Regarding claim 15, Modified Verdon teaches the apparatus according to claim 14, Lampert teaches wherein a pulse duration of an imposer signal (During operation of the quantum dot qubit device 100, voltages may be applied to the gates 106/108 to adjust the potential energy [0042]. The Examiner notes the gates are also known as imposers) determines a probability ratio between an original energy level and a destination energy level based on a Rabi oscillation process (the signals may be fine-tuned to achieve a higher probability of the desired qubit(s) in the quantum dot qubit device being eventually set to the desired state [0018]; To determine the phase of the qubit after driving, the readout of a driven Rabi oscillation may be fed through a fast Fourier transform (FFT) to determine the Rabi reference frequency upon initialization [0096]). The same motivation to combine dependent claim 14 applies here. 
Regarding claim 16, Modified Verdon teaches the apparatus according to claim 14. Lampert teaches wherein said classic processor (As shown in FIG. 15, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306 [0084]) is further operative to map quantum states back to known imposer values that yield minimum energy values (a process of programming the qubits that includes applying one or more signals to various gates of the quantum dot qubit devices to set different qubits to desired initial quantum states [0017]; During operation of the quantum dot qubit device 100, voltages may be applied to the gates 106/108 to adjust the potential energy in the quantum well layer [0042]). The same motivation to combine as for dependent claim 14 applies here. 15. Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Lee et al. (US20210011748, filed 07/08/2019) and further in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter “Romero NPL”). Regarding claim 17, Verdon teaches a method of quantum enhanced accelerated training of a classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2 (pg.
3, right col., last para.); Specifically, we train classical recurrent neural networks to find approximately optimal parameters within a small number of queries of the cost function for the Quantum Approximate Optimization Algorithm (QAOA) (abstract); The Examiner notes the Applicant’s disclosure “implement quantum approximate optimization algorithms”, instant specification [0039]), said method comprising: receiving a plurality of activation function tensors corresponding to features and layers in the classic neural network (θt-2 → RNN, θt-1 → RNN, θt → RNN are received across layers in the RNN, Fig. 2, pg. 3; The Examiner notes θt-2, θt-1, θt are activation function outputs vectors or tensors which reads on “activation function outputs vectors or tensors 106 from the features f1 through fn” in Applicant’s Fig. 2 (instant specification [0142])); mapping said reduced number of activation function tensors signals to energy levels representing a quantum state (The meta-learning neural network architecture used in this paper is depicted in Figure 2, there, an LSTM recurrent neural network is used to recursively propose updates to the QNN parameters, pg. 3, right col., last para. to pg. 4, left col., first para.) wherein said quantum dot array is used to accelerate training of the classic neural network (We see that backpropagating from the meta-loss node to the RNN’s necessitates gradients to pass through the QNN (pg. 3, Fig. 2); This access to gradients during training is not strictly necessary, but can speed up training, pg. 4, left col., last para., last sentence to first sentence, right col.); manipulating said quantum dot array into a state that represents the complete or partial state of the classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2, there, an LSTM recurrent neural network is used to recursively propose updates to the QNN parameters, pg. 3, right col., last para. to pg.
4, left col., first para.); allowing said quantum dot array to transition to its optimum state and detecting minimum energy states at one or more observation points in said quantum dot array (What is needed instead is a loss function that encourages exploration of the landscape in order to find a better optimum. The loss function we chose for our experiments is the observed improvement at each time step, summed over the history of the optimization, pg. 5, left col., first para.) once said quantum dot array converges to a minimum total energy (In this case, the QNN’s variational parameters are the control parameters of the dynamics, and by appropriately tuning these parameters via quantum classical optimization, one can cause the wavefunction to effectively descend the energetic landscape towards lower-energy regions, pg. 4, left col., third para.); reading the output state of said quantum dot array (As mentioned previously, for our QNN’s of interest, the cost function to be optimized is the expectation value of a certain Hamiltonian operator Ĥ, with respect to a parameterized class of states … output by a family of parametrized quantum circuits (pg. 3, left col., second to the last para.); For both testing and training, we squashed the read out of the cost function by a quantity which bounds the operator norm of the Hamiltonian, pg. 7, right col., last para.) to infer optimum neural network parameters for the classic neural network (Neural network training and inference was done in TensorFlow, pg. 7, right col., second to the last para.); and determining an update to neural network parameters in accordance with said one or more minimum energy states detected (The objective of quantum-classical meta-learning is to train our RNN to learn an efficient parameter update scheme for a family of cost functions of interest (pg. 4, right col., first para.); In a sense, these QAOA-like QNN’s are simply variational methods to descend the energy landscape, pg.
10, left col., last sentence to right col., first sentence.), thereby greatly reducing the time required to train the classic neural network fully or as part of a transfer learning methodology (This opens up the possibility of training on small, classically simulatable problem instances … thereby significantly reducing the number of required quantum-classical optimization iterations, abstract). Verdon does not explicitly teach compressing said plurality of activation function tensors of the neural network utilizing an energy based model to reduce the number of activation function tensors; representing a quantum state in a quantum dot array incorporating a plurality of quantum dots. Lee teaches representing a quantum state in a quantum dot array incorporating a plurality of quantum dots (A quantum computing portion may include a quantum processor which may include a quantum computing system, for example but not limited to, … quantum dots [0022]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Lee for the benefit of a device comprising a quantum computing portion and a classical computing portion in communication with the quantum computing portion [0005], and simulating the ground state of molecules or other quantum systems of interest by combining quantum imaginary time evolution and a Restricted Boltzmann Machine [0026], which is a type of neural network (Lee [0025]). Modified Verdon does not explicitly teach compressing said plurality of activation function tensors of the neural network utilizing an energy based model to reduce the number of activation function tensors. Romero NPL teaches compressing said plurality of activation function tensors of the neural network (The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input,
abstract) utilizing an energy based model to reduce the number of activation function tensors (Table 1 shows the average error in the fidelities and the energies obtained after a cycle of compression and decompression through the optimal quantum autoencoder, pg. 7, second para.); It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Romero NPL for the benefit of circuit models that are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.). Regarding claim 18, Modified Verdon teaches the method according to claim 17. Modified Verdon does not explicitly teach wherein said compressing comprises compressing said plurality of activation function tensors utilizing a helper neural network. Romero NPL teaches wherein said compressing comprises compressing said plurality of activation function tensors (The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input, abstract) utilizing a helper neural network (Table 1 shows the average error in the fidelities and the energies obtained after a cycle of compression and decompression through the optimal quantum autoencoder, pg. 7, second para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Romero NPL for the benefit of circuit models that are able to achieve high fidelities for the encoding, producing decoded wavefunctions with energies that are close to the original values within chemical accuracy (Romero NPL, pg. 7, second para.). 16.
Claims 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Lee et al. (US20210011748, filed 07/08/2019), in view of Romero et al. ("Quantum autoencoders for efficient compression of quantum data." Quantum Science and Technology 2.4 (2017): 045001, hereinafter “Romero NPL”), and further in view of Lampert et al. (US20190392352). Regarding claim 19, Modified Verdon teaches the method according to claim 17. Modified Verdon does not explicitly teach wherein each activation function is assigned an imposer signal frequency applied to one or more quantum dots, and wherein said imposer signal frequency determines an energy level of one or more quantum dots in said quantum system. Lampert teaches wherein each activation function is assigned an imposer signal frequency applied to one or more quantum dots (thereby control the formation of quantum dots 142 under each of the gates 106 and 108. Additionally, the relative potential energy profiles under different ones of the gates 106 and 108 allow the quantum dot qubit device 100 to tune the potential interaction between quantum dots 142 under adjacent gates [0045]), and wherein said imposer signal frequency determines an energy level of one or more quantum dots in said quantum system (During operation of the quantum dot qubit device 100, voltages may be applied to the gates 106/108 to adjust the potential energy [0042]; For these reasons, typical frequencies of qubits are in 1-10 GHz, e.g. in 4-10 GHz [0028]; quantum dots 142 [0044]; quantum dots 142 in the fin 104-1 into electrical signals [0050]. The Examiner notes the gates are also known as imposers).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Lampert for the benefit of control logic configured to implement machine learning and other predictive methodologies to gradually adapt the signals applied to quantum dot qubit devices (Lampert [0018]). Regarding claim 20, Modified Verdon teaches the method according to claim 19. Lampert teaches wherein a duration of each frequency applied to an imposer is roughly proportional to an activation level (For these reasons, typical frequencies of qubits are in 1-10 GHz, e.g. in 4-10 GHz [0028]; quantum dots 142 [0044]; quantum dots 142 in the fin 104-1 into electrical signals [0050]). The same motivation to combine as for dependent claim 19 applies here. Regarding claim 21, Modified Verdon teaches the method according to claim 19. Lampert teaches wherein a pulse duration of an imposer signal (During operation of the quantum dot qubit device 100, voltages may be applied to the gates 106/108 to adjust the potential energy [0042]. The Examiner notes the gates are also known as imposers) determines a probability ratio between an original energy level and a destination energy level based on a Rabi oscillation process (the signals may be fine-tuned to achieve a higher probability of the desired qubit(s) in the quantum dot qubit device being eventually set to the desired state [0018]; To determine the phase of the qubit after driving, the readout of a driven Rabi oscillation may be fed through a fast Fourier transform (FFT) to determine the Rabi reference frequency upon initialization [0096]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Lampert for the benefit of control logic configured to implement machine learning and other predictive methodologies to gradually adapt the signals applied to quantum dot qubit devices (Lampert [0018]). Regarding claim 22, Modified Verdon teaches the method according to claim 19. Lampert teaches further comprising mapping quantum states back to known imposer values that yield minimum energy values (a process of programming the qubits that includes applying one or more signals to various gates of the quantum dot qubit devices to set different qubits to desired initial quantum states [0017]; During operation of the quantum dot qubit device 100, voltages may be applied to the gates 106/108 to adjust the potential energy in the quantum well layer [0042]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Lampert for the benefit of control logic configured to implement machine learning and other predictive methodologies to gradually adapt the signals applied to quantum dot qubit devices (Lampert [0018]). 17. Claims 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Verdon et al. ("Learning to learn with quantum neural networks via classical neural networks." arXiv preprint arXiv:1907.05415 (2019)) in view of Romero et al. (US20200005186, filed 07/02/2019) and further in view of Sete et al. (US20190007051). Regarding claim 23, Verdon teaches a quantum optimizer apparatus for accelerating training of a classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2 (pg.
3, left col., last para.); Specifically, we train classical recurrent neural networks to find approximately optimal parameters within a small number of queries of the cost function for the Quantum Approximate Optimization Algorithm (QAOA) (abstract); The Examiner notes the Applicant’s disclosure “implement quantum approximate optimization algorithms”, instant specification [0039]), comprising: a quantum system (QNN, Fig. 2, pg. 3) operative to accelerate training of the classic neural network (RNN, Fig. 2, pg. 3); a first neural network coupled to said quantum system and operative to (Figure 2. Unrolled temporal quantum-classical computational graph for the meta-learning optimization of the recurrent neural network (RNN) optimizer and a quantum neural network (QNN), pg. 3): map the reduced number of activation and loss function outputs to unique quantum state energy levels in said quantum system (θt-1 → QNN, θt → QNN, Fig. 2, pg. 3; The goal of the QAOA (Quantum Approximate Optimization Algorithm) is to prepare low-energy states of a cost Hamiltonian ĤC, which is usually a Hamiltonian which is diagonal in the computational basis (pg. 5, right col., section A); This meta loss function L is a functional of the history of expectation value estimate samples y = {y_t}_{t=1}^T, and is thus indirectly dependent on the RNN parameters ϕ, Fig. 2, pg. 3; the Examiner notes according to the instant specification: “Note that the quantum system may comprise any suitable system that … can perform energy optimization (i.e. energy minimization) to find the minimum energy state … or implement quantum approximate optimization algorithms” [0139]; The Examiner notes θt-1, θt are activation function outputs vectors or tensors which reads on “activation function outputs vectors or tensors 106 from the features f1 through fn” in Applicant’s Fig.
2 (instant specification [0142])); a circuit (The quantum circuits used for training and testing the recurrent neural network were executed using the Cirq quantum circuit simulator [69] running on a classical computer, pg. 7, right col., third para.) operative to manipulate said quantum system into a state that represents the complete or partial state of the classic neural network (The meta-learning neural network architecture used in this paper is depicted in Figure 2, there, an LSTM recurrent neural network is used to recursively propose updates to the QNN parameters, pg. 3, right col., last para. to pg. 4, left col., first para.) and to apply said energy level mappings to said quantum system (By applying and variationally optimizing the QAOA, one obtains a wave function which, when measured in the computational basis, has a high probability of yielding a bitstring corresponding to a partition of large cut size, pg. 6, left col., last sentence to right col., first sentence); operative to read the output state of said quantum system (As mentioned previously, for our QNN’s of interest, the cost function to be optimized is the expectation value of a certain Hamiltonian operator Ĥ, with respect to a parameterized class of states … output by a family of parametrized quantum circuits (pg. 3, left col., second to the last para.); For both testing and training, we squashed the read out of the cost function by a quantity which bounds the operator norm of the Hamiltonian, pg. 7, right col., last para.) to infer optimum neural network parameters for the classic neural network (Neural network training and inference was done in TensorFlow, pg. 7, right col., second to the last para.) by detecting an energy state at one or more observation ports in said quantum system (What is needed instead is a loss function that encourages exploration of the landscape in order to find a better optimum.
The loss function we chose for our experiments is the observed improvement at each time step, summed over the history of the optimization, pg. 5, left col., first para.) after said quantum system evolves to a minimum total energy (In this case, the QNN’s variational parameters are the control parameters of the dynamics, and by appropriately tuning these parameters via quantum classical optimization, one can cause the wavefunction to effectively descend the energetic landscape towards lower-energy regions, pg. 4, left col., third para.); and operative to generate updates to neural network parameters to said classic neural network in accordance with a plurality of detected energy states (The objective of quantum-classical meta-learning is to train our RNN to learn an efficient parameter update scheme for a family of cost functions of interest (pg. 4, right col., first para.); In a sense, these QAOA-like QNN’s are simply variational methods to descend the energy landscape, pg. 10, left col., last sentence to right col., first sentence.) thereby greatly reducing the time required to train said classic neural network fully or as part of a transfer learning methodology (This opens up the possibility of training on small, classically simulatable problem instances … thereby significantly reducing the number of required quantum-classical optimization iterations, abstract). 
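The pipeline the rejection attributes to Verdon in combination with Romero (compress the activation/loss outputs, map the compressed values onto energy levels, let the system evolve to its minimum total energy, then read the result back out as a parameter update) can be illustrated with a purely classical sketch. Everything below is hypothetical: the random projection stands in for the trained autoencoder/helper network, and exhaustive search over a 4-spin Ising energy stands in for the quantum system's relaxation to its ground state.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# 1) Compress the activation-function outputs. A trained autoencoder would
#    go here; a fixed random projection is used purely as a stand-in.
activations = rng.normal(size=16)            # hypothetical activation tensor
encoder = rng.normal(size=(4, 16)) / 4.0     # hypothetical encoder weights
compressed = encoder @ activations           # 4 values instead of 16

# 2) Map the compressed values onto an energy-based model: treat them as
#    local fields h of a small Ising energy E(s) = -h.s - s.J.s.
h = compressed
J = np.triu(0.1 * rng.normal(size=(4, 4)), 1)

def energy(spins):
    s = np.asarray(spins, dtype=float)
    return float(-h @ s - s @ J @ s)

# 3) "Let the quantum system settle": find the minimum-energy configuration
#    (here by exhaustive search; the hardware would reach it by relaxation).
ground_state = min(product([-1.0, 1.0], repeat=4), key=energy)

# 4) Read the output state back out and lift it to a parameter-update
#    direction in the original activation space.
update = encoder.T @ np.asarray(ground_state)
```

The sketch only shows the data flow the claims recite; it makes no attempt to model quantum dynamics, imposer signals, or the actual autoencoder training of the cited references.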
Verdon does not explicitly teach compress a plurality of activation and loss function outputs from a classic neural network utilizing an energy based model to generate a reduced number of activation and loss function outputs; wherein said first neural network is operative to select an optimum choice of frequencies and pulse durations to apply to said quantum system; a plurality of detectors; and a second neural network coupled to said quantum system. Romero teaches compress a plurality of activation and loss function outputs from a classic neural network utilizing an energy based model to generate a reduced number of activation and loss function outputs (an encoder of the quantum autoencoder compresses each of a plurality of training states into a corresponding compressed state [0043]; Said determining, for each of the compressed states, the generator parameter set may include optimizing the generator parameter set to minimize a cost function. The cost function may depend on fidelity between an output of the compressed-state generator and said each of the compressed states [0047]); loss function outputs to unique quantum state energy levels in said quantum system (The cost function may depend on fidelity between an output of the compressed-state generator and said each of the compressed states [0047]); to apply said energy level mappings to said quantum system (VQE uses expectation values that correspond to a ground state energy as a quality metric. [0032]; (autoencoder training) may include the following. The quantum autoencoder (QAE) circuit may be trained in any of a variety of way ...
For example, an appropriate circuit may be chosen that is conducive to running on the architecture of the target hardware's quantum processor [0033]); a plurality of detectors (the measurement unit 110 may be a laser and either a CCD or a photodetector (e.g., a photomultiplier tube) [0084]) after said quantum system evolves to a minimum total energy (If the rate of change of the system Hamiltonian is slow enough, the system stays close to the ground state of the instantaneous Hamiltonian. If the rate of change of the system Hamiltonian is accelerated, the system may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem Hamiltonian [0070]. The Examiner notes the ground state energy is the minimum energy); a second neural network coupled to said quantum system (the decoder is synthesized on the quantum computer (e.g., quantum computer component 102 of FIGS. 1 and 3) [0110]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Verdon to incorporate the teachings of Romero for the benefit of using compressed unsupervised state preparation (CUSP) to reduce circuit depth for quantum-state generator circuits which advantageously speeds up initial state generation, in turn providing at least three key benefits for quantum computers (Romero [0006]). Modified Verdon does not explicitly teach wherein said first neural network is operative to select an optimum choice of frequencies and pulse durations to apply to said quantum system. Sete teaches wherein said first neural network is operative to select an optimum choice of frequencies and pulse durations to apply to said quantum system (The modulation frequency ωm and other parameters of the control signal can be selected to achieve a specified quantum logic gate in some cases … The duration of the interaction produced by the modulation frequency ωm may also
be selected to achieve a specified quantum logic gate in some cases [0080]); It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Verdon to incorporate the teachings of Sete for the benefit of achieving a scalable quantum computing system (Sete [0038]). Regarding claim 24, Modified Verdon teaches the apparatus according to claim 23. Romero teaches wherein said first neural network and/or said second neural network learn one or more characteristics of said quantum system (Second quantum circuit 402 is used to train encoder Y using classical machine-learning techniques. More specifically, this training identifies a decoder parameter vector y⃗ [0105]). The same motivation to combine as for independent claim 23 applies here. Regarding claim 25, Modified Verdon teaches the apparatus according to claim 23. Romero teaches wherein said quantum system comprises a plurality of quantum dots (Each of the qubits may be one of a superconducting qubit, a trapped-ion qubit, and a quantum dot qubit [0045]; Examples of such physical media include superconducting material, trapped ions, photons, optical cavities, individual electrons trapped within quantum dots [0062]). The same motivation to combine as for independent claim 23 applies here. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T.
Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.G./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

May 01, 2021
Application Filed
Sep 20, 2024
Non-Final Rejection — §102, §103, §112
Nov 12, 2024
Applicant Interview (Telephonic)
Nov 12, 2024
Examiner Interview Summary
Feb 18, 2025
Response Filed
Jul 22, 2025
Final Rejection — §102, §103, §112
Dec 05, 2025
Response after Non-Final Action
Jan 29, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Mar 11, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586
SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12530583
VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
2y 5m to grant Granted Jan 20, 2026
Patent 12511528
NEURAL NETWORK METHOD AND APPARATUS
2y 5m to grant Granted Dec 30, 2025
Patent 12367381
CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
2y 5m to grant Granted Jul 22, 2025
Patent 12314847
TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
2y 5m to grant Granted May 27, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
78%
With Interview (+33.4%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
