Prosecution Insights
Last updated: April 19, 2026
Application No. 18/237,323

QUANTUM ERROR CORRECTION USING NEURAL NETWORKS

Status: Non-Final OA (§102)
Filed: Aug 23, 2023
Examiner: LIN, KATHERINE Y
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 3 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91%, above average (320 granted / 351 resolved; +36.2% vs TC avg)
Interview Lift: +7.1% (moderate), measured on resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 31 applications currently pending
Career History: 382 total applications across all art units

Statute-Specific Performance

§101: 23.4% (-16.6% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 351 resolved cases.
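As a quick consistency check (an illustration added here, not part of the report), the examiner's per-statute rate minus its plotted delta should recover the Tech Center baseline. A minimal Python sketch, using only the numbers shown above:

```python
# Per-statute rates and deltas transcribed from the chart above.
rates  = {"101": 23.4, "103": 36.8, "102": 22.1, "112": 6.3}
deltas = {"101": -16.6, "103": -3.2, "102": -17.9, "112": -33.7}

# TC average = examiner rate - (delta vs TC avg)
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute backs out the same 40.0% baseline estimate
```

All four statutes imply the same 40.0% baseline, consistent with the chart drawing a single Tech Center average estimate.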

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-12, 22-24, and 26-27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Choukroun et al. (Deep Quantum Error Correction). Choukroun discloses:

1. A method for detecting errors in a computation performed by a quantum computer comprising a plurality of data qubits, the method comprising:

obtaining error correction data for each of a plurality of time steps during the computation, the error correction data for each time step comprising a respective feature for each of a plurality of stabilizer qubits that each correspond to a respective subset of the data qubits for the time step; (p 5, 4.3. Noisy Syndrome Measurements: each syndrome measurement is repeated T times. This gives the decoder input an additional time dimension; p 3, 3.2. Quantum Error Correction Code: stabilizer measurements are performed via additional qubits (ancilla bits). The result of all of the stabilizer measurements on a given state is called the syndrome; figs 3, 5)

initializing a decoder state that represents information about the plurality of stabilizer qubits, wherein the decoder state comprises a vector representation for each of the plurality of stabilizer qubits; and (p 3: QECC teaches stabilizer qubits; fig 1(b); p 4: "decoder that is parameterized by a vector of weights"; p 5: 4.4. Architecture and Training: decoder; fig 2: Quantum Error Correction Code Transformer (QECCT) architecture)

for each of a plurality of updating time steps, wherein each updating time step corresponds to one or more of the time steps:

generating an intermediate representation comprising, for each of the stabilizer qubits, one or more embeddings representing the respective feature for the stabilizer qubit at the one or more time steps corresponding to the updating time step; and (p 5: 4.4. Architecture and Training: The initial encoding is defined as a d-dimensional one-hot encoding of the n + ns input elements, where n is the number of physical qubits and ns the length of the syndrome. The network gw is defined as a shallow network with two fully connected layers of hidden dimensions equal to 5ns and with a GELU non-linearity; fig 2)

processing a time step input for the updating time step through a Transformer neural network to update the decoder state for the updating time step, wherein the time step input comprises (i) the intermediate representation for the updating time step that comprises the one or more embeddings representing the respective feature for each of the stabilizer qubits and (ii) the decoder state for a preceding updating time step that comprises the vector representations for each of the stabilizer qubits after being updated at the preceding updating time step; and (p 3: QECC teaches stabilizer qubits; p 4: "decoder that is parameterized by a vector of weights"; p 5, 4.3: In the presence of measurement errors, each syndrome measurement is repeated T times. This gives the decoder input an additional time dimension; p 5: 4.4. Architecture and Training: embedding; fig 2: decoding, QECCT)

generating a prediction of whether an error occurred in the computation, comprising:

generating, from the decoder state for the last updating time step of the plurality of updating time steps, a respective input corresponding to each of one or more prediction neural networks; and (p 5, 4.3. Noisy Syndrome Measurements: each syndrome measurement is repeated T times. This gives the decoder input an additional time dimension; fig 2: decoding, output, equation 11, Quantum Error Correction Code Transformer architecture; p 5: 4.4. Architecture and Training)

processing each respective input using the corresponding prediction neural network to generate the prediction. (p 5: 4.4. Architecture and Training: The output is obtained via two fully connected layers. The first layer reduces the element-wise embedding to a one-dimensional n + ns vector and the second to an n-dimensional vector representing the soft decoded noise, trained)

2. The method of claim 1, wherein the decoder state represents information about the plurality of data qubits. (p 5: 4.4. Architecture and Training; fig 2)

3. The method of claim 1, wherein the respective feature for each of the stabilizer qubits is an analog measurement of the corresponding subset of data qubits at the time step. (p 3: ancilla bits)

5. The method of claim 1, wherein the respective feature for each of the stabilizer qubits comprises posterior probabilities of a stabilizer measurement given analog measurements of the corresponding subset of data qubits at the time step. (p 3: ancilla bits)

6. The method of claim 1, wherein the respective feature for each of the stabilizer qubits comprises a time series of analog measurements of the corresponding subset of data qubits for a period of time ending at the time step. (p 3: ancilla bits)

7. The method of claim 1, wherein the error correction data for each time step comprises stabilizer events for one or more of the stabilizer qubits at the time step. (p 3, 3.2. Quantum Error Correction Code: syndrome)

8. The method of claim 1, wherein generating an intermediate representation comprising, for each of the stabilizer qubits, one or more embeddings representing the respective feature for the stabilizer qubit at the one or more time steps corresponding to the updating time step comprises: generating respective embeddings for the stabilizer qubits for the updating time step; obtaining a respective positional embedding for each stabilizer qubit characterizing a position of the stabilizer qubit within the quantum computer; and processing the respective embeddings for the stabilizer qubits and the respective positional embeddings for the stabilizer qubits using an encoding neural network to generate the intermediate representation for the updating time step. (p 4: 3.3. Error Correction Code Transformer: positional embedding; p 5: 4.4. Architecture and Training: The network gw is defined as a shallow network with two fully connected layers of hidden dimensions equal to 5ns and with a GELU non-linearity)

9. The method of claim 1, wherein generating an intermediate representation comprising, for each of the stabilizer qubits, one or more embeddings representing the respective feature for the stabilizer qubit at the one or more time steps corresponding to the updating time step comprises: generating respective embeddings for the stabilizer qubits for each time step corresponding to the updating time step; obtaining a respective positional embedding for each stabilizer qubit characterizing a position of the stabilizer qubit within the quantum computer; and processing the respective embeddings for the stabilizer qubits and the respective positional embeddings for the stabilizer qubits using an encoding neural network to generate the intermediate representation for the updating time step. (p 4: 3.3. Error Correction Code Transformer: positional embedding; p 5: 4.4. Architecture and Training: The network gw is defined as a shallow network with two fully connected layers of hidden dimensions equal to 5ns and with a GELU non-linearity)

10. The method of claim 1, wherein the one or more prediction neural networks comprise only one prediction neural network, and wherein processing each respective input using the corresponding prediction neural network to generate the prediction comprises processing the respective input using the prediction neural network to generate a score that indicates whether an error occurred. (p 5: 4.4. Architecture and Training: The output is obtained via two fully connected layers. The first layer reduces the element-wise embedding to a one-dimensional n + ns vector and the second to an n-dimensional vector representing the soft decoded noise, trained)

11. The method of claim 1, wherein one or more initial updating time steps correspond to a respective plurality of time steps and a last updating time step corresponds only to a last time step of the plurality of time steps, wherein the one or more prediction neural networks comprise only one prediction neural network, and wherein processing each respective input using the corresponding prediction neural network to generate the prediction comprises processing the respective input using the prediction neural network to generate a score that indicates whether an error occurred. (p 5: 4.4. Architecture and Training: The output is obtained via two fully connected layers. The first layer reduces the element-wise embedding to a one-dimensional n + ns vector and the second to an n-dimensional vector representing the soft decoded noise, trained)

12. The method of claim 1, wherein the Transformer neural network comprises one or more Transformer layers, and wherein each Transformer layer comprises a self-attention layer and a feed-forward layer. (fig 2; p 5: 4.4. Architecture and Training: The decoder is defined as a concatenation of N decoding layers composed of self-attention and feed-forward layers)

22. The method of claim 1, wherein the prediction of whether an error occurred in the computation comprises a respective prediction for each of a plurality of logical observables. (p 4, 3.2. Quantum Error Correction Code: we are interested in the logical qubits, predicting the code up to the logical operators mapping L)

23. The method of claim 22, wherein the one or more prediction neural networks comprise a plurality of prediction neural networks, and wherein processing each respective input using the corresponding prediction neural network to generate the prediction comprises processing each respective input using the corresponding prediction neural network to generate a respective score that indicates whether an error for a corresponding logical observable of a plurality of logical observables occurred. (p 5: 4.4. Architecture and Training; fig 2)

24. The method of claim 22, wherein the one or more prediction neural networks comprise only one prediction neural network, wherein each of the respective inputs is a transposed version of another respective input, and wherein processing each respective input using the corresponding prediction neural network to generate the prediction comprises processing each respective input using the prediction neural network to generate a respective score that indicates whether an error for a corresponding logical observable for the respective input of a plurality of logical observables occurred. (p 5: 4.4. Architecture and Training; fig 2)

Claim 26 is rejected as being the system implemented by the method of claim 1, and is rejected on the same grounds. Claim 27 is rejected as being the media implemented by the method of claim 1, and is rejected on the same grounds.

Allowable Subject Matter

Claims 4, 13-21, and 25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Remarks

The double patenting rejection is withdrawn, since the claims are narrower than the claims in 18237204. Applicant's Remarks have been fully considered but they are not persuasive.
Regarding the prior art rejection under 35 U.S.C. 102, the Remarks state, "However, Choukroun makes no mention of 'generating, from the decoder state for the last updating time step of the plurality of updating time steps, a respective input corresponding to each of one or more prediction neural networks' because there is no 'decoder state for the last updating time step.' Thus, Choukroun describes 'pooling over the time dimension' to generate the resulting 'element-wise embedding' that is provided to a fully connected layer..."

However, the examiner respectfully disagrees. Choukroun discloses, on p 5, 4.3. Noisy Syndrome Measurements, that each syndrome measurement is repeated T times, which gives the decoder input an additional time dimension. Choukroun further discloses, in fig 2, the decoding process and the Quantum Error Correction Code Transformer architecture.

The Remarks further state, "Furthermore, the Office Action appears to allege that a 'vector representing the soft decoded noise' corresponds with 'decoder state' as recited in the claim (see Office Action at pp. 4-5). The 'vector representing the soft decoded noise' cannot be mapped to both the 'decoder state' and 'a prediction of whether an error occurred in the computation.' Such a mapping contradicts 'generating, from the decoder state for the last updating time step of the plurality of updating time steps, a respective input corresponding to each of one or more prediction neural networks; and processing each respective input using the corresponding prediction neural network to generate the prediction,' as recited in amended claim 1."

The Office Action has been updated: "the decoder state comprises a vector representation for each of the plurality of stabilizer qubits" is now mapped to fig 1(b) and p 4: "decoder that is parameterized by a vector of weights…"

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE LIN, whose telephone number is (571) 431-0706.
The examiner can normally be reached Monday-Friday, 8 a.m. - 5 p.m. EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHERINE LIN/
Primary Examiner, Art Unit 2113
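To make the disputed claim language easier to follow, the loop structure recited in claim 1 can be sketched in code. This is a hedged illustration only: the dimensions, update rules, and function names below are hypothetical stand-ins, not the claimed implementation and not Choukroun's QECCT.

```python
import math

D = 8                 # embedding dimension (assumed for illustration)
NUM_STABILIZERS = 4   # number of stabilizer qubits (assumed)

def embed(feature, position):
    """One embedding per stabilizer: its measured feature plus a positional term."""
    return [feature + math.sin(position + i) for i in range(D)]

def transformer_update(state, intermediate):
    """Stand-in for the Transformer layers (self-attention + feed-forward):
    mixes the previous decoder state with the new intermediate representation."""
    return [[0.5 * s + 0.5 * x for s, x in zip(sv, xv)]
            for sv, xv in zip(state, intermediate)]

def predict(state):
    """Stand-in prediction network: one score from the final decoder state."""
    total = sum(sum(v) for v in state)
    return 1.0 / (1.0 + math.exp(-total))   # probability an error occurred

# Decoder state: one vector representation per stabilizer qubit.
state = [[0.0] * D for _ in range(NUM_STABILIZERS)]

# Error-correction data: one feature per stabilizer per time step (toy values).
syndrome = [[0.0, 1.0, 0.0, 1.0], [0.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 1.0]]

for features in syndrome:                          # each updating time step
    intermediate = [embed(f, q) for q, f in enumerate(features)]
    state = transformer_update(state, intermediate)  # uses the prior state

p_error = predict(state)   # prediction from the last updating time step's state
```

The dispute in the Remarks turns on the last two lines: whether the reference keeps a per-stabilizer decoder state that is carried into the final prediction, or pools over the time dimension before the output layers.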

Prosecution Timeline

Aug 23, 2023: Application Filed
Mar 08, 2025: Non-Final Rejection (§102)
Jun 04, 2025: Applicant Interview (Telephonic)
Jun 13, 2025: Examiner Interview Summary
Jul 03, 2025: Response Filed
Oct 28, 2025: Final Rejection (§102)
Jan 29, 2026: Request for Continued Examination
Feb 04, 2026: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596953: QUANTUM ERROR CORRECTION USING NEURAL NETWORKS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591476: EMPTY PAGE DETECTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585556: ACTIVE COMPONENT DRIVEN COMPUTATIONAL SERVER RELIABILITY AND FAILURE PREVENTION SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585530: SINGLE SIGNAL DEBUG PORT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585560: REFINING PARAMETER SETTINGS FOR COPY SERVICES (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91% (98% with interview, +7.1%)
Median Time to Grant: 2y 5m
PTA Risk: High

Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
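The headline figures compose as simple arithmetic, assuming the "with interview" number is the career allow rate plus the interview lift in percentage points (the tool may compute it differently; this is a plausible reading, not its documented formula):

```python
granted, resolved = 320, 351
career_allow = granted / resolved            # 320/351 = 0.9117...
assert round(career_allow, 2) == 0.91        # matches the 91% shown

interview_lift = 0.071                       # the +7.1% interview lift
with_interview = career_allow + interview_lift
assert round(with_interview, 2) == 0.98      # matches the 98% shown
```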
