Prosecution Insights
Last updated: April 19, 2026
Application No. 17/550,882

MACHINE LEARNING BASED STABILIZER FOR NUMERICAL METHODS

Final Rejection — §101, §102, §103
Filed
Dec 14, 2021
Examiner
KAWSAR, ABDULLAH AL
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Advanced Micro Devices, Inc.
OA Round
2 (Final)
79%
Grant Probability
Favorable
3-4
OA Rounds
4y 11m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
312 granted / 395 resolved
+24.0% vs TC avg
Strong +58% interview lift
Without
With
+58.0%
Interview Lift
resolved cases with interview
Typical timeline
4y 11m
Avg Prosecution
14 currently pending
Career history
409
Total Applications
across all art units
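The headline numbers in this panel follow directly from the career counts shown above; a quick sanity check in Python (variable names are ours, not the report's):

```python
# Career counts shown in the Examiner Intelligence panel
granted = 312      # applications granted
resolved = 395     # resolved cases (granted + abandoned)
total = 409        # total applications across all art units

allow_rate = granted / resolved   # career allowance rate
pending = total - resolved        # applications still open

print(f"allow rate: {allow_rate:.0%}")   # 79%, matching the panel
print(f"pending: {pending}")             # 14 currently pending
```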

Statute-Specific Performance

§101
16.3%
-23.7% vs TC avg
§103
43.5%
+3.5% vs TC avg
§102
12.4%
-27.6% vs TC avg
§112
23.1%
-16.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 395 resolved cases
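The "vs TC avg" deltas above are internally consistent: subtracting each delta from the examiner's per-statute rate recovers the same Tech Center baseline in every row. A small check (names and layout are ours):

```python
# (examiner allowance rate after this rejection type, delta vs TC average), in %
per_statute = {
    "101": (16.3, -23.7),
    "103": (43.5, +3.5),
    "102": (12.4, -27.6),
    "112": (23.1, -16.9),
}

for statute, (rate, delta) in per_statute.items():
    tc_avg = round(rate - delta, 1)   # implied Tech Center average
    print(f"§{statute}: implied TC avg = {tc_avg}%")   # 40.0% in every row
```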

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 10 are objected to because of the following informality: duplicate word error "that that". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 and 10-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: In the instant case, claims 1-9 are directed to a microprocessor and claims 10-18 are directed to a method. Therefore, claims 1-18 are directed to a process, machine, manufacture, or composition of matter.

Regarding claim 1:

Step 2A Prong 1: "determining based on a prediction" is an abstract idea – a mental process, i.e. an evaluation that can practically be performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2) III C). Claim 1 therefore recites an abstract idea.

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional elements: "prediction from a machine learning model and provided to an arithmetic logic unit in the microprocessor" (this adds the words "apply it" (or an equivalent) to the judicial exception, or merely applies a machine learning model as a tool to perform the abstract idea – see MPEP 2106.05(f)); "A microprocessor comprising logic configured to cause:" (mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)); "wherein the machine learning model is trained specific to a particular algorithmic computation and a particular datatype." (mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)). The additional elements, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions implemented to perform the abstract idea identified above.

Regarding claim 2:

Step 2A Prong 1: "using a low precision input value as input, performing one or more steps of the algorithmic computation to generate a low precision result value as output." is an abstract idea – a mathematical calculation (an algorithmic computation that generates a value, i.e. mathematical operation(s); see MPEP 2106.04(a)(2)).

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the microprocessor is further configured to cause:" (mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

Regarding claim 3:

Step 2A Prong 1: "wherein the compensation value indicates a difference between the low precision result value of the algorithmic computation and the high precision result value." is an abstract idea – a mathematical calculation (a subtraction between the high and low precision result values; see MPEP 2106.04(a)(2)). "predict a high precision result value of the algorithmic computation" is an abstract idea – a mental process, as performing predictions can be done in the human mind (e.g. evaluation).

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the machine learning model is trained to predict a high precision result value of the algorithmic computation;" (this adds the words "apply it" (or an equivalent) to the judicial exception, or merely uses the machine learning model as a tool to perform the abstract idea – see MPEP 2106.05(f)).

Regarding claim 4:

Step 2A Prong 1: "combining the compensation value with the low precision result value to compensate for roundoff error in the algorithmic computation." is an abstract idea – a mental process (an evaluation that can practically be performed in the human mind or by a human using pen and paper; see MPEP 2106.04(a)(2)).

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the microprocessor is further configured to cause:" (mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

Regarding claim 5:

Step 2A Prong 1: none.

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein training the machine learning model comprises training the machine learning model using a training dataset comprising: pairs of low precision values, and pairs of high precision values that correspond to the pairs of low precision values." (this adds the words "apply it" (or an equivalent) to the judicial exception, or merely applies the machine learning model as a tool to perform the abstract idea – see MPEP 2106.05(f)).

Regarding claim 6:

Step 2A Prong 1: none.

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the machine learning model comprises a neural network." (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself – see MPEP 2106.05(h)).

Regarding claim 7:

Step 2A Prong 1: none.

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the machine learning model includes one or more rectified linear unit (ReLU) activation functions that are associated with one or more nodes of the machine learning model." (merely specifies a particular technological environment, i.e. a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more – see MPEP 2106.05(h)).

Regarding claim 8:

Step 2A Prong 1: none.

Step 2A Prong 2 and 2B: This judicial exception is not integrated into a practical application. Additional element: "wherein the machine learning model includes multiple distinct activation functions that are associated with one or more nodes of the machine learning model." (merely specifies a particular technological environment, i.e. a field of use, and thus neither integrates the abstract idea into a practical application nor provides significantly more – see MPEP 2106.05(h)).

Regarding claims 10-17: See the rejections of claims 1-8, respectively; the same rationale applies to each.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4, 6, 10, 11, 13 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Swagath Venkataramani et al. (US 20200005125 A1, hereinafter "Swagath").

Regarding claim 1, Swagath teaches: "A microprocessor comprising logic configured to cause: determining, based on a prediction from a machine learning model and provided to an arithmetic logic unit in the microprocessor, a compensation value that that compensates for roundoff error in an algorithmic computation" (abstract; paras. 0014-0015, 0031-0033 and 0035: a compensated deep neural network (DNN) is provided that performs a dot product operation on quantized values and estimates/predicts a compensation of the quantized dot product to reduce calculation error. The dot product calculation and compensation are performed on a processing element including a MAC unit, and the output value along with the compensation instruction is provided to another processing element, which implies the compensation value estimated/predicted by the DNN is provided to an arithmetic logic unit of the microprocessor.)

Swagath further teaches: "wherein the machine learning model is trained specific to a particular algorithmic computation and a particular datatype." (Swagath teaches that DNNs operate in two phases, training and inference [para. 0002], and DNN re-training is suggested as a strategy to minimize accuracy loss due to quantization [para. 0014]; thus it is implied that the DNN model of the invention has been trained (for a model to be re-trained, it must first be trained). Regarding the datatype: "Some embodiments of the disclosure provide compensated-DNN, in which errors introduced by quantization are dynamically compensated during execution. Numbers in compensated-DNN are represented in Fixed Point with Error Compensation (FPEC) format. The bits in FPEC are split between computation bits and compensation bits." [para. 0015])

Regarding claim 2, "wherein the microprocessor is further configured to cause: using a low precision input value as input, performing one or more steps of the algorithmic computation to generate a low precision result value as output." (Each neuron in a DNN layer evaluates a multi-input, single-output function that computes the dot product of its inputs and weights [para. 0002]. "The computation bits use conventional floating-point notation (FxP) to represent the number at low-precision" [para. 0015]. The compensation unit of the processing element (FPEC-PE) evaluates the quantization error at the dot-product output using the compensation bits [para. 0034].)

Regarding claim 4, "combining the compensation value with the low precision result value to compensate for roundoff error in the algorithmic computation." ([0015]: "In some embodiments, a low-overhead sparse compensation scheme based on the compensation bits is used to estimate the error accrued during MAC operations, which is then added to the MAC output to minimize the impact of quantization." The compensation bits are used to approximate the error, which is then added back to the MAC output (the low precision result) to compensate for quantization (roundoff) error.)

Regarding claim 6, "wherein the machine learning model comprises a neural network." ([0015]: "Some embodiments of the disclosure provide compensated-DNN, in which errors introduced by quantization are dynamically compensated during execution.")

Regarding claim 10: This claim recites a method performing the steps described for claim 1 and is therefore rejected for the same reasons. The additional element of claim 10 is addressed below: "A method comprising:" ([0047]: the present application may be a system, a method, and/or a computer program product at any possible technical detail level of integration.)

Regarding claims 11, 13 and 15: rejected under the same rationale as claims 2, 4 and 6, respectively. In addition, these claims depend from claim 10, so the same rationale applies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Swagath in view of Van Baalen et al. (US 20200302298 A1, hereinafter "Van Baalen").

Regarding claim 3, Swagath teaches: "wherein the compensation value indicates a difference between the low precision result value of the algorithmic computation and the high precision result value" ([0034]: the compensation unit evaluates the quantization error (i.e. the difference between the low and high precision result values) at the dot-product output using the compensation bits. "This involves shifting the X vector component using EMB bits of xi (at shifter 421) and the Y vector component using EMB bits of yi (at shifter 422) and appropriately adding/subtracting (at adder 424) them from the compensation sum (at adder 426 and ErrComp register 428) based on the respective EDB bits.")

Swagath does not disclose: "wherein the machine learning model is trained to predict a high precision result value of the algorithmic computation;"

However, Van Baalen discloses, in the same field of endeavor: "wherein the machine learning model is trained to predict a high precision result value of the algorithmic computation;" ([Abstract and 0017]: neural networks trained to perform various functions on a target computing device are generated by a computer learning environment (CLE) that trains a neural network using a training data set. "Various embodiments include methods and neural network computing devices implementing the methods for methods for method for generating an approximation neural network correcting for errors due to approximation operations." and "A CLE computing device may have a precision or 'bit-width' (e.g., capable of 32-bit floating point (FP) operations) as well as processing resources (e.g., energy, FLOPS, etc.) great than the precision or bit-width and processing resources of the target computing device on which the trained neural network will be implemented.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references in order to improve the performance of neural networks executing on low-precision values by reducing errors resulting from post-training approximations [Van Baalen: para. 0019].

Regarding claim 12: This claim recites a method performing the steps described for claim 3 and is therefore rejected under the same rationale. The additional element of claim 12 is addressed below: "A method comprising:" ([0047]: the present application may be a system, a method, and/or a computer program product at any possible technical detail level of integration.)

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Swagath in view of Protter et al. (US 20210004075 A1, hereinafter "Protter").

Regarding claim 5, Swagath teaches: "wherein training the machine learning model comprises training the machine learning model using a training dataset comprising:" ([0002]: "DNNs operate in two phases: (i) Training and (ii) Inference. Training is performed based on a labeled dataset, where the weights of the DNN are iteratively refined using the Stochastic Gradient Descent (SGD) algorithm".)

Swagath does not teach: "pairs of low precision values, and pairs of high precision values that correspond to the pairs of low precision values."

However, Protter teaches: "pairs of low precision values, and pairs of high precision values that correspond to the pairs of low precision values." ([0027]: "During a training phase, the dual-precision sensors system may train a model, e.g., using machine learning, to transform input data from the relatively low-precision motion signals to output data from the relatively high-precision motion signals. The model may be an N-layered neural network inputting pairs of corresponding data of the same scene from the relatively low-precision sensors and data from the relatively high-precision sensors".)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to train the machine learning model with a dataset containing pairs of low and high precision values in order to optimize/minimize the compensation error.

Regarding claim 14: This claim recites a method performing the steps described for claim 5 and is therefore rejected under the same rationale. The additional element of claim 14 is addressed below: "A method comprising:" ([0047]: the present application may be a system, a method, and/or a computer program product at any possible technical detail level of integration.)

Claims 7-9 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Swagath in view of Troia (US 20210065754 A1, hereinafter "Troia").

Regarding claim 7, Swagath teaches: "wherein the machine learning model" ([Abstract]: "A compensated deep neural network is provided").
Swagath does not disclose: "wherein the machine learning model includes one or more rectified linear unit (ReLU) activation functions that are associated with one or more nodes of the machine learning model."

However, Troia teaches: "wherein the machine learning model includes one or more rectified linear unit (ReLU) activation functions that are associated with one or more nodes of the machine learning model." ([0018]: "In a number of embodiments, the activation functions can be pre-defined activation functions and/or custom activation functions." ReLU is listed as a pre-defined activation function, among other common activation functions.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have a machine learning model which utilizes one or more activation functions (e.g. ReLU) so that the model can predict/calculate high precision results and compensation values.

Regarding claim 8, Troia teaches: "wherein the machine learning model includes multiple distinct activation functions that are associated with one or more nodes of the machine learning model." ([0081]: multiple distinct activation functions are selected for each layer of the machine learning model. "An activation function can be selected among pre-defined and custom activation functions based on the result of the previous AI operation and a different activation function can be selected for each layer." Regarding nodes, memory arrays may be individual nodes that store data and/or weights to be combined with input data, summed (operated on), and then passed to an activation function [para. 0032].)

Regarding claim 9, Troia teaches: "wherein the one or more nodes include gates that are configured to select one or more activation functions of the multiple distinct activation functions to combine into an approximation of a target algorithmic computation." ([0108 and FIG. 8]: the memory arrays can be considered nodes [para. 0032], and per FIG. 8, the sensing circuitry of a memory device includes two pass gates 8172-1 and 8172-2 coupled to the operation selection logic 8178. Recall that activation functions are located in registers on the memory device [para. 0017]. Regarding being configured to select one or more distinct activation functions to be combined, [0081]: "In some examples, the number of activation functions 6122-1, . . . , 6122-13 can be modified (e.g., changed) to a custom activation function. As discussed in connection with FIG. 3B, a custom activation function can be created based on a result of a previous AI operation. In some examples, a custom activation function can be based on a result of a previous AI operation and one or more of the number of activation functions 6122-1, . . . , 6122-13. An activation function can be selected among pre-defined and custom activation functions based on the result of the previous AI operation and a different activation function can be selected for each layer." Activation functions are drawn from the disclosed, non-exhaustive list 6122-1, . . . , 6122-13 [para. 0080]. All hidden/intermediate layers (along with their multiple distinct activation functions) are combined into the output layer/node of a neural network to obtain the output value or result. Regarding an approximation of a target algorithmic computation, [0098]: "Operations described herein can include operations associated with a processing in memory (PIM) capable device. PIM capable device operations can use bit vector based operations. As used herein, the term 'bit vector' is intended to mean a physically contiguous number of bits on a bit vector memory device (e.g., a PIM device) stored physically contiguous in a row of an array of memory cells. Thus, as used herein a 'bit vector operation' is intended to mean an operation that is performed on a bit vector that is a contiguous portion of virtual address space (e.g., used by a PIM device)". Also, the compute operations may include addition and subtraction [para. 0090]. The target algorithmic computation is the expected output of the machine learning model after the input values have been operated on.)

Regarding claims 16-18: These claims recite methods performing the steps described for claims 7-9 and are therefore rejected under the same rationale. The additional element of claims 16-18 is addressed below: "A method comprising:" ([0047]: the present application may be a system, a method, and/or a computer program product at any possible technical detail level of integration.)

Response to Arguments

Applicant's arguments filed 11/26/2025, along with the claim amendments, have been fully considered, but the arguments regarding the 101 and 103 rejections are not persuasive.

Argument regarding the 101 rejection: Applicant argues that the claimed invention improves accuracy and stability of the internal operation of the microprocessor in an unconventional way, and that the claimed invention is therefore integrated into a practical application and patent eligible.

Response: Examiner respectfully disagrees. The claims as recited do not include any limitation that specifically discloses an internal operation of a microprocessor, or how that operation improves accuracy or stability of the internal operation in an unconventional manner. Rather, the claim limitations as recited disclose determining a compensation for roundoff error, which is a mental process, because a person can mentally, with the aid of pen and paper, calculate the roundoff error in a calculation and determine the compensation needed for it.
The claim recites performing the operation using only a generic machine learning model, without any specific or technical detail, and without any unconventional manner or technique being performed in the claim to achieve the claimed improvement. In fact, the claim does not disclose any detail or steps as to how this unconventional improvement is performed; it instead makes a generic allegation that the claimed invention performs an unconventional improvement. Therefore, the claimed improvement as recited in the instant independent claim is nothing more than an abstract idea. Similarly, applicant's argument that all the claims are eligible subject matter is not persuasive.

Argument regarding the 103 rejection: Applicant argues that Swagath fails to teach the claimed invention; specifically, that the compensation in Swagath is performed inside the DNN, and that hidden internal compensation is not a prediction output from the DNN. Claim 1 in its present form instead recites "based on a prediction from a machine learning model a compensation", which Swagath allegedly lacks. Additionally, applicant argues that the cited references fail to disclose the dependent claims because they fail to disclose the independent claims.

Response: Examiner respectfully disagrees. The claim limitation broadly states "determining based on prediction from a machine learning model .... a compensation" without specifically reciting that the compensation is based on a final prediction output of the model, or that it occurs during the prediction execution of the model. Under the broadest reasonable interpretation, the claim limitation can be interpreted to calculate the compensation during the execution of the model performing the prediction. Swagath specifically discloses that during DNN execution, the quantization error for the dot product calculation is estimated/predicted to generate a compensation instruction or value, which is provided as an output to another processing element for further processing (abstract; paras. 0014-0015, 0031-0033 and 0035). A compensated deep neural network (DNN) is provided that performs a dot product operation on quantized values and estimates/predicts a compensation of the quantized dot product to reduce calculation error. The dot product calculation and compensation are performed on a processing element including a MAC unit, and the output value along with the compensation instruction is provided to another processing element, which implies the compensation value estimated/predicted by the DNN is provided to an arithmetic logic unit of the microprocessor, as claimed. Accordingly, applicant's argument is not persuasive. Similarly, applicant's argument that the cited references fail to teach the limitations of the dependent claims is not persuasive.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDULLAH AL KAWSAR, whose telephone number is (571) 270-3169. The examiner can normally be reached M-F 7:30am-4:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Wiley, can be reached at (571) 272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127
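For readers skimming the dispute: the claims describe running an algorithmic computation at low precision and adding back a compensation value predicted by a model trained on corresponding low/high precision result pairs (claims 1-5). The toy sketch below illustrates that shape only. The step-quantizer, the one-parameter least-squares "model", and all names are our hypothetical stand-ins; this is not the applicant's design or Swagath's FPEC scheme.

```python
import random

def quantize(x, step=0.05):
    """Simulate a low-precision datatype by rounding to a fixed step (stand-in)."""
    return round(x / step) * step

def dot(xs, ys):
    return sum(a * b for a, b in zip(xs, ys))

def low_dot(xs, ys):
    """One or more steps of the computation performed on low-precision inputs."""
    return dot([quantize(a) for a in xs], [quantize(b) for b in ys])

random.seed(0)

# Claim 5's training data shape: low-precision results paired with the
# corresponding high-precision results for the same inputs.
pairs = []
for _ in range(500):
    xs = [random.uniform(-1, 1) for _ in range(4)]
    ys = [random.uniform(-1, 1) for _ in range(4)]
    pairs.append((low_dot(xs, ys), dot(xs, ys)))

# "Train" a one-parameter corrector high ≈ w * low by least squares
# (a stand-in for the claimed machine learning model).
num = sum(lo * hi for lo, hi in pairs)
den = sum(lo * lo for lo, _ in pairs)
w = num / den

def compensation(low_result):
    """Predicted compensation value: difference between the predicted
    high-precision result and the low-precision result (claims 1 and 3)."""
    return w * low_result - low_result

xs = [0.31, -0.77, 0.52, 0.11]
ys = [0.64, 0.29, -0.83, 0.40]
low = low_dot(xs, ys)
compensated = low + compensation(low)   # claim 4: combine to compensate
print(low, compensated, dot(xs, ys))
```

A real implementation would use a trained neural network per computation and datatype rather than a single scale factor; the point here is only the data flow from low-precision result, through a learned predictor, to a combined compensated result.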

Prosecution Timeline

Dec 14, 2021
Application Filed
Aug 22, 2025
Non-Final Rejection — §101, §102, §103
Nov 12, 2025
Applicant Interview (Telephonic)
Nov 12, 2025
Examiner Interview Summary
Nov 26, 2025
Response Filed
Mar 12, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572799
METHODS FOR RELIABLE OVER-THE-AIR COMPUTATION AND FEDERATED EDGE LEARNING
2y 5m to grant Granted Mar 10, 2026
Patent 12541568
Method, System, and Computer Program Product for Recurrent Neural Networks for Asynchronous Sequences
2y 5m to grant Granted Feb 03, 2026
Patent 12536434
Computing Method And Apparatus For Convolutional Neural Network Model
2y 5m to grant Granted Jan 27, 2026
Patent 11501195
SYSTEMS AND METHODS FOR QUANTUM PROCESSING OF DATA USING A SPARSE CODED DICTIONARY LEARNED FROM UNLABELED DATA AND SUPERVISED LEARNING USING ENCODED LABELED DATA ELEMENTS
2y 5m to grant Granted Nov 15, 2022
Patent 11455545
Computer-Implemented System And Method For Building Context Models In Real Time
2y 5m to grant Granted Sep 27, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+58.0%)
4y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 395 resolved cases by this examiner. Grant probability derived from career allow rate.
