Prosecution Insights
Last updated: April 19, 2026
Application No. 17/494,176

TRAINING A MACHINE LEARNING MODEL USING INCREMENTAL LEARNING WITHOUT FORGETTING

Status: Final Rejection (§103)
Filed: Oct 05, 2021
Examiner: MILLER, ALEXANDRIA JOSEPHINE
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Actimize Ltd.
OA Round: 4 (Final)
Grant Probability: 18% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 5m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 18% (5 granted / 27 resolved; -36.5% vs Tech Center average)
Interview Lift: +71.4% (strong; grant rate among resolved cases with interview vs without)
Typical Timeline: 4y 5m average prosecution; 40 applications currently pending
Career History: 67 total applications across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Tech Center average shown for comparison (estimate). Based on career data from 27 resolved cases.

Office Action

§103
DETAILED ACTION

Claims 1-3, 5-12, and 14-22 are presented for examination. This office action is in response to the submission of the application on 16-JANUARY-2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05-OCTOBER-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendment filed 16-JANUARY-2026 in response to the non-final office action mailed 16-JULY-2025 has been entered. Claims 1-3, 5-12, and 14-22 remain pending in the application. With regard to the previous office action's rejections under §103, the amendments to the claims necessitated a new consideration of the art. After this consideration, the examiner respectfully disagrees with the applicant's arguments that the art referenced in the previous office action does not teach the amended claim limitations. A new §103 rejection over the prior art has been provided.

Regarding the applicant's arguments: The applicant argues that the previously presented art does not teach shared parameters common to many tasks, task-specific parameters associated with the current task that are trained, and task-specific parameters not associated with the current task that are not trained. However, the examiner believes that Xie in view of Shen does teach each of these three parameter types. Regarding shared parameters: Shen teaches that in one embodiment, model parameters are shared among a plurality of computer systems, although only one computer system has the ability to modify model parameters at a time (Paragraph 62). Therefore, each model has shared parameters that are common to the plurality of training tasks for each model.
Regarding task-specific parameters associated with the current task that are trained: Shen teaches that a machine-learning model may be divided into multiple sets of independent parameters such that each set may be trained independently (Paragraph 60). Each set would be task-specific parameters for the current training iteration. Regarding task-specific parameters not associated with the current task that are not trained: Shen further teaches that, for a model, there may be two sets of model parameters, which may be trained independently of one another (Paragraph 81). The set of model parameters that are not trained would be the task-specific parameters not associated with the current task that are not trained, as the individual training of a model would be a task for the machine learning system. Furthermore, regarding the amended limitations of claims 1, 10, and 19: Shen discloses wherein the training samples used in the constraining are: not added to a training dataset for the current training iteration, and are used without the labels associated with said training samples: Shen teaches incremental learning that may retain knowledge during an initial training when presented with a new dataset, and that uses labeled and unlabeled data (Paragraph 103). Considering the presence of a new dataset for the current training iteration, the original dataset that formed the initial training is not added to the training dataset for the current training iteration as it is remembered through iterations, wherein it may be used without labels associated with said training samples as it may consist of unlabeled data. Shen discloses wherein in each iteration, the subset of shared parameters is trained, and task-specific parameters not associated with the current training task are not trained: Shen further teaches that, for a model, there may be two sets of model parameters, which may be trained independently of one another (Paragraph 81).
The set of model parameters that are not trained would be the task-specific parameters not associated with the current task that are not trained, as the individual training of a model would be a task for the machine learning system. Furthermore, the trained parameter set may be broadcast to act as the shared parameters that are trained, as they are distributed to other machine learning models.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 8-11, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xie et al. (Pub. No. WO 2020198501 A1, filed March 26, 2020, hereinafter Xie) in view of Shen et al. (Pub. No. US 20220180125 A1, filed December 22, 2020, hereinafter Shen).
Regarding claim 1: Claim 1 recites: A method for training a machine learning model using incremental learning without forgetting, the method comprising, using a computer processor: receiving a sequence of a plurality of training tasks, wherein each training task is associated with one or more training samples and corresponding labels respectively associated with the one or more training samples; training the machine learning model in a sequence of a plurality of sequential training iterations respectively associated with the sequence of a plurality of training tasks, wherein the model comprises a subset of shared model parameters that are common to the plurality of training tasks and a subset of task-specific model parameters for each of the plurality of training tasks, and wherein in each of the plurality of sequential training iterations the machine learning model is trained by: generating the task-specific parameters for the current training iteration, constraining the training of the task-specific parameters of the model for the current training task by one or more of the training samples associated with a previous training task in a previous iteration, wherein the training samples used in the constraining are: not added to a training dataset for the current training iteration, and are used without the labels associated with said training samples; and classifying the one or more samples associated with the current training task based on the machine learning model defined by combining the subset of shared parameters and the task-specific parameters generated for the current training iteration: and wherein in each iteration, the subset of shared parameters is trained, and task-specific parameters not associated with the current training task are not trained. 
Xie discloses a method for training a machine learning model using incremental learning without forgetting, the method comprising, using a computer processor: Xie teaches a machine learning model that implements fast incremental learning without forgetting past knowledge (Paragraph 37). Xie discloses receiving a sequence of a plurality of training tasks, wherein each training task is associated with one or more training samples and corresponding labels respectively associated with the one or more training samples: Xie teaches that in some implementations of meta learning, meta learning uses a set of tasks containing a training set and a testing set, wherein the tasks are used in supervised image classification (Paragraph 24). Supervised classification would include corresponding labels associated with training samples. Xie discloses constraining the training of the task-specific parameters of the model for the current training task by one or more of the training samples associated with a previous training task in a previous training iteration: Xie teaches that part of a conventional approach to meta learning includes selectively storing a subset of training data to represent the learning classes, such that future models may use the stored subset (Paragraph 62). Although Xie does not use this approach, it still teaches it as part of a conventional method. The stored training data would be the one or more of the training samples associated with a previous training task in a previous training iteration with the new model being constrained in the task-specific parameters by those past representative samples. 
Xie discloses classifying the one or more samples associated with the current training task based on the machine learning model defined by combining the subset of shared parameters and the task-specific parameters generated for the current training iteration: Xie teaches that the above approach of combining the subset of shared parameters and the task-specific parameters generated for the current training iteration may be used as part of a classification problem (Paragraph 62). Shen discloses training the machine learning model in a sequence of a plurality of sequential training iterations respectively associated with the sequence of a plurality of training tasks: Shen, in the same field of endeavor of reinforcement learning, teaches that the training of multiple neural networks may be performed simultaneously or sequentially (Paragraph 175). This would comprise training the machine learning model in a sequence of a plurality of sequential training iterations respectively associated with the sequence of a plurality of training tasks. Xie, Shen, and the present application are all analogous art because they are in the same field of endeavor of reinforcement learning. Shen discloses wherein the model comprises a subset of shared model parameters that are common to the plurality of training tasks and a subset of task-specific model parameters for each of the plurality of training tasks, and wherein in each of the plurality of sequential training iterations the machine learning model is trained by: generating the task-specific parameters for the current training iteration: Shen teaches that in one embodiment, model parameters are shared among a plurality of computer systems, although only one computer system has the ability to modify model parameters at a time (Paragraph 62). Therefore, each model has shared parameters that are common to the plurality of training tasks for each model, wherein there are sequential training iterations as the models update one at a time.
Furthermore, Shen teaches that a machine-learning model may be divided into multiple sets of independent parameters such that each set may be trained independently (Paragraph 60). Each set would be task-specific parameters for the current training iteration. Shen discloses wherein the training samples used in the constraining are: not added to a training dataset for the current training iteration, and are used without the labels associated with said training samples: Shen teaches incremental learning that may retain knowledge during an initial training when presented with a new dataset, and that uses labeled and unlabeled data (Paragraph 103). Considering the presence of a new dataset for the current training iteration, the original dataset that formed the initial training is not added to the training dataset for the current training iteration as it is remembered through iterations, wherein it may be used without labels associated with said training samples as it may consist of unlabeled data. Shen discloses wherein in each iteration, the subset of shared parameters is trained, and task-specific parameters not associated with the current training task are not trained: Shen further teaches that, for a model, there may be two sets of model parameters, which may be trained independently of one another (Paragraph 81). The set of model parameters that are not trained would be the task-specific parameters not associated with the current task that are not trained, as the individual training of a model would be a task for the machine learning system. Furthermore, the trained parameter set may be broadcast to act as the shared parameters that are trained, as they are distributed to other machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie and the teachings of Shen.
This would have provided the advantage of retaining some information privately while still improving the model (Shen, Paragraph 59).

Regarding claim 2, which is dependent upon claim 1: Claim 2 recites: The method of claim 1, wherein the model is constrained to reduce variations of one or more layer outputs of the model caused by changes in the subset of shared parameters and the propagator resulting from the current training iteration by using the one or more training samples associated with the previous training task. Xie in view of Shen discloses the method of claim 1 upon which claim 2 depends. Shen discloses wherein the subset of task-specific model parameters for each of the plurality of training tasks are generated by a propagator, wherein the propagator is modified by prior task training data, wherein the generating of the task-specific parameters for the current training iteration is performed by applying the propagator to the one or more training samples associated with the current training task: The model updates are from training data that is exclusive to the model being trained at the time (Paragraph 62). The parameters generated from this would be the subset of task-specific model parameters that are generated by a propagator for each of the plurality of training tasks, which are later updated into the shared parameters. Furthermore, Shen teaches that the propagator is modified by prior training data, as there is a machine learning client that trains a model (which would include generation of the parameters) and generates updated parameters from the training data, wherein the updated parameters are used to update other machine learning clients (Paragraph 65).
The training data here would be prior training data, as prior training data is taken to refer to previous training data used in past iterations or training of the model, and the machine learning client would be the propagator, as it is a component responsible for the generation of task-specific parameters, wherein the task is the current machine learning task. Shen teaches that a machine-learning model may be divided into multiple sets of independent parameters such that each set may be trained independently (Paragraph 60). Each set would be task-specific parameters for the current training iteration, wherein the system that divides the parameters would be the propagator. Furthermore, Xie discloses wherein the model is constrained to reduce variations of one or more layer outputs of the model caused by changes in the subset of shared parameters and the propagator resulting from the current training iteration by using the one or more training samples associated with the previous training task: Xie teaches finding the value that minimizes the expectation of the combined loss over the task space, which is performed during the meta training phase (Paragraphs 83-84). As meta machine learning uses the outcomes of previous models, Xie would use the one or more training samples associated with the previous training task in order to reduce the combined loss over the task space, wherein the combined loss would be an example of one or more layer outputs of the model caused by changes in the subset of shared parameters and the propagator resulting from the current training iteration, as it is a product of the machine learning models. Furthermore, Shen has previously disclosed a propagator, wherein its combination with Xie would be obvious for the reasons of the improvement presented below.
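Claim 2's constraint, reducing the variation of layer outputs caused by the current iteration's parameter changes as measured on previous-task samples, resembles a feature-distillation penalty. Below is a hedged toy sketch; the one-layer "model" and the squared-difference penalty are illustrative assumptions, not the claimed or cited implementation.

```python
import numpy as np

def layer_output(params, x):
    """A toy one-layer 'model' whose output we want to keep stable."""
    return np.tanh(x @ params)

def drift_penalty(old_params, new_params, prev_task_samples):
    """Mean squared change in layer outputs on previous-task samples.

    Only the samples themselves are used; their labels are not needed,
    matching the label-free constraint recited in the claims."""
    old = layer_output(old_params, prev_task_samples)
    new = layer_output(new_params, prev_task_samples)
    return float(np.mean((old - new) ** 2))

rng = np.random.default_rng(1)
x_prev = rng.normal(size=(16, 4))            # unlabeled previous-task samples
theta_old = rng.normal(size=(4, 2))
theta_new = theta_old + 0.05 * rng.normal(size=(4, 2))

# Identical parameters give zero penalty; a perturbed update gives > 0,
# so minimizing this term discourages drift of the representation.
penalty = drift_penalty(theta_old, theta_new, x_prev)
```

In practice such a penalty would be added to the current task's training loss, so the optimizer trades new-task fit against stability on old tasks.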
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie and the teachings of Shen. This would have provided the advantage of retaining some information privately while still improving the model (Shen, paragraph 59). Regarding claim 5, which is dependent upon claim 1: Claim 5 recites: The method of claim 1, wherein the subset of shared model parameters are modified when training all of the plurality of training tasks and the subset of task-specific parameters are modified only when training the specific associated task but not the other non-specifically associated tasks. Xie in view of Shen discloses the method of claim 1 upon which claim 5 depends. Furthermore, Xie teaches: Paragraph 24, excerpt: “…In the meta training phase, a meta-learner is trained by learning from a number of tasks from an auxiliary dataset to capture transferable knowledge across the tasks…” Paragraph 87, excerpt: “…Unlike conventional optimizer learning approaches which learn for the optimization conditions that can be used for weights update in the meta-testing phase, the module generator according to various embodiments of the present invention directly learns to output the weights of the category mapping discriminator and therefore no further fine-tuning is required in the meta testing phase…” This discloses wherein the subset of shared model parameters are modified when training all of the plurality of training tasks and the subset of task-specific parameters are modified only when training the specific associated task but not the other non-specifically associated tasks as the meta-learner’s transferable knowledge would be analogous to the modified shared model parameters across the plurality of training tasks, and the category mapping discriminator would be the specific associated task. 
Regarding claim 8, which is dependent upon claim 2: Claim 8 recites: The method of claim 2 wherein the one or more of the training samples associated with the previous training task are generated based on an aggregated distribution of a plurality of the training samples to which the propagator was applied in the previous training iteration. Xie in view of Shen discloses the method of claim 2 upon which claim 8 depends. Furthermore, Xie teaches: Paragraph 62, excerpt: “...For example, one approach stores a subset of previous training samples which can best represent the corresponding category and trains a class-incremental learner based on nearest neighbor classification…” This discloses wherein the one or more of the training samples associated with the previous training task are generated based on an aggregated distribution of a plurality of the training samples to which the propagator was applied in the previous training iteration, as the previous training samples that best represent the corresponding category would be roughly analogous to the aggregated distribution of a plurality of training samples. Regarding claim 9, which is dependent upon claim 1: Xie in view of Shen discloses the method of claim 1 upon which claim 9 depends. Furthermore, Xie teaches: Paragraph 35: “In various embodiments, the machine learning model comprises a neural network. For example, the neural network may be a convolutional neural network.” This discloses wherein the classification model is a neural network (NN) selected from the group consisting of: convolutional neural network (CNN) […] Claims 10-11, 14, and 17-18 recite a system that parallels the method of claims 1-2, 5, and 8-9 respectively. Therefore, the analysis discussed above with respect to claims 1-2, 5, and 8-9 also applies to claims 10-11, 14, and 17-18 respectively.
Accordingly, claims 10-11, 14, and 17-18 are rejected based on substantially the same rationale as set forth above with respect to claims 1-2, 5, and 8-9 respectively. Claims 19 and 20 recite a non-transitory computer readable medium that parallels the method of claims 1 and 8 respectively. Therefore, the analysis discussed above with respect to claims 1 and 8 also applies to claims 19 and 20 respectively. Accordingly, claims 19 and 20 are rejected based on substantially the same rationale as set forth above with respect to claims 1 and 8 respectively.

Regarding claim 22, which depends upon claim 2: Claim 22 recites: The method of claim 2, comprising: applying the propagator to a test instance to obtain the task specific parameters of the test instance; and classifying the test instance using the model defined by the subset of shared model parameters and the task specific parameters of the test instance. Xie in view of Shen teaches the method of claim 2 upon which claim 22 depends. Furthermore, regarding the limitations of claim 22: Xie teaches that in conventional approaches of meta learning there is a meta testing phase in which training images and testing images are projected into the learned embedding space and classification is implemented (Paragraph 4). The testing images being projected into the learned space would be applying the propagator to a test instance to obtain the task specific parameters, and the implementation of classification would be classifying the test instance using the model defined by the subset of shared model parameters and the task specific parameters of the test instance, as seen in the conventional methods described by Xie.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Xie in view of Shen, further in view of Guo et al. (Pub. No. 111178427 A, published May 19, 2020, hereinafter Guo).
Regarding claim 3, which depends upon claim 2: Claim 3 recites: The method of claim 2, wherein the propagator for the current training task is generated based on the one or more of the training samples associated with the previous training task but not the corresponding labels respectively associated therewith. Xie in view of Shen teaches the method of claim 2 upon which claim 3 depends. However, Xie in view of Shen does not teach wherein the propagator for the current training task is generated based on the one or more of the training samples associated with the previous training task: Guo, in the same field of endeavor of reinforcement learning, recites: “S2, in the first class increment learning session, taking the base task as the training data set of the first task, learning to obtain the base task network model” This teaches a base set of training samples that are used in each model, which would be the one or more training samples associated with the previous training task. Xie in view of Shen and Guo are analogous art because they are in the same field of endeavor of reinforcement learning. Furthermore, Xie in view of Shen does not teach but not the corresponding labels respectively associated therewith. Guo recites: “This is a using a deep neural network and learning feature representation and unsupervised method optimizing cluster allocation” Guo here teaches unsupervised learning, which is a form of learning that does not use labels. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie in view of Shen and the teachings of Guo.
This would have provided the advantage of learning new knowledge while retaining old knowledge (Guo, “the invention is concerned with constructing an effective characterization space for small sample type increment learning, can well balance the old knowledge of reservation and adaptation of the new knowledge”). Claim 12 recites a system that parallels the method of claim 3. Therefore, the analysis discussed above with respect to claim 3 also applies to claim 12. Accordingly, claim 12 is rejected based on substantially the same rationale as set forth above with respect to claim 3.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Xie in view of Shen, further in view of Li et al. (Pub. No. CN 111931807 A, published November 13, 2020, hereinafter Li). Regarding claim 6, which is dependent on claim 1: Xie in view of Shen discloses the method of claim 1 upon which claim 6 depends. Xie in view of Shen does not disclose wherein the task-specific parameters for the current training iteration are generated based on a […] associated with […] associated with the previous training task. However, Li, in the same field of endeavor of incremental learning, teaches: Contents of the invention, excerpt: “…The combined characteristic space of the invention is composed of base task knowledge space and lifelong learning knowledge space; it can adaptively encode new task knowledge and effectively keep the characteristic expression of the base task...” The encoded new task knowledge would be roughly analogous to the compressed encoding of one or more training samples associated with the current task; the lifelong learning knowledge space would be roughly analogous to the non-compressed version of the one or more training samples associated with the previous training task. Xie in view of Shen and Li are analogous art because they are in the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie in view of Shen, which disclosed the method of claim 1, and the teachings of Li, which disclosed wherein the task-specific parameters for the current training iteration are generated based on a […] associated with […] associated with the previous training task. This would have provided the advantage to Xie in view of Shen of reducing inaccurate training through limiting the use of early labels. Claim 15 recites a system that parallels the method of claim 6. Therefore, the analysis discussed above with respect to claim 6 also applies to claim 15. Accordingly, claim 15 is rejected based on substantially the same rationale as set forth above with respect to claim 6.

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Xie in view of Shen, further in view of Li, further in view of Guo. Regarding claim 7, which is dependent on claim 6: Xie in view of Shen further in view of Li discloses the method of claim 6 upon which claim 7 depends. Xie in view of Shen further in view of Li does not disclose wherein the compressed encoding is generated by an encoder trained by adding a mean square error reconstruction loss of the one or more training samples associated with the previous training task to a penalized form of a Wasserstein distance between the distribution of the compressed encoding and a multivariate normal distribution of an embedded low dimensional space.
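The loss claim 7 recites, a mean square error reconstruction term plus a penalized (sliced) form of the Wasserstein distance between the encoding distribution and a multivariate normal, can be sketched as follows in the style of sliced-Wasserstein autoencoders. The projection count, weighting `lam`, and normal prior are illustrative assumptions, not the claim's or Guo's exact formulation.

```python
import numpy as np

def sliced_wasserstein(z, prior, n_proj=50, rng=None):
    """Approximate a Wasserstein-2-type distance between two equally
    sized point sets by averaging 1-D distances over random projections."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=z.shape[1])
        v /= np.linalg.norm(v)                  # random unit direction
        a, b = np.sort(z @ v), np.sort(prior @ v)
        total += np.mean((a - b) ** 2)          # 1-D transport cost via sorting
    return total / n_proj

def encoder_loss(x, x_recon, z, lam=1.0):
    """MSE reconstruction loss plus a penalized Wasserstein-type term
    pulling the encodings toward a multivariate normal prior."""
    prior = np.random.default_rng(0).normal(size=z.shape)  # N(0, I) samples
    mse = np.mean((x - x_recon) ** 2)
    return mse + lam * sliced_wasserstein(z, prior)
```

With a perfect reconstruction the loss reduces to the prior-matching penalty alone; worsening the reconstruction raises it by exactly the added mean square error.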
However, Guo, in the same field of endeavor of classification, recites: “Compared with the existing technology, the invention uses the Sliced-Wasserstein distance-based self-encoded network frame, and based on this introduces a mean square error loss, L1 loss, soft clustering loss distribution, and KL loss is jointly optimized and clustered in the iterative training process of network, and optimizing the network self-encoding module and a clustering module” Guo further recites: “based on Sliced-Wasserstein distance from the encoding network (SWAE) module. the automatic coding network structure formed by universal encoder f (x θ) and the decoder g (z θg), as shown in FIG. 2. new image low-dimension feature vector z by encoder of the network” Guo teaches computing together a Sliced-Wasserstein distance and a mean square error loss in order to generate an encoding, wherein the Sliced-Wasserstein distance would be a penalized form of a Wasserstein distance, wherein the Wasserstein distance is from an embedded low-dimension space. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie in view of Shen further in view of Li and the teachings of Guo. This would have provided the advantage of learning new knowledge while retaining old knowledge (Guo, “the invention is concerned with constructing an effective characterization space for small sample type increment learning, can well balance the old knowledge of reservation and adaptation of the new knowledge”). Claim 16 recites a system that parallels the method of claim 7. Therefore, the analysis discussed above with respect to claim 7 also applies to claim 16. Accordingly, claim 16 is rejected based on substantially the same rationale as set forth above with respect to claim 7. Claim 21 is rejected under 35 U.S.C.
103 as being unpatentable over Xie in view of Shen, further in view of Subramanian et al. (Pub. No. US 20160203485 A1, filed January 8, 2015, hereinafter Subramanian). Regarding claim 21, which depends upon claim 1: Xie in view of Shen teaches claim 1 upon which claim 21 depends. However, Xie in view of Shen does not fully teach the limitations of claim 21: Subramanian, in the same field of endeavor as the present application of using reinforcement learning to address financial risk, teaches that a risk score is generated for a pending transaction based on historical transaction data, wherein an authentication request is provided based on the risk score (Paragraph 5). The transaction would be the received transaction data and the authentication request would be the alert based on the risk score. Xie in view of Shen does teach the machine learning model trained by incremental learning without forgetting. Subramanian and the present application are analogous art because they are in the same field of endeavor of using reinforcement learning to address financial risk. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Xie in view of Shen and the teachings of Subramanian. This would have provided the advantage of verifying valid account information (Subramanian, Paragraph 2).

Conclusion

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER whose telephone number is (703)756-5684. The examiner can normally be reached Monday-Thursday: 7:30 - 5:00 pm, every other Friday 7:30 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached on (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.J.M./Examiner, Art Unit 2175 /HAIMEI JIANG/Primary Examiner, Art Unit 2142

Prosecution Timeline

Oct 05, 2021: Application Filed
Oct 28, 2024: Non-Final Rejection — §103
Jan 09, 2025: Interview Requested
Jan 22, 2025: Examiner Interview Summary
Mar 03, 2025: Response Filed
Apr 08, 2025: Final Rejection — §103
Jun 10, 2025: Response after Non-Final Action
Jun 30, 2025: Request for Continued Examination
Jul 03, 2025: Response after Non-Final Action
Jul 14, 2025: Non-Final Rejection — §103
Jan 16, 2026: Response Filed
Mar 02, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566943: METHOD AND APPARATUS WITH NEURAL NETWORK QUANTIZATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12481890: SYSTEMS AND METHODS FOR APPLYING SEMI-DISCRETE CALCULUS TO META MACHINE LEARNING (granted Nov 25, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 18%
With Interview: 90% (+71.4% lift)
Median Time to Grant: 4y 5m
PTA Risk: High

Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
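As a quick sanity check, the headline allow rate follows directly from the grant counts shown above. The 90% with-interview figure is labeled as the outcome among resolved cases with an interview, and its exact derivation is not shown in the data, so only the allow rate is recomputed here.

```python
# 5 grants out of 27 resolved cases gives the career allow rate
# reported above (~18.5%, shown rounded down to 18% in the cards).
granted, resolved = 5, 27
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # → 18.5%
```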
