Prosecution Insights
Last updated: April 19, 2026
Application No. 18/258,728

MACHINE LEARNING MODEL RENEWAL

Non-Final OA (§101, §102, §103)
Filed: Jun 21, 2023
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (342 granted / 432 resolved; +24.2% vs TC avg)
Interview Lift: +26.6%, strong (resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 37 currently pending
Career History: 469 total applications across all art units
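The headline figures above are simple ratios over the examiner's disposition counts. A minimal sketch of how they might be derived, assuming the Tech Center average implied by the +24.2% delta (variable names and the TC baseline are illustrative, not the vendor's actual methodology):

```python
# Sketch: deriving the headline examiner stats from raw counts.
granted = 342          # applications granted by this examiner
resolved = 432         # resolved dispositions (grants + abandonments)
tc_avg_allow = 0.55    # assumed TC average implied by the +24.2% delta

career_allow_rate = granted / resolved        # ~0.792, displayed as 79%
delta_vs_tc = career_allow_rate - tc_avg_allow

print(f"Career allow rate: {career_allow_rate:.1%}")  # 79.2%
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")         # +24.2%
```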

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 432 resolved cases.

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 72-91 are presented in the case.

Information Disclosure Statement

The information disclosure statement submitted on 07/10/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Priority

Acknowledgment is made of applicant's claim for priority based on PCT application PCT/EP2021/052092, filed on 01/29/2021.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 77-78 and 90 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claims 77 and 90 have the following abstract idea analysis.

Step 1: The claims are directed to "a method" and accordingly fall within a statutory category.

Step 2A, Prong 1: The claims recite the abstract idea limitations of "determining a mismatch degree between said network related input data and said first training data" and "deciding, if said mismatch degree is larger than a predetermined value, to retrain". These limitations are mental processes, i.e., concepts performed in the human mind, including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)): comparing collected information to a predefined threshold is an act of evaluating information that can be practically performed in the human mind.
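For context, the "determining" and "deciding" limitations quoted in the rejection amount to a threshold check on a data-mismatch statistic. A minimal sketch of that style of check, assuming a simple standardized mean distance as the mismatch measure (the function names, the statistic, and the threshold are illustrative assumptions, not the application's disclosed method):

```python
import numpy as np

def mismatch_degree(input_data: np.ndarray, training_data: np.ndarray) -> float:
    """Illustrative mismatch score: distance between feature means,
    scaled by the training data's spread. One plausible statistic a
    monitoring model might output, not the claimed method."""
    mu_in, mu_tr = input_data.mean(axis=0), training_data.mean(axis=0)
    spread = training_data.std(axis=0) + 1e-9   # avoid division by zero
    return float(np.abs((mu_in - mu_tr) / spread).mean())

def decide_retrain(degree: float, predetermined_value: float = 1.0) -> bool:
    # The comparison the Office Action characterizes as a mental process:
    # retrain only if the mismatch exceeds the predetermined value
    # (the default here is an arbitrary illustrative choice).
    return degree > predetermined_value
```

On input drawn from the same distribution as the training set the degree stays near zero; a shifted input distribution drives it past the predetermined value and triggers retraining.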
The specification also provides example operations performing such threshold checks for retraining. See USPGPUB ¶112. Other limitations of the claims, such as "training a first machine learning inference model", "transmitting said first machine learning inference model", and "obtaining network related input data", are too generic or high level to be listed as a judicial exception given the available descriptions and MPEP comparisons.

Step 2A, Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. Merely invoking a "machine learning inference model", "network data", "a processor", and/or "memory" does not yield eligibility. Claims 77 and 90 remain in line with mental concepts and are not specific to a practical application. The additional elements, such as processors and instructions, do not include specialized hardware. See MPEP § 2106.05(f). Claims 77 and 90 do not limit the invention to a particular field, but even doing so may not be sufficient to overcome the abstract idea rejection: merely applying a model to a field or data, without an advancement in that field or new hardware, is ineligible. MPEP § 2106.05(h).

Step 2B: The claims do not contain significantly more than their judicial exceptions. The processors, memory, and other hardware are in their standard forms in the field. These additional elements are well-understood, routine, and conventional activity; see MPEP § 2106.05(d)(II). The claims lack any particular "how" or algorithm for solving a problem in a field in a novel way. The claims would require more specificity, reciting processes incapable of being performed by simple mathematics or mental processes, or reciting more substantial structure than conventional devices, such as non-textbook implementations.

Regarding claim 78, it merely narrows the previously recited abstract idea limitations with more abstract concepts and/or routine fundamental processes.
For the reasons described above with respect to claim 77, this judicial exception is not meaningfully integrated into a practical application, nor significantly more than the abstract idea. The Step 1 and Step 2A, Prong 1 and 2, analyses remain the same as the independent claim analysis above. See the specification for further practical application concepts, as none are seen in claim 77. With respect to Step 2B, these claims disclose similar limitations to those described above and do not provide anything significantly more than mathematical or mental concepts.

Claim 78 recites the additional elements of "training, if said mismatch degree is larger than a predetermined value, a second machine learning monitoring model based on said second training data set." These elements are more abstract concepts, generic applications to a field of use, or well-understood, routine, conventional activity (see MPEP § 2106.05(d)) and cannot simply be appended to qualify as significantly more or as a practical application. What type of application, or what structure of components beyond generic machine learning, is claimed remains unknown. Therefore claim 78 also recites abstract ideas that do not integrate into a practical application or amount to significantly more than the judicial exception, and is rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 72-76, 79 and 86-89 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by WENCHEL et al. (US 20210174258 A1, hereinafter Wenchel).

As to independent claim 72, Wenchel teaches a method performed by a core network function for a communication system, the method comprising [cloud network ¶76]: training a first machine learning inference model using a first training data set [training data for model (first model) ¶20 "data used to train ML models (“training data”) may be sampled, possibly via a noisy process, from some underlying and potentially opaque true data distribution or source."]; transmitting said first machine learning inference model [deploys model ¶67; inference ¶59]; retraining, upon unsuitability of said first machine learning inference model for a current network context, said first machine learning inference model using a second training data set [retraining triggered by scores or an accuracy metric below a threshold (unsuitable), with resampling of training data (second training data) ¶68 "modal alert based on a function of (or rule based on) multiple risk scores (e.g., if (score1+score2=>threshold1) or (score1>threshold2 and score3<threshold3)). The system may also facilitate more complicated automated and semi-automated actions. For example, regret bounds (from the transfer learning literature) can be used to trigger resampling of new data, or to trigger adaptation of the source training data, followed by retraining of a ML model or models"]; and transmitting said re-trained first machine learning inference model [automatic redeploy ¶78 "trigger an automated/automatic redeployment process 722"].

As to dependent claim 73, the rejection of claim 72 is incorporated. Wenchel further teaches transmitting a first data collection request, receiving a first data collection response including first collected data, and generating said first training data set based on said first collected data. [Wenchel's sampling is a request for training data for a model ¶20 "data used to train ML models (“training data”) may be sampled, possibly via a noisy process, from some underlying and potentially opaque true data distribution or source."]

As to dependent claim 74, the rejection of claim 72 is incorporated. Wenchel further teaches training a first machine learning monitoring model based on said first training data set. [Wenchel training a model ¶76 "training pipeline 705 is a process that selects, trains, creates, or generates AI models."], [Wenchel's model is for monitoring and outputs explanations ¶44 "model outputs 114 can include explanations of the ML model inferences. The explanations can be based on the metrics at any phase of the system (pre, during, and/or post model inference)"]

As to dependent claim 75, the rejection of claim 74 is incorporated. Wenchel further teaches transmitting network related input data [Wenchel's cloud network receives input data ¶77 "an API in the multi-stage system cloud network 702 that receives new feature values (e.g. including input data, such as input data 608"], and feeding said first machine learning monitoring model with said network related input data. [Wenchel updates the model accordingly ¶77 "The model endpoint 710 can update one or more of at least the following components of FIG. 6: explainer 610, model 601A, reference data 606"]

As to dependent claim 76, the rejection of claim 75 is incorporated. Wenchel further teaches transmitting a second data collection request, and receiving a second data collection response including second collected data as said network related input data. [Wenchel's resampling requests and collects data ¶20 "re-sample at least a portion or subset of the data"]

As to dependent claim 79, the rejection of claim 75 is incorporated. Wenchel further teaches that said first machine learning monitoring model is included by a first machine learning model message. [Wenchel sends models to the cloud ¶76 "a user model, model metadata associated with the user model, and one or more explainers are sent to the multi-stage system cloud network 702."]

As to dependent claim 86, the rejection of claim 75 is incorporated. Wenchel further teaches an apparatus comprising at least one processor, at least one memory including computer program code, and at least one interface configured for communication with at least another apparatus, the at least one processor, with the at least one memory and the computer program code, being configured to cause the apparatus to perform the method of claim 72. [Wenchel processor, interfaces, memory and instructions ¶81-82]

As to independent claim 87, Wenchel teaches a method performed by a user equipment, the method comprising [user cloud network ¶76]: receiving a first machine learning inference model [receives models in the cloud ¶76 "a user model, model metadata associated with the user model, and one or more explainers are sent to the multi-stage system cloud network 702."; inference ¶59]; obtaining network related input data [generates data from a stream ¶4; ¶77 "receives new feature values (e.g. including input data, such as input data 608 in FIG. 6) from the multi-stage system interface 706"]; feeding said first machine learning inference model with said network related input data [training data for model (feeds model) ¶20 "data used to train ML models (“training data”) may be sampled, possibly via a noisy process, from some underlying and potentially opaque true data distribution or source."]; requesting, upon unsuitability of said first machine learning inference model for a change to a network context, a second machine learning inference model [scores or accuracy metric below a threshold (unsuitable), with resampling of training data for a new model ¶68 "modal alert based on a function of (or rule based on) multiple risk scores (e.g., if (score1+score2=>threshold1) or (score1>threshold2 and score3<threshold3)). The system may also facilitate more complicated automated and semi-automated actions. For example, regret bounds (from the transfer learning literature) can be used to trigger resampling of new data, or to trigger adaptation of the source training data, followed by retraining of a ML model or models"]; receiving the second machine learning inference model [receives and updates models ¶77]; and replacing said first machine learning inference model with said second machine learning inference model. [automatic redeployment ¶78 "automated/automatic redeployment process 722 that causes the user(s)' newly-retrained model to the model server 704."]

As to dependent claim 88, the rejection of claim 87 is incorporated. Wenchel further teaches that, in relation to said obtaining, the method further comprises receiving said network related input data. [Wenchel generates data from a stream ¶4; ¶77 "receives new feature values (e.g. including input data, such as input data 608 in FIG. 6) from the multi-stage system interface 706"]

As to dependent claim 89, the rejection of claim 87 is incorporated. Wenchel further teaches that a first machine learning monitoring model is included by a first machine learning model message, and that, in relation to said obtaining, the method further comprises collecting said network related input data. [Wenchel generates data from a stream ¶4; ¶77 "receives new feature values (e.g. including input data, such as input data 608 in FIG. 6) from the multi-stage system interface 706"]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 77-78 and 90-91 are rejected under 35 U.S.C. 103 as being unpatentable over Wenchel in view of WALTERS et al. (US 20200012900 A1, hereinafter Walters).

As to dependent claim 77, the rejection of claim 75 is incorporated.
Wenchel does not specifically teach determining a mismatch degree between said network related input data and said first training data set based on a result of said first machine learning monitoring model fed with said network related input data, and deciding, if said mismatch degree is larger than a predetermined value, to retrain said first machine learning inference model based on said network related input data as said second training data set.

However, Walters teaches determining a mismatch degree between said network related input data and said first training data set based on a result of said first machine learning monitoring model fed with said network related input data [Walters detects drift based on similarity or difference (mismatch) ¶10, ¶98-99 "generate a difference matrix using a covariance matrix of the normalized reference dataset and a covariance matrix of the synthetic dataset"], and deciding, if said mismatch degree is larger than a predetermined value, to retrain said first machine learning inference model based on said network related input data as said second training data set. [Walters threshold ¶184 "detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model training disclosed by Wenchel by incorporating the determining and deciding limitations disclosed by Walters, as set forth above, because both techniques address the same field of machine learning, and incorporating Walters into Wenchel helps prevent models from becoming obsolete through improved detection of data drift. [Walters ¶4]

As to dependent claim 78, the rejection of claim 77 is incorporated. Wenchel and Walters further teach training, if said mismatch degree is larger than a predetermined value, a second machine learning monitoring model based on said second training data set. [Walters corrects the model by training based on detected drift using other data ¶184-185 "model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data"]

As to dependent claim 90, the rejection of claim 89 is incorporated. Wenchel further teaches that said machine learning model retraining request message includes a machine learning model retraining request and at least a portion of said network related input data. [Wenchel resamples and retrains accordingly ¶45 "remediation actions can include, but are not limited to: resampling of new input data 108, adaptation of training data, resampling of data from the data stream 102, retraining of the ML model"]

Wenchel does not specifically teach feeding said first machine learning monitoring model with said network related input data, determining a mismatch degree between said network related input data and a first training data set utilized for training of said first machine learning inference model and said first machine learning monitoring model based on a result of said first machine learning monitoring model fed with said network related input data, and transmitting, if said mismatch degree is larger than a predetermined value, a machine learning model retraining request message.

However, Walters teaches feeding said first machine learning monitoring model with said network related input data, and determining a mismatch degree between said network related input data and a first training data set utilized for training of said first machine learning inference model and said first machine learning monitoring model based on a result of said first machine learning monitoring model fed with said network related input data [Walters detects drift based on similarity or difference (mismatch) ¶10, ¶98-99 "generate a difference matrix using a covariance matrix of the normalized reference dataset and a covariance matrix of the synthetic dataset"], and transmitting, if said mismatch degree is larger than a predetermined value, a machine learning model retraining request message. [Walters threshold ¶184 "detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference"], [Walters instructs correction of the model accordingly ¶185]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model training disclosed by Wenchel by incorporating the feeding, determining, and transmitting limitations disclosed by Walters, as set forth above, because both techniques address the same field of machine learning, and incorporating Walters into Wenchel helps prevent models from becoming obsolete through improved detection of data drift. [Walters ¶4]

As to dependent claim 91, the rejection of claim 87 is incorporated. Wenchel does not specifically teach that said first machine learning model message includes information on a first reference performance of said first machine learning inference model, and that the method further comprises achieving information indicative of ground truth data, and determining an actual performance of said first machine learning inference model by comparing a result of said first machine learning inference model fed with said network related input data with said information indicative of ground truth data.

However, Walters teaches that said first machine learning model message includes information on a first reference performance of said first machine learning inference model [Walters performance feedback received from the environment ¶66], achieving information indicative of ground truth data [Walters ground truth ¶86], and determining an actual performance of said first machine learning inference model by comparing a result of said first machine learning inference model fed with said network related input data with said information indicative of ground truth data. [Walters checks results against ground truth labels ¶86; performance criteria ¶51 "evaluate the performance of the trained synthetic data model. When the performance of the trained synthetic data model satisfies performance criteria, model optimizer 107 can be configured to store the trained synthetic data model in model storage 109. For example, model optimizer"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model training disclosed by Wenchel by incorporating the reference performance information, ground truth information, and actual performance determination disclosed by Walters, as set forth above, because both techniques address the same field of machine learning, and incorporating Walters into Wenchel helps prevent models from becoming obsolete through improved detection of data drift. [Walters ¶4]

Claims 80-85 are rejected under 35 U.S.C. 103 as being unpatentable over Wenchel in view of Pezzillo et al. (US 20190370686 A1, hereinafter Pezzillo).

As to dependent claim 80, the rejection of claim 79 is incorporated. Wenchel does not specifically teach receiving a machine learning model retraining request message, wherein said machine learning model retraining request message includes a machine learning model retraining request and at least a portion of network related input data as said second training data set.

However, Pezzillo teaches receiving a machine learning model retraining request message [Pezzillo feedback data leads to retraining ¶4], wherein said machine learning model retraining request message includes a machine learning model retraining request and at least a portion of network related input data as said second training data set. [Pezzillo's feedback is a request and includes observations ¶4 "Feedback data is collected, via the one or more communications networks, from the plurality of edge computing devices. The feedback data includes labeled observations generated by the execution of the trained instance of the machine learning model at the plurality of edge computing devices on unlabeled observations captured by the plurality of edge computing devices"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model training disclosed by Wenchel by incorporating the machine learning model retraining request message disclosed by Pezzillo, as set forth above, because both techniques address the same field of machine learning, and incorporating Pezzillo into Wenchel updates models for improved performance and preserves privacy. [Pezzillo ¶3]
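The drift-detection technique cited from Walters (a difference matrix built from two datasets' covariance matrices, compared against a threshold) can be sketched as follows; the Frobenius-norm reduction and the threshold value are illustrative assumptions, not Walters' actual algorithm:

```python
import numpy as np

def covariance_drift(reference: np.ndarray, observed: np.ndarray) -> float:
    """Drift score in the spirit of Walters ¶98-99: build a difference
    matrix from the two datasets' covariance matrices and reduce it to
    a scalar. The Frobenius norm is an illustrative choice of reduction."""
    diff = np.cov(reference, rowvar=False) - np.cov(observed, rowvar=False)
    return float(np.linalg.norm(diff, ord="fro"))

def drift_detected(score: float, threshold: float) -> bool:
    # Per Walters ¶184, a difference meeting or exceeding a threshold
    # triggers correction (e.g., retraining or hyperparameter tuning).
    return score >= threshold
```

A score near zero indicates matching second-order statistics; once the score meets or exceeds the threshold, retraining or model correction would be triggered.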
As to dependent claim 81, the rejection of claim 80 is incorporated. Wenchel and Pezzillo further teach training a second machine learning monitoring model based on said second training data set, wherein said second machine learning monitoring model is included by a second machine learning model message. [Pezzillo's retrained model is sent to edge devices ¶4]

As to dependent claim 82, the rejection of claim 79 is incorporated. Wenchel does not specifically teach computing a first reference performance of said first machine learning inference model based on said first machine learning inference model and said first training data set, wherein said first machine learning model message includes information on said first reference performance of said first machine learning inference model.

However, Pezzillo teaches computing a first reference performance of said first machine learning inference model based on said first machine learning inference model and said first training data set, wherein said first machine learning model message includes information on said first reference performance of said first machine learning inference model. [Pezzillo performance scores and accuracy from training ¶15 "In some implementations, a confidence score describes a level of confidence associated with each labeled observation generated by a trained instance of a machine learning model executed by an edge computing device, and a performance score describes accuracy of labeled observations generated by a trained instance of a machine learning model executed at an edge computing device"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model training disclosed by Wenchel by incorporating the first reference performance computation disclosed by Pezzillo, as set forth above, because both techniques address the same field of machine learning, and incorporating Pezzillo into Wenchel updates models for improved performance and preserves privacy. [Pezzillo ¶3]

As to dependent claim 83, the rejection of claim 82 is incorporated. Wenchel and Pezzillo further teach receiving a machine learning model retraining request message, wherein said machine learning model retraining request message includes a machine learning model retraining request and at least a portion of network related input data as said second training data set. [Pezzillo includes observations ¶4 "Feedback data is collected, via the one or more communications networks, from the plurality of edge computing devices. The feedback data includes labeled observations generated by the execution of the trained instance of the machine learning model at the plurality of edge computing devices on unlabeled observations captured by the plurality of edge computing devices"]

As to dependent claim 84, the rejection of claim 83 is incorporated. Wenchel and Pezzillo further teach transmitting a second data collection request, receiving a second data collection response including second collected data, and updating said second training data set based on said second collected data. [Pezzillo collects data for retraining ¶4; iterative loop ¶57 "manages ML model re-training and deployment on an iterative basis. This description will start with the receipt of ML model feedback data from the edge computing devices 302, although this should not be interpreted to require that the receipt of the ML model feedback data is necessarily the first operation in the re-training and deployment loop."]

As to dependent claim 85, the rejection of claim 83 is incorporated. Wenchel and Pezzillo further teach computing a second reference performance of said second machine learning inference model based on said second machine learning inference model and said second training data set, wherein said second machine learning model message includes information on said second reference performance of said second machine learning inference model. [Pezzillo performance scores and accuracy from training ¶15 "In some implementations, a confidence score describes a level of confidence associated with each labeled observation generated by a trained instance of a machine learning model executed by an edge computing device, and a performance score describes accuracy of labeled observations generated by a trained instance of a machine learning model executed at an edge computing device"]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Christiansen et al. (US 20210125104 A1) teaches sampling and update modules for models with triggers (see ¶57).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Beau Spratt, whose telephone number is 571-272-9919. The examiner can normally be reached 8:30am to 5:00pm (PST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-483-7388.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Jun 21, 2023
Application Filed
Feb 27, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715: Cementing Lab Data Validation based On Machine Learning (2y 5m to grant; granted Apr 07, 2026)
Patent 12596955: REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12596956: INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS (2y 5m to grant; granted Apr 07, 2026)
Patent 12561464: CATALYST 4 CONNECTIONS (2y 5m to grant; granted Feb 24, 2026)
Patent 12561606: TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
