Prosecution Insights
Last updated: April 19, 2026
Application No. 17/987,141

MACHINE LEARNING TECHNIQUES FOR PREDICTING CLASSIFICATION PROGRESSION

Final Rejection — §101, §103
Filed
Nov 15, 2022
Examiner
GARNER, CASEY R
Art Unit
2123
Tech Center
2100 — Computer Architecture & Software
Assignee
UNITEDHEALTH GROUP, INCORPORATED
OA Round
2 (Final)
70%
Grant Probability
Favorable
3-4
OA Rounds
3y 7m
To Grant
87%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
184 granted / 261 resolved
+15.5% vs TC avg
Strong +17% interview lift
+16.8%
Interview Lift
resolved cases with interview
Typical timeline
3y 7m
Avg Prosecution
19 currently pending
Career history
280
Total Applications
across all art units
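The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the Tech Center baseline is back-computed from the stated "+15.5% vs TC avg", not a figure reported on this page):

```python
# Recompute the examiner stats shown above from the raw counts.
granted, resolved = 184, 261        # "184 granted / 261 resolved"

allow_rate = granted / resolved     # career allow rate
tc_avg = allow_rate - 0.155         # implied Tech Center baseline (assumption)

print(f"{allow_rate:.0%}")          # the "70% Career Allow Rate" card
print(f"{tc_avg:.0%}")              # roughly 55%
```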

Statute-Specific Performance

§101
30.6%
-9.4% vs TC avg
§103
45.7%
+5.7% vs TC avg
§102
7.1%
-32.9% vs TC avg
§112
12.2%
-27.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 261 resolved cases
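The per-statute deltas are all measured against the "black line" Tech Center estimate. Back-computing that baseline from the panel's own numbers (a consistency check, not data from the tool) shows every statute is compared against the same ~40% average:

```python
# Examiner allowance rate after each rejection type, and the stated delta vs. the TC average.
rates  = {"§101": 0.306, "§103": 0.457, "§102": 0.071, "§112": 0.122}
deltas = {"§101": -0.094, "§103": 0.057, "§102": -0.329, "§112": -0.278}

# TC average implied by each pair: rate minus delta.
tc_avg = {k: round(rates[k] - deltas[k], 3) for k in rates}
print(tc_avg)   # every statute backs out to the same 0.40 baseline
```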

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Amendment filed on 11/24/2025. Claims 1-3, 6-10, 13-17, and 20-26 are pending in the case. Claims 1, 8, and 15 are independent claims.

Response to Arguments

Applicant's amendments to the claims and arguments regarding the 35 U.S.C. § 101 rejections have been fully considered but are not found persuasive. Applicant analogizes to Ex Parte Desjardines and points specifically to the inputting step as overcoming the § 101 rejection. The inputting step, however, is merely a pre-solution activity, namely, inputting data into a model. MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as they are in the present claim). Accordingly, these rejections are hereby maintained. Applicant's prior art arguments have been fully considered but are moot in view of the new grounds of rejection presented below.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-10, 13-17, and 20-26 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-3, 6, 7, 21, and 22 are directed towards the statutory category of a process. Claims 8-10, 13, 14, 23, and 24 are directed towards the statutory category of a machine. Claims 15-17, 20, 25, and 26 are directed towards the statutory category of an article of manufacture.
With respect to claim 1:

2A Prong 1: This claim is directed to a judicial exception. A… method comprising (mental process): (1) generating a base cohort from feature data associated with the plurality of entities, the base cohort comprising selected ones of the plurality of entities including initial severity level labels associated with the first time period that match a selected set of severity level labels (mental process), (2) assigning a selected one of a plurality of outcome labels to the selected ones of the plurality of entities based at least in part on a difference in the one or more initial severity level labels and the one or more subsequent severity level labels (mental process), and (3) generating the plurality of model features based at least in part on feature data associated with the selected ones of the plurality of entities, wherein the progression machine learning model is trained based on the model dataset (mental process); and initiating… performance of one or more prediction-based actions based at least in part on the predictive outputs (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: computer-implemented (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); inputting, by one or more processors, input data to a progression machine learning model to receive a predictive output indicating (i) a classification comprising a first severity level label of a plurality of severity level labels and (ii) a likelihood of the first severity level label progressing to a second severity level label of the plurality of severity level labels, wherein training the progression machine learning model comprises (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)); receiving a model dataset comprising (i) a plurality of model features associated with a plurality of entities, and (ii) a plurality of classifications of the plurality of entities, the plurality of classifications comprising one or more initial severity level labels associated with a first time period, and one or more subsequent severity level labels associated with a second time period, wherein the model dataset is generated by (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)); machine learning (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and by the one or more processors (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)). 
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: computer-implemented (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); inputting, by one or more processors, input data to a progression machine learning model to receive a predictive output indicating (i) a classification comprising a first severity level label of a plurality of severity level labels and (ii) a likelihood of the first severity level label progressing to a second severity level label of the plurality of severity level labels, wherein training the progression machine learning model comprises (MPEP 2106.05(d) indicates that merely “storing and retrieving information in memory” and/or "receiving or transmitting data over a network" are well‐understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer); receiving a model dataset comprising (i) a plurality of model features associated with a plurality of entities, and (ii) a plurality of classifications of the plurality of entities, the plurality of classifications comprising one or more initial severity level labels associated with a first time period, and one or more subsequent severity level labels associated with a second time period, wherein the model dataset is generated by (MPEP 2106.05(d) indicates that merely “storing and retrieving information in memory” and/or "receiving or transmitting data over a network" are well‐understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). 
Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer); machine learning (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and by the one or more processors (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).

With respect to claim 2:

2A Prong 1: This claim is directed to a judicial exception.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the progression machine learning model comprises a distributed gradient boosting machine learning model (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f) – high level recitation of machine learning).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the progression machine learning model comprises a distributed gradient boosting machine learning model (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f) – high level recitation of machine learning).

With respect to claim 3:

2A Prong 1: This claim is directed to a judicial exception.
2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: training the progression machine learning model further comprises: generating a training data subset, a validation data subset, and a testing data subset from the model dataset (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f) – high level recitation of machine learning).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: training the progression machine learning model further comprises: generating a training data subset, a validation data subset, and a testing data subset from the model dataset (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f) – high level recitation of machine learning).

With respect to claim 6:

2A Prong 1: This claim is directed to a judicial exception. the classification is based at least in part on a plurality of criteria corresponding to the plurality of severity level labels (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 7:

2A Prong 1: This claim is directed to a judicial exception. the input data comprises feature data associated with the plurality of entities over a third time period (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 21:

2A Prong 1: This claim is directed to a judicial exception. determining… whether one or more computational resources should be allocated to a computing system based on the predictive output (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: by the one or more processors (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and initiating… allocation of the one or more computational resources to the computing system based on the determination (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: by the one or more processors (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and initiating… allocation of the one or more computational resources to the computing system based on the determination (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer).
With respect to claim 22:

2A Prong 1: This claim is directed to a judicial exception. the second severity level label is a higher severity level than the first severity level label (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

The remaining claims 8-10, 13-17, and 20-26 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more for at least the same reasons as those given above with respect to claims 1-3, 6, 7, 21, and 22 with only the addition of generic computer components under step 2A prong 1. Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the "Mental Process" grouping of abstract ideas. A person would readily be able to perform this process either mentally or with the assistance of pen and paper. See MPEP § 2106.04(a)(2). The limitations merely recite the words "apply it" (or an equivalent) with the judicial exception, or merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). These additional elements do not integrate the judicial exception into a practical application under step 2A prong 2. Refer to MPEP §2106.04(d).
Moreover, the limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). These additional elements do not recite any additional elements/limitations that amount to significantly more. Accordingly, the claimed invention recites an abstract idea without significantly more.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicants are advised of the obligation under 37 C.F.R.
§ 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1-3, 6-10, 13-17, 20, 22, 24, and 26 are rejected under 35 U.S.C. § 103 as being unpatentable over Neumann (U.S. Pat. App. Pub. No. 2022/0277830, hereinafter Neumann) in view of Drake et al. (U.S. Pat. App. Pub. No. 2023/0020908, hereinafter Drake) and Madabhushi et al. (U.S. Pat. App. Pub. No. 2020/0005931, hereinafter Madabhushi).

As to independent claims 1, 8, and 15, Neumann teaches A computer-implemented method comprising (Title and abstract):… initiating, by the one or more processors, performance of one or more prediction-based actions based at least in part on the predictive output (Paragraph 25, "The remote device may perform the progression machine-learning process using the progression training set to generate progression phase and transmit the output to computing device 104. The remote device may transmit a signal, bit, datum, or parameter to computing device 104 that at least relates to progression phases.").
Neumann does not appear to expressly teach receiving a model dataset comprising (i) a plurality of model features associated with a plurality of entities, and (ii) a plurality of classifications of the plurality of entities, the plurality of classifications comprising one or more initial severity level labels associated with a first time period and subsequent severity level labels associated with a second time period, wherein the model dataset is generated by: (1) generating a base cohort from feature data associated with the plurality of entities, the base cohort comprising selected ones of the plurality of entities including initial severity level labels associated with the first time period that match a selected set of severity level labels, (2) assigning a selected one of a plurality of outcome labels to the selected ones of the plurality of entities based at least in part on a difference in the one or more initial severity level labels and the one or more subsequent severity level labels; (3) generating the plurality of model features based at least in part on feature data associated with the selected ones of the plurality of entities, wherein the progression machine learning model is trained based on the model dataset.

Drake teaches receiving a model dataset comprising (i) a plurality of model features associated with a plurality of entities, and (ii) a plurality of classifications of the plurality of entities, the plurality of classifications comprising one or more initial severity level labels associated with a first time period and subsequent severity level labels associated with a second time period, wherein the model dataset is generated by (Paragraph 6, "receiving historical data for a plurality of entities." Paragraph 42, "the machine learning model training module 132 may obtain relevant patient data within a specified training time period prior to the randomly selected offset value."
Paragraph 90 et seq. mortality or fall score): (1) generating a base cohort from feature data associated with the plurality of entities, the base cohort comprising selected ones of the plurality of entities including initial severity level labels associated with the first time period that match a selected set of severity level labels (Paragraph 45, "may be specified based on existing patients of a healthcare provider, existing patients of a specified health plan, healthcare patients in a geographic location, patients within a specified demographic group, and so on." Paragraph 90 et seq. mortality or fall score), (2) assigning a selected one of a plurality of outcome labels to the selected ones of the plurality of entities based at least in part on a difference in the one or more initial severity level labels and the one or more subsequent severity level labels (Paragraph 60, "simulate an outcome of the machine learning prediction when it processes a new patient entity in future scenarios"); (3) generating the plurality of model features based at least in part on feature data associated with the selected ones of the plurality of entities, wherein the progression machine learning model is trained based on the model dataset (Paragraph 12, "a set of the features of the feature vector input has been determined by receiving historical data for a plurality of entities. For each entity of the plurality of entities, the method includes determining whether the entity has experienced a fall and, in response to the entity having experienced a fall, the method includes identifying a set of healthcare classification codes from the historical data for the entity. 
Additionally, the set of the features of the feature vector input has been determined by generating a subset of fall-influencing classification codes from the set of healthcare classification codes associated with the plurality of entities that have experienced a fall and representing at least one of the fall influencing classification codes as a feature of the set of features").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the machine learning techniques of Drake to allow care program resources to be deployed to patients who may gain the greatest benefits (see Drake at paragraph 28).

Neumann does not appear to expressly teach inputting, by one or more processors, input data to a progression machine learning model to receive a predictive output indicating (i) a classification comprising a first severity level label of a plurality of severity level labels and (ii) a likelihood of the first severity level label progressing to a second severity level label of the plurality of severity level labels, wherein training the progression machine learning model comprises.

Madabhushi teaches inputting, by one or more processors, input data to a progression machine learning model to receive a predictive output indicating (i) a classification comprising a first severity level label of a plurality of severity level labels and (ii) a likelihood of the first severity level label progressing to a second severity level label of the plurality of severity level labels, wherein training the progression machine learning model comprises (Figure 1, provide 150 radiomic features to machine learning classifier. Figure 1, receive 160 probability from machine learning classifier. Paragraph 41, "receiving, from the machine learning classifier, a probability that the ROI is a member of the first class".
Paragraph 42, "classifying the ROI as a member of the first class or the second, different class based, at least in part, on the probability. In various embodiments, the classification may include one or more of a most likely outcome (e.g., as determined based on the radiomic features, etc.) such as low-risk of progression or high-risk of progression; a probability or confidence associated with a most likely outcome").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the progression machine learning techniques of Madabhushi to predict progression risk categories (see Madabhushi at paragraph 27).

As to dependent claims 2, 9, and 16, Neumann further teaches the progression machine learning model comprises a distributed gradient boosting machine learning model (Paragraph 32, "the updated machine-learning model may incorporate a gradient boosting machine-learning process").

As to dependent claims 3, 10, and 17, Drake further teaches training the progression machine learning model further comprises: generating a training data subset, a validation data subset, and a testing data subset from the model dataset (Paragraph 52, "training dataset". Paragraph 130 "validation training data set." Paragraph 59, "test data"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the machine learning techniques of Drake to allow care program resources to be deployed to patients who may gain the greatest benefits (see Drake at paragraph 28).
As to dependent claims 6, 13, and 20, Neumann further teaches classification is based at least in part on a plurality of criteria corresponding to the plurality of severity level labels (Paragraph 23, "a 'progression machine-learning model' is a machine-learning model to produce a progression phase output given pathogenic particle enumeration and/or severity index as inputs").

As to dependent claims 7 and 14, Drake further teaches the input data comprises feature data associated with the plurality of entities over a third time period (Paragraph 103, "risk of falling within a future specified time period, such as the next six months, the next year, the next two years, and so on"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the machine learning techniques of Drake to allow care program resources to be deployed to patients who may gain the greatest benefits (see Drake at paragraph 28).

As to dependent claims 22, 24, and 26, Madabhushi further teaches the second severity level label is a higher severity level than the first severity level label (Paragraph 76, "low, intermediate, and high risk categories"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the progression machine learning techniques of Madabhushi to predict progression risk categories (see Madabhushi at paragraph 27).

Claims 21, 23, and 25 are rejected under 35 U.S.C. § 103 as being unpatentable over Neumann in view of Drake, Madabhushi, and Das et al. (U.S. Pat. App. Pub. No. 2008/0263559, hereinafter Das).
As to dependent claims 21, 23, and 25, Neumann does not appear to expressly teach determining, by the one or more processors, whether one or more computational resources should be allocated to a computing system based on the predictive output.

Das teaches determining, by the one or more processors, whether one or more computational resources should be allocated to a computing system based on the predictive output (Paragraph 6, "resource allocations are subsequently determined"); and initiating, by the one or more processors, allocation of the one or more computational resources to the computing system based on the determination (Paragraph 6, "resource allocations are subsequently determined and executed based upon the dynamic resource-level utility information established". Executing the resource allocation reads on the claimed initiating).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the machine learning of Neumann to include the dynamic resource allocation techniques of Das to manage the demand rate which may vary dynamically and rapidly over many orders of magnitude (see Das at paragraph 4).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Casey R. Garner whose telephone number is 571-272-2467. The examiner can normally be reached Monday to Friday, 8am to 5pm, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached on 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Casey R. Garner/
Primary Examiner, Art Unit 2123

Prosecution Timeline

Nov 15, 2022
Application Filed
Aug 25, 2025
Non-Final Rejection — §101, §103
Nov 04, 2025
Examiner Interview Summary
Nov 04, 2025
Applicant Interview (Telephonic)
Nov 24, 2025
Response Filed
Feb 09, 2026
Final Rejection — §101, §103
Mar 27, 2026
Examiner Interview Summary
Mar 27, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596937
METHOD AND APPARATUS FOR ADAPTING MACHINE LEARNING TO CHANGES IN USER INTEREST
2y 5m to grant • Granted Apr 07, 2026
Patent 12585994
ACCURATE AND EFFICIENT INFERENCE IN MULTI-DEVICE ENVIRONMENTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579451
MINIMAL UNSATISFIABLE SET DETECTION APPARATUS, MINIMAL UNSATISFIABLE SET DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12572822
FLEXIBLE, PERSONALIZED STUDENT SUCCESS MODELING FOR INSTITUTIONS WITH COMPLEX TERM STRUCTURES AND COMPETENCY-BASED EDUCATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12573187
Self-Learning in Distributed Architecture for Enhancing Artificial Neural Network
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
87%
With Interview (+16.8%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 261 resolved cases by this examiner. Grant probability derived from career allow rate.
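The interview-adjusted projection is consistent with simply adding the stated lift to the base rate. A hedged sketch of that arithmetic (the tool's actual model may weight more factors):

```python
base_grant_prob = 0.70    # career allow rate ("Grant Probability")
interview_lift  = 0.168   # "+16.8% Interview Lift"

with_interview = base_grant_prob + interview_lift
print(f"{with_interview:.0%}")   # the "87% With Interview" figure
```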
