Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,691

Method and System for Predicting Medical Diagnoses Using Machine Learning without Patient Intervention

Status: Final Rejection (§101)
Filed: Feb 26, 2024
Examiner: PAULS, JOHN A
Art Unit: 3683
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Elevance Health Inc.
OA Round: 2 (Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 49% (404 granted / 829 resolved; -3.3% vs TC avg)
Interview Lift: +27.5% (strong; allow rate for resolved cases with vs without an interview)
Typical Timeline: 3y 9m average prosecution; 46 applications currently pending
Career History: 875 total applications across all art units
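The tiles above are simple ratios over this examiner's resolved cases. As an illustrative check, the headline numbers can be recomputed from the raw counts, assuming (as the projections section suggests) that grant probability equals the career allow rate and that the interview lift is additive; both are assumptions about this dashboard's methodology, not USPTO data:

```python
# Illustrative recomputation of the dashboard's headline figures.
# Assumption: grant probability = career allow rate, and the interview
# lift is added on top; the dashboard does not state its exact model.

granted, resolved = 404, 829            # examiner career history
interview_lift = 27.5                   # percentage points

allow_rate = 100 * granted / resolved   # career allow rate, percent
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.1f}%")             # 48.7%, displayed rounded as 49%
print(f"{with_interview:.1f}%")         # 76.2%, displayed rounded as 76%
```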

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      28.8%        -11.2%
§103      33.4%        -6.6%
§102      11.3%        -28.7%
§112      20.9%        -19.1%

Tech Center averages are estimates • Based on career data from 829 resolved cases
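Each per-statute gap is measured against the same Tech Center baseline, so adding the gap back onto the examiner's rate recovers the implied TC average estimate, which works out to roughly 40% for every statute. A quick arithmetic check:

```python
# Recover the implied Tech Center average from each statute's figures.
# delta is the examiner's allow rate minus the TC average (negative here).

rates = {
    "§101": (28.8, -11.2),
    "§103": (33.4, -6.6),
    "§102": (11.3, -28.7),
    "§112": (20.9, -19.1),
}
for statute, (rate, delta) in rates.items():
    tc_avg = round(rate - delta, 1)   # subtracting a negative gap
    print(statute, tc_avg)            # 40.0 for every statute
```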

Office Action

§101
DETAILED ACTION

Status of Claims

This action is in reply to the communication filed on 11 November, 2025. Claims 1 and 15 have been amended. Claims 1–20 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The following rejection is formatted in accordance with MPEP 2106. Claim 1 is representative. Claim 1 recites:

A method of predicting a medical diagnosis for a patient, independent of prior diagnosis obtained from interviewing or examining the patient, the method comprising:
  autonomously receiving, at a processor, claims data, clinical data and demographic data relating to the patient, from one or more network databases;
  determining, by the processor, from the claims data, whether a prediction target for the medical diagnosis is present;
  in response to a determination that the prediction target is present:
    inputting the prediction target and the clinical data into a machine learning model that is trained to predict diagnosis risk;
    determining, using the machine learning model, a diagnosis risk score;
  determining, by the processor, a care seeking propensity score, from the demographic data, wherein the care seeking propensity score is related to whether the patient is a member of a group with a propensity to seek care that is lower than a reference care seeking propensity score for other patients;
  weighting, by the processor, the diagnosis risk score by the care seeking propensity score to create a weighted diagnosis risk score;
  determining whether the weighted diagnosis risk score indicates a likelihood of the medical diagnosis; and
  in response to the determination that the weighted diagnosis risk score indicates a likelihood of the medical diagnosis, automatically transmitting, over a network, a recommendation for further evaluation to a digital device associated with the patient;
  wherein the machine learning model is trained using training data comprising historical claims data, historical clinical data, and historical demographic data, from a population of prior patients, and wherein the machine learning model is trained to detect correlation between medical diagnosis signals identified from the training data, and a positive result from a screening mechanism for likelihood of the medical diagnosis.

Claim 15 recites a system that executes the steps of the method recited in Claim 1.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. a law of nature, a natural phenomenon, or an abstract idea), and does not include additional elements that either: 1) integrate the abstract idea into a practical application, or 2) provide an inventive concept – i.e. elements that amount to significantly more than the abstract idea. The claims are directed to an abstract idea because, when considered as a whole, the plain focus of the claims is on an abstract idea.

STEP 1

The claims are directed to a system and a method, which are included in the statutory categories of invention.
STEP 2A PRONG ONE

The claims, as illustrated by Claim 1, recite limitations that encompass an abstract idea within the "mental processes" grouping – concepts performed in the human mind including observation, evaluation, judgment and opinion – including: receiving claims data, clinical data and demographic data relating to the patient, from one or more network databases; determining, from the claims data, whether a prediction target for the medical diagnosis is present; in response to a determination that the prediction target is present, predicting diagnosis risk; determining a diagnosis risk score; determining a care seeking propensity score, from the demographic data, wherein the care seeking propensity score is related to whether the patient is a member of a group with a propensity to seek care that is lower than a reference care seeking propensity score for other patients; weighting the diagnosis risk score by the care seeking propensity score to create a weighted diagnosis risk score; determining whether the weighted diagnosis risk score indicates a likelihood of the medical diagnosis; and, in response to the determination that the weighted diagnosis risk score indicates a likelihood of the medical diagnosis, recommending further evaluation to the patient.

The claims recite a method for screening or identifying a patient who should receive a further evaluation by predicting the risk of a medical diagnosis for a patient who has a "prediction target" in the claims data. For example, the method calculates a risk of depression in a patient with a claim for a future/upcoming medical/surgical procedure, and weights the score using a care seeking propensity score. The method receives and analyzes data relating to the patient to determine a weighted diagnostic risk score, determines whether the weighted risk score indicates a likelihood of the medical diagnosis, and if so, recommends a further evaluation.
Collecting information, including when limited to particular content, is within the realm of abstract ideas, and analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, are mental processes within the abstract idea category (Electric Power Group v. Alstom S.A. (Fed. Cir. 2015-1778, 8/1/2016)).

The specification discloses that data is received from the memory of data sources such as EMRs, claims databases, etc. over a network. In addition to being a part of the abstract data collection and analysis process, receiving data from databases, over a network, is an extra-solution activity – i.e. data gathering.

Determining that a prediction target is present in the claims data includes recognizing data related to a claim for physical and behavioral health issues that may signal a propensity for depression. The specification provides no technical details as to how this function is performed. The broadest reasonable interpretation of this feature includes observing the presence of claim data for a particular health issue – i.e. an ordinary mental process.

Creating a diagnostic risk score is a process that can be performed mentally, as indicated by the specification. For example, it is currently known to administer a clinically accepted questionnaire for the diagnosis of depression, and to determine risk factors using human intelligence (¶ 0003). Similarly, the broadest reasonable interpretation of determining a care seeking propensity score, a process for which the specification provides no technical details, includes making mental judgments about different segments of the population (i.e. by age, financial wellbeing, etc.). Weighting the risk score with the propensity score is a simple mathematical formula or relationship that can be performed mentally.
Determining that the weighted score indicates the likelihood of the diagnosis is another judgment that, under the broadest reasonable interpretation, can be made mentally. The specification does not disclose any technical details as to how this function is performed. Similarly, recommending further evaluation, except for generic computer implementation steps, can be performed mentally. As such, the claims recite an abstract idea within the mental process grouping.

The claims, as illustrated by Claim 1, also recite limitations that encompass an abstract idea within the "certain methods of organizing human activity" grouping – managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. The claims recite a method for screening or identifying a patient who should receive a further evaluation by predicting the risk of a medical diagnosis for a patient who has a "prediction target" in the claims data. For example, the system calculates a risk of depression in a patient with a claim for a future/upcoming medical/surgical procedure, and weights the score using a care seeking propensity score. The system receives and analyzes data relating to the patient to determine a weighted diagnostic risk score, determines whether the weighted risk score indicates a likelihood of the medical diagnosis, and if so, recommends a further evaluation. However, identifying patients at risk for depression and recommending evaluation is a process that merely organizes this human activity. This type of activity includes conduct that would normally occur when managing a patient's particular disease, medical condition or state. As such, the claims recite an abstract idea within the certain methods of organizing human activity grouping.
STEP 2A PRONG TWO

The claims recite limitations that include additional elements beyond those that encompass the abstract idea above, including: a processor; one or more network databases; inputting the prediction target and the clinical data into a machine learning model that is trained to predict diagnosis risk; determining, using the machine learning model, a diagnosis risk score; automatically transmitting, over a network, a recommendation to a digital device associated with the patient; wherein the machine learning model is trained using training data comprising historical claims data, historical clinical data, and historical demographic data, from a population of prior patients, and wherein the machine learning model is trained to detect correlation between medical diagnosis signals identified from the training data, and a positive result from a screening mechanism for likelihood of the medical diagnosis.

However, these additional elements do not integrate the abstract idea into a practical application of that idea in accordance with the MPEP (see MPEP 2106.05). The processor, network databases, and machine learning model are recited at a high level of generality such that they amount to no more than instructions to apply the abstract idea using generic computer components. These elements merely add instructions to implement the abstract idea on a computer, and generally link the abstract idea to a particular technological environment. The machine learning model is disclosed as being purely generic. The historical training data is also generically disclosed. In particular, the claims replace the knowledge and experience of a mental health specialist by applying established methods of machine learning to an abstract diagnostic process in a new data environment – i.e. applying a trained model to the clinical data.
The specification teaches that the machine learning model may be trained to predict diagnostic risk using the clinical data, and that the machine learning model is generically described. Machine learning limitations reciting broad, functionally described, well-known techniques executed by generic and conventional computing devices do not provide a practical application of the abstract diagnostic process. "Today we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under §101." (Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. 2025)).

Similarly, transmitting the results of the abstract process, such as a recommendation, does not improve the computer itself, or any other technology, nor does transmitting results provide a meaningful limitation beyond generally linking the abstract idea to a particular technological environment. A general purpose computer that applies a judicial exception by use of conventional computer functions, as is the case here, does not qualify as a particular machine, nor does the recitation of a generic computer impose meaningful limits on the claimed process (see Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014)). As such, the additional elements recited in the claims do not integrate the abstract diagnostic process into a practical application of that process.

STEP 2B

The additional elements identified above do not amount to significantly more than the abstract diagnostic process. Generically training a machine learning model is a conventional technique. The specification discloses these techniques at a high level of generality, indicating that they are well-known in the art, a fact for which Examiner takes Official Notice.
Receiving and transmitting information, for example over a network, is a well-understood, routine and conventional computer function – i.e. receiving or transmitting data over a network, as in Symantec, TLI, OIP and buySAFE. The additional structural elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of generic computer structure (i.e. a processor, one or more network databases, a digital device). Each of the above components is disclosed in the specification as being purely conventional and/or known in the industry. Because the specification describes these additional elements in general terms, without describing particulars, Examiner concludes that the claim limitations may be broadly, but reasonably, construed as reciting well-understood, routine and conventional computer components and techniques. The specification describes the elements in a manner indicating that they are sufficiently well-known that the specification need not describe the particulars in order to satisfy 35 U.S.C. § 112.

Considered as an ordered combination, the limitations recited in the claims add nothing that is not already present when the steps are considered individually. As such, the additional elements recited in the claims do not provide significantly more than the abstract diagnostic process, or an inventive concept.
The dependent claims add additional features, including:

those that merely serve to further narrow the abstract idea above, such as: further limiting the prediction target to a particular type (Claims 2, 3, 16, 17); further limiting the type of screening mechanism (Claims 4, 5, 18, 19); and further limiting the type of demographic and clinical data (Claims 8–13);

those that recite additional abstract ideas, such as: performing sentiment analysis (Claim 7); and deriving clinical data from an EMR (Claim 14);

those that recite well-understood, routine and conventional activity or computer functions, such as: using the screening result to further train the model in a generic way (Claims 6, 20);

those that recite insignificant extra-solution activities, such as: performing the screening mechanism (i.e. data gathering) (Claims 6, 20); or

those that are an ancillary part of the abstract idea.

The limitations recited in the dependent claims, in combination with those recited in the independent claims, add nothing that integrates the abstract idea into a practical application, or that amounts to significantly more. These elements merely narrow the abstract idea, recite additional abstract ideas, or append conventional activity to the abstract process. As such, the additional elements do not integrate the abstract idea into a practical application, or provide an inventive concept that transforms the claims into a patent eligible invention.

The apparatus claims are no different from the method claims in substance. "The equivalence of the method, system and media claims is readily apparent." "The only difference between the claims is the form in which they were drafted." (Bancorp). The method claims recite the abstract idea implemented on a generic computer, while the apparatus claims recite generic computer components configured to implement the same idea. Specifically, Claims 15–20 merely add the generic hardware noted above that nearly every computer will include.
The apparatus claims' requirement that the same method be performed with a programmed computer does not alter the method's patentability under 35 U.S.C. 101 (In re Grams). Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments, filed 11 November, 2025, with respect to the 35 U.S.C. § 112 rejections have been fully considered and are persuasive. The rejections have been withdrawn.

Applicant's arguments with respect to the 35 U.S.C. § 103 rejections have been fully considered and are persuasive. The rejections have been withdrawn. Examiner agrees that the art of record fails to teach weighting a diagnostic risk score, which indicates the likelihood of a medical diagnosis resulting from a future/upcoming medical/surgical procedure (i.e. a prediction target), by a care seeking propensity score to create a weighted diagnostic score.

Applicant's arguments with respect to the 35 U.S.C. § 101 rejection have been fully considered but they are not persuasive.

The 35 U.S.C. § 101 Rejection

Applicant argues that the claims are drawn to "a technical solution to a technical problem". In particular, Applicant asserts that the claims require the claimed analysis "to be performed by a processor", and that "the analysis cannot practically be performed in the human mind" – "a human mind cannot practically '[determine], using the machine learning model, a diagnostic risk score'". This assertion is belied by the disclosure, which indicates that humans routinely administer a patient health questionnaire and determine risk factors. Here, Examiner agrees that a human cannot perform a machine learning analysis. However, as noted in the rejection above, the machine learning model, and the processor it runs on, are purely generic "apply it" limitations. Applicant further asserts that the claims are integrated into a practical application – a concrete improvement in the prediction of diagnosis.
However, improving a diagnostic process merely improves the abstract idea itself, and not any technology or technological process. Applicant asserts that the Office provides no evidence that the claim elements are well understood, routine or conventional. Notably, Applicant points to limitations that encompass the abstract idea (determine, determine, determine) and not to "additional elements" that should be considered in Step 2B (see Berkheimer). "If the claim's only 'inventive concept' is the application of an abstract idea using conventional and well-understood techniques, the claim has not been transformed into a patent-eligible application of an abstract idea." (Berkheimer).

CONCLUSION

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

"Propensity to seek healthcare in different healthcare systems: analysis of patient data in 34 countries"; van Loenen et al.; BMC Health Services Research; 9 October, 2015 – discloses determining a propensity to seek care using a questionnaire.

"Type of Multimorbidity and Propensity to Seek Care among Elderly Medicare"; Garg et al.; J Health Dispar Res Pract.; 2017 – discloses using data from the Medicare Current Beneficiary Survey (MCBS), including claims and demographics, to determine a propensity to seek care.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry of a general nature or relating to the status of this application, or concerning this communication or earlier communications from the Examiner, should be directed to John A. Pauls, whose telephone number is (571) 270-5557. The Examiner can normally be reached Mon. - Fri., 8:00 - 5:00 Eastern. If attempts to reach the examiner by telephone are unsuccessful, the Examiner's supervisor, Robert Morgan, can be reached at (571) 272-6773.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197.

Official replies to this Office action may now be submitted electronically by registered users of the EFS-Web system. Information on EFS-Web tools is available on the Internet at: http://www.uspto.gov/patents/process/file/efs/guidance/index.jsp. An EFS-Web Quick-Start Guide is available at: http://www.uspto.gov/ebc/portal/efs/quick-start.pdf. Alternatively, official replies to this Office action may still be submitted by any one of fax, mail, or hand delivery. Faxed replies should be directed to the central fax at (571) 273-8300. Mailed replies should be addressed to "Commissioner for Patents, PO Box 1450, Alexandria, VA 22313-1450." Hand-delivered replies should be delivered to the "Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22314."

/JOHN A PAULS/
Primary Examiner, Art Unit 3683
Date: 7 January, 2026
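The decision flow that the quoted action characterizes as mental steps in Claim 1 (score, weight, threshold, recommend) is recited only functionally. A minimal sketch of one possible reading, where the weighting is multiplicative and the cutoff fixed; the weighting function, the 0.5 threshold, and all names here are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch of Claim 1's scoring steps. The claim says only that
# the diagnosis risk score is "weighted by" the care seeking propensity
# score; the multiplicative weighting and the 0.5 cutoff are assumptions.

def weighted_diagnosis_risk(risk_score: float, propensity_weight: float) -> float:
    """Weight the model's diagnosis risk score by a care-seeking factor.

    Assumed reading: a patient in a group with a lower propensity to seek
    care gets a weight above 1, scaling the risk score up."""
    return risk_score * propensity_weight

def should_recommend_evaluation(risk_score: float,
                                propensity_weight: float,
                                threshold: float = 0.5) -> bool:
    """True when the weighted score indicates a likelihood of the diagnosis,
    i.e. the condition that triggers transmitting a recommendation."""
    return weighted_diagnosis_risk(risk_score, propensity_weight) >= threshold

# Moderate raw risk, but the patient belongs to a low care-seeking group:
print(should_recommend_evaluation(0.4, 1.5))   # True
print(should_recommend_evaluation(0.4, 1.0))   # False
```

Note that nothing in this sketch depends on any particular model architecture, which is essentially the examiner's Recentive point: the claim's eligibility turns on whether the recited steps add more than generic application of a trained model.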

Prosecution Timeline

Feb 26, 2024 — Application Filed
May 25, 2025 — Non-Final Rejection (§101)
Nov 11, 2025 — Response Filed
Jan 07, 2026 — Final Rejection (§101, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586676 — "Image Interpretation Model Development" — granted Mar 24, 2026 (2y 5m to grant)
Patent 12586668 — "System and Method for Patient Care Improvement" — granted Mar 24, 2026 (2y 5m to grant)
Patent 12567483 — "Automated Labeling of User Sensor Data" — granted Mar 03, 2026 (2y 5m to grant)
Patent 12548670 — "Emergency Management System" — granted Feb 10, 2026 (2y 5m to grant)
Patent 12548664 — "Adaptive Control of Medical Devices Based on Clinician Interactions" — granted Feb 10, 2026 (2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 76% (+27.5%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 829 resolved cases by this examiner. Grant probability derived from career allow rate.
