Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,593

NATURAL LANGUAGE BASED CLINICAL RECOMMENDATION SYSTEM

Non-Final OA §101
Filed: Sep 15, 2023
Examiner: KOLOSOWSKI-GAGER, KATHERINE
Art Unit: 3687
Tech Center: 3600 - Transportation & Electronic Commerce
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)
Grant Probability: 26% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 26% (95 granted / 358 resolved; -25.5% vs TC avg)
Interview Lift: strong, +33.6% (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 4y 3m avg prosecution; 54 applications currently pending
Career History: 412 total applications across all art units

Statute-Specific Performance

§101: 35.0% (-5.0% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 358 resolved cases

Office Action

§101
DETAILED ACTION

This action is in reference to the communication filed on 10 NOV 2025. Applicant elects claims 1-13. Claims 1-13 are pending and examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. As explained below, the claims are directed to an abstract idea without significantly more.

Step One: Is the claim directed to a process, machine, manufacture, or composition of matter? YES. With respect to claims 1-13, independent claim 1 recites a method, i.e., a process, which is a statutory category of invention.

Step 2A – Prong One: Is the claim directed to a law of nature, a natural phenomenon (product of nature), or an abstract idea?
YES. With respect to claims 1-13, independent claim 1 is directed, in part, to:

A method for a clinical recommendation system, the method comprising: generating an automated response to a natural language query received from a care provider in real time using a first, large language model (LLM) of the clinical recommendation system, the automated response including a recommended treatment of a patient, the LLM trained on a first set of training data generated by applying a second, recommendation model to anonymized patient data, the recommendation model based on clinical guidelines; verifying the automated response outputted by the LLM using a third, prediction model; and in response to the automated response being verified, storing the automated response in a patient decision workflow of the patient, the patient decision workflow including a record of clinical recommendations made with respect to the patient using the clinical recommendation system, and displaying the automated response on a display device of the clinical recommendation system.

These claim elements are considered to be abstract ideas because they are directed to a mental process: the claims encompass concepts performed in the human mind, including observation, evaluation, judgment, and opinion. Generating a response to a real-time query about a treatment, verifying a response, and storing the response in a patient workflow are all examples of such concepts. If a claim limitation, under its broadest reasonable interpretation, covers a concept performed in the human mind, it falls into the "mental processes" category.

The claims are further directed to mathematical concepts, i.e., mathematical relationships, formulas, equations, and/or calculations. The use of the first, second, and third models to create and verify the generated response are examples of mathematical concepts.
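For orientation, the three-model data flow recited in claim 1 (recommendation model generates training data, LLM answers queries, prediction model gates the response before storage) can be sketched as follows. This is a toy illustration only; every function and data structure here is hypothetical and is not drawn from the application or the office action.

```python
# Hypothetical sketch of the claimed pipeline. All names are illustrative
# stand-ins, not the application's implementation.

def recommend(guidelines, anonymized_record):
    """Second model: guideline-based recommendation applied to anonymized data."""
    return guidelines.get(anonymized_record["symptom"], "no recommendation")

def build_training_data(guidelines, anonymized_records):
    """First set of training data: recommendation model output over anonymized records."""
    return [(r["symptom"], recommend(guidelines, r)) for r in anonymized_records]

def llm_answer(training_data, query):
    """First model (LLM stand-in): answer a care-provider query from learned pairs."""
    return dict(training_data).get(query, "no recommendation")

def verify(response):
    """Third model (prediction-model stand-in): gate the automated response."""
    return response != "no recommendation"

guidelines = {"fever": "administer antipyretic"}   # toy clinical guideline
records = [{"symptom": "fever"}]                   # toy anonymized patient data
training_data = build_training_data(guidelines, records)
response = llm_answer(training_data, "fever")

workflow = []            # patient decision workflow: record of recommendations
if verify(response):     # conditional storage, as claimed
    workflow.append(response)
print(workflow)          # ['administer antipyretic']
```

The point of the sketch is the ordered combination the examiner later relies on in the prior-art discussion: the verification step is a required conditional gate between generation and storage, not an optional post-hoc check.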
If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, formulas, equations, and/or calculations, it falls into the "mathematical concepts" category. Accordingly, the claim recites an abstract idea.

Step 2A – Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements: claim 1 recites a "trained LLM" as well as the use of a display to perform the claim steps, and further at least implies the presence of a computer via the use of a display and the "automation" of the responses.

Examiner notes that the display of data, as well as any sending/receiving, is often found to be analogous to adding insignificant extra-solution activity to the judicial exception(s) identified (see MPEP 2106.05(g)). Examiner finds the LLM itself recited at a high level of generality, such that it amounts to no more than adding the words "apply it" to the judicial exception, mere instructions to implement the abstract idea on a computer/via the LLM, or mere use of the computer/LLM as a tool to perform the abstract idea (see MPEP 2106.05(f)), or a general linking of the judicial exception to a particular technological field of use/computing environment (see MPEP 2106.05(h)). Examiner finds no improvement to the functioning of the computer or any other technology or technical field in the elements identified above as claimed (see MPEP 2106.05(a)), nor any other application or use of the judicial exception in some meaningful way beyond a general link between the judicial exception and a particular technological environment (see MPEP 2106.05(e)).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. The independent claim is additionally directed to claim elements such as the following: claim 1 recites a "trained LLM" as well as the use of a display to perform the claim steps, and further at least implies the presence of a computer via the use of a display and the "automation" of the responses. When considered individually, the above-identified claim elements contribute only generic recitations of technical elements to the claims. It is readily apparent, for example, that the claim is not directed to any specific improvements of these elements. Examiner looks to Applicant's specification:

[0020] The NLP model may be a private implementation of a commercial, open source, or in-house large language model (LLM), such as OpenAI's GPT, which may be trained to process the natural language queries and provide responses based on the recommendation model. The LLM may be trained using a combination of supervised learning, with ground truth data generated manually based on the recommendation model, and reinforcement learning, where the LLM is optimized using a reward model. After the LLM is trained and optimized, the clinical recommendation system may receive a natural language query from a care provider (e.g., a question about a course of action regarding a patient), and submit the natural language query to the LLM as a prompt. The LLM may output a response to the prompt, which may include the suggested treatment.

[0023] In some embodiments, at least a portion of clinical recommendation system 102 is disposed at a device (e.g., workstation, edge device, server, etc.) communicably coupled to one or more healthcare and/or hospital networks and computer systems via wired and/or wireless connections, and can receive or access medical data (including patient data) stored in the one or more healthcare and/or hospital computer systems.

[0030] Display device 134 may include one or more display devices utilizing virtually any type of technology. In some embodiments, display device 134 may comprise a computer monitor. Display device 134 may be combined with processor 104, non-transitory memory 106, and/or user input device 132 in a shared enclosure, or may be a peripheral display device and may comprise a monitor, touchscreen, projector, or other display device known in the art, which may enable a user to view responses to queries submitted to clinical recommendation system 102, and/or interact with various data stored in non-transitory memory 106.

These passages, as well as others, make it clear that the invention is not directed to a technical improvement. When the claims are considered individually and as a whole, the additional elements noted above appear merely to apply the abstract concept to a technical environment in a very general sense, i.e., a generic computer receives information from another generic computer, processes the information, and then sends information back. The most significant elements of the claims, that is, the elements that really outline the inventive elements of the claims, are set forth in the elements identified as an abstract idea. The fact that generic computing devices facilitate the abstract concept is not enough to confer statutory subject matter eligibility.
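The two-phase training recipe quoted from paragraph [0020] (supervised learning on ground truth derived from the recommendation model, then reinforcement-style refinement against a reward model) can be sketched in the abstract. This is a hypothetical toy, not the application's implementation; every name and data value below is invented for illustration.

```python
# Toy sketch of the [0020] training recipe. Hypothetical names throughout.

def supervised_fit(ground_truth_pairs):
    """Phase 1: supervised learning stand-in, learning (query, answer) pairs."""
    return dict(ground_truth_pairs)

def rl_refine(policy, candidates, reward_model):
    """Phase 2: reward-model-guided refinement stand-in.
    For each query, keep the candidate answer the reward model scores highest."""
    refined = dict(policy)
    for query, options in candidates.items():
        refined[query] = max(options, key=lambda a: reward_model.get((query, a), 0.0))
    return refined

# Phase 1: ground truth generated (per the spec, manually) from the recommendation model.
policy = supervised_fit([("chest pain", "order ECG")])

# Phase 2: a reward model scores candidate responses; the policy keeps the best one.
reward_model = {("fever", "administer antipyretic"): 1.0,
                ("fever", "no action"): 0.0}
policy = rl_refine(policy, {"fever": ["no action", "administer antipyretic"]}, reward_model)
print(policy["fever"])   # administer antipyretic
```

The examiner's point stands independently of any implementation: the specification describes this training at the level of generality shown here, rather than claiming a specific improvement to the training process itself.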
As per dependent claims 2-13: Dependent claims 2, 3, 5, 7, 9-12, and 13 are not directed to any additional abstract ideas, but do at least nominally recite non-abstract elements, such as specific types of recommendation/prediction models in addition to the already-addressed LLM noted above, as well as the use of a microphone or other audio device.

With regard to the decision tree/knowledge graph/Bayesian/CNN elements in claims 2 and 3, Examiner finds that, similarly to the LLM identified above, these are at best an application of the technology to the abstract idea(s) identified above, or merely use the modeling as a tool, or generally link the judicial exception to the modeling environment. No improvement or other application is found. As such, there is no finding of significantly more in claims 2 and 3. Examiner notes the analysis for claims 5, 8, and 9-12 reaches the same conclusion as per claim 1 above. With regard to the microphone in claim 13, Examiner makes reference to paragraphs [044-045, 089-090], which disclose that the microphone is at best "applied" to the abstract idea, rather than improved in any way. Examiner finds no evidence of a meaningful limitation beyond generally linking the microphone to the abstract idea. As such, this element does not amount to significantly more.

Dependent claims 4, 6, and 8 are not directed to any additional abstract ideas and are also not directed to any additional non-abstract claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claim and addressed above, such as the sources of the data used therein. While these descriptive elements may provide further helpful context for the claimed invention, they do not confer subject matter eligibility, since their individual and combined significance still does not outweigh the abstract concepts at the core of the claimed invention.
Non-Obvious Subject Matter

Claims 1-13 are believed to be free of the prior art. The closest prior art is believed to be:

Neumann (US 20210183515 A1) - Neumann teaches an AI platform including an expert repository, as well as consideration of best practices/clinical guidelines in the training sets for responses. "First" advisory input includes the query from a medical professional, and a response is returned, which may include, for example, potential treatments. The AI platform in Neumann is trained using a plurality of training data, including a training set that generates an algorithm for the model, and the training data may be linked in a variety of ways. Most relevant, the reference provides for a "first guess" of an answer, which includes providing a ranking of answers that may be further refined by machine learning, and the answer may be corrected via a second process including human intervention.

Yellowlees et al. (US 20250054623 A1) - Yellowlees teaches a machine-supported method of diagnosing or treating medical disorders, including training the AI model based on data generated in prior sessions and across a population. The AI modeling may also be trained based on prior patients of a clinic, a practitioner, or patients who were previously treated for the same disease(s). The clinician in Yellowlees uses the AI support to supplement visits with the patient, and the patient's past visits or interactions are stored and used to inform the training of other models.

Bell et al. (US 20240404702 A1) - Bell uses LLMs in a diagnosis/treatment plan, including generating an answer to a query, and generally teaches de-identifying the patient data prior to a training process, as well as considering or ranking multiple treatment options in the LLM-suggested treatment plan. The training in Bell is completed in several phases with the use of classifiers, or a clinician can select from an existing pre-trained model.
Blum et al. (US 20250068667 A1) - Blum teaches receiving a query of an LLM trained on a large dataset, processing the query using the LLM itself, and teaches that LLMs can "hallucinate" or provide incorrect or made-up information, and therefore teaches that a verification process is used to confirm the LLM is providing a correct and accurate response in a security environment.

Zheng et al., "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena" - Zheng teaches using an additional LLM to "check" or verify the response outputted by a separate LLM in a chatbot environment, including repeated interactions and verification steps.

However, Examiner finds that the noted references, taken separately and/or in combination, do not fairly teach the interaction of the three models as claimed, including the idea that the LLM is trained on specific data as annotated by a separate model, yet verified by a third model. Examiner finds that the combination itself is not clearly suggested in any combination of the cited references, as the claim has a specific ordered combination of the use/function of the models and the specific data to be considered in the use/training therein. While elements of the claim are found in the cited prior art, Examiner does not believe the combination itself as claimed would have been obvious to one of ordinary skill. The Examiner asserts that the totality of the evidence neither anticipates nor renders obvious the particular combination of elements as claimed; that is, considering the claims as a whole, the evidence fails to set forth, either explicitly or implicitly, an appropriate rationale for combining or otherwise modifying the available prior art to arrive at the claimed invention.
The combination of features as claimed would not be obvious to one of ordinary skill in the art because any combination of the evidence at hand to reach the combination of features as claimed would require a substantial reconstruction of Applicant's claimed invention relying on improper hindsight bias.

Response to Arguments

Applicant's remarks as filed on 10 NOV 2025 are noted. Examiner does not find them to be persuasive. Examiner respectfully submits the verification step in Group I vs. Group II does not function in the same way: in Group I the verification is a required conditional step, while in Group II it is auxiliary at best. Examiner respectfully submits a search burden may exist even if the inventions are of the same classification. As noted in the requirement mailed 9/10/25, Examiner does not find the inventions to be obvious variants and instead finds that the claimed inventions recite subcombinations usable together.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE KOLOSOWSKI-GAGER, whose telephone number is (571) 270-5920. The examiner can normally be reached Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mamon Obeid, can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHERINE KOLOSOWSKI-GAGER/
Primary Examiner, Art Unit 3687

Prosecution Timeline

Sep 15, 2023
Application Filed
Mar 04, 2026
Non-Final Rejection - §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499467
PREDICTING THE EFFECTIVENESS OF A MARKETING CAMPAIGN PRIOR TO DEPLOYMENT
2y 5m to grant Granted Dec 16, 2025
Patent 12462273
SYSTEM AND METHOD FOR USING DEVICE DISCOVERY TO PROVIDE ADVERTISING SERVICES
2y 5m to grant Granted Nov 04, 2025
Patent 12462938
MACHINE-LEARNING MODEL FOR GENERATING HEMOPHILIA PERTINENT PREDICTIONS USING SENSOR DATA
2y 5m to grant Granted Nov 04, 2025
Patent 12444507
BAYESIAN CAUSAL INFERENCE MODELS FOR HEALTHCARE TREATMENT USING REAL WORLD PATIENT DATA
2y 5m to grant Granted Oct 14, 2025
Patent 12437315
SYSTEMS AND METHODS FOR DYNAMICALLY DETERMINING EVENT CONTENT ITEMS
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 26%
With Interview: 60% (+33.6%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 358 resolved cases by this examiner. Grant probability derived from career allow rate.
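The headline figures follow from the career counts shown above (95 granted of 358 resolved, 60% allow rate when an interview is held). A quick back-of-envelope check; the dashboard's exact rounding convention is an assumption, and the raw arithmetic lands within about 0.2 points of the displayed "+33.6%".

```python
# Reproduce the projection figures from the examiner's career counts.
granted, resolved = 95, 358
career_allow_rate = granted / resolved             # ~0.2654, displayed as 26%
with_interview_rate = 0.60                         # "60% With Interview"
interview_lift = with_interview_rate - career_allow_rate  # ~0.335, shown as +33.6%

print(f"{career_allow_rate:.1%}")   # 26.5%
print(f"+{interview_lift:.1%}")     # +33.5%
```

The small gap between the computed +33.5% and the displayed +33.6% suggests the dashboard derives the lift from a slightly different base; treat the figures as approximate.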
