Prosecution Insights
Last updated: April 19, 2026
Application No. 18/746,884

AUTONOMOUS GENERATION OF ACCURATE HEALTHCARE SUMMARIES

Final Rejection: §101, §103
Filed: Jun 18, 2024
Examiner: EVANS, ASHLEY ELIZABETH
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Laboratories America Inc.
OA Round: 2 (Final)
Grant Probability: 9% (At Risk)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 40%

Examiner Intelligence

Career Allow Rate: 9% (4 granted / 46 resolved; -43.3% vs TC avg)
Interview Lift: +31.0% among resolved cases with interview
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 46
Total Applications: 92 (across all art units)

Statute-Specific Performance

§101: 36.7% (-3.3% vs TC avg)
§103: 39.1% (-0.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Based on career data from 46 resolved cases; TC average estimates shown for comparison.

Office Action

§101 §103
DETAILED ACTION

Acknowledgements

This Office action is in response to the claims filed October 08, 2025. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment(s)

Claims 1-20 are pending. The claims have overcome the 112(b) rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 as not being directed to eligible subject matter on the grounds set out in detail below:

Independent Claims 1, 8, and 15:

Eligibility Step 1 (does the subject matter fall within a statutory category?): Independent claim 1 falls within the statutory category of method. Independent claim 8 falls within the statutory category of machine. Independent claim 15 falls within the statutory category of article of manufacture.

Eligibility Step 2A-1 (does the claim recite an abstract idea, law of nature, or natural phenomenon?): The claimed inventions of independent claims 1, 8, and 15 are directed to a judicial exception.
The claim elements in the independent claims (claim 1 being representative) which set forth the abstract idea are: autonomous generation of accurate healthcare summaries, comprising: generating relevant healthcare questions based on a preceding context determined from ground truth summaries obtained from a healthcare data record; filtering unanswerable questions from the relevant healthcare questions by querying decision criteria; generating answers to the relevant healthcare questions that utilizes extracted healthcare data from a healthcare data record to obtain predicted healthcare answers; synthesizing complete sentences by filling missing text from the predicted healthcare answers and the relevant healthcare questions to obtain healthcare summary sentences that avoids factually incorrect information by ensuring the healthcare summary sentences are grounded in information from the healthcare data record; and generating a healthcare technical report autonomously from the healthcare summary sentences to assist with a decision making of a healthcare professional. These elements fall within “certain methods of organizing human activity” as following rules or instructions to generate a healthcare summary report to assist a healthcare professional. See MPEP § 2106.04(a)(2).

Eligibility Step 2A-2 (does the claim recite additional elements that integrate the judicial exception into a practical application?): For independent claims 1, 8, and 15, this judicial exception is not integrated into a practical application. In claim 1 the additional elements are: a computer; a fine-tuned transformer model; an extractive question answering model; and artificial intelligence (AI). Examiner takes the applicable considerations stated in MPEP 2106.04(d) and analyzes them below in light of the instant application's disclosure and claim elements as a whole. The additional element, a computer, is recited as executing the abstract idea and is stated as a general purpose computer tool (see instant app.
para. [0075]) or equivalent, to apply the abstract idea ("apply it"). The additional element, a fine-tuned transformer model, is stated as a tool or equivalent to apply the abstract idea ("apply it") to predict data. The additional element, an extractive question answering model, is stated as a tool or equivalent to apply the abstract idea ("apply it") to analyze data. The additional element, artificial intelligence (AI), generally links the abstract idea to an artificial intelligence environment.

In claim 8 the additional elements not already recited in independent claim 1 are: a memory; and one or more processor devices in communication with the memory. Examiner takes the applicable considerations stated in MPEP 2106.04(d) and analyzes them below in light of the instant application's disclosure and claim elements as a whole. The additional elements, a memory and one or more processor devices in communication with the memory (see, e.g., instant app. para. [0052]), perform the abstract idea and are stated as general purpose computer tools or equivalent to apply the abstract idea ("apply it").

In claim 15 the additional elements not already recited in independent claim 1 are: a computer with a non-transitory computer program product comprising a computer-readable storage medium including program code. Examiner takes the applicable considerations stated in MPEP 2106.04(d) and analyzes them below in light of the instant application's disclosure and claim elements as a whole. The additional element, a computer with a non-transitory computer program product comprising a computer-readable storage medium including program code, performs the abstract idea and is stated as a general purpose computer tool or equivalent to apply the abstract idea ("apply it") (see, e.g., instant app. para. [0075]).

Accordingly, claims 1, 8, and 15 do not integrate the abstract idea into a practical application.
Eligibility Step 2B (does the claim amount to significantly more?): Independent claims 1, 8, and 15 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as analyzed in Step 2A, Prong Two above, these additional elements, whether viewed individually or as an ordered combination, amount to no more than applying the abstract idea and are thus insufficient to provide "significantly more". Therefore, the claims do not amount to significantly more and are ineligible.

Dependent Claims 2-7, 9-14, and 16-20:

Eligibility Step 1 (does the subject matter fall within a statutory category?): Dependent claims 2-7 fall within the statutory category of method. Dependent claims 9-14 fall within the statutory category of machine. Dependent claims 16-20 fall within the statutory category of article of manufacture.

Eligibility Step 2A-1 (does the claim recite an abstract idea, law of nature, or natural phenomenon?): The claimed inventions of dependent claims 2-7, 9-14, and 16-20 are directed to a judicial exception. Dependent claims 2-7, 9-14, and 16-20 continue to limit the abstract idea in the independent claims by (1) further limiting prompts to patients, (2) further limiting use of context and ground truths, (3) further limiting prediction of questions and answers, and (4) further limiting pairing of questions and answers, thus inheriting the same abstract idea, which falls within "certain methods of organizing human activity" as following rules or instructions to generate a healthcare summary report to assist a healthcare professional. See MPEP § 2106.04(a)(2).

Eligibility Step 2A-2 (does the claim recite additional elements that integrate the judicial exception into a practical application?): In claims 2-7, 9-14, and 16-20 this judicial exception is not integrated into a practical application.
In claims 2-7, 9-14, and 16-20 the additional element not already recited in the independent claims is: a trained AI assistant. Examiner takes the applicable considerations stated in MPEP 2106.04(d) and analyzes them below in light of the instant application's disclosure and claim elements as a whole. The additional element, a trained AI assistant, generally links the abstract idea to an artificial intelligence environment.

Eligibility Step 2B (does the claim amount to significantly more?): Dependent claims 2-7, 9-14, and 16-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as analyzed in Step 2A, Prong Two above, these additional elements, whether viewed individually or as an ordered combination, amount to no more than generally linking and are thus insufficient to provide "significantly more". Therefore, the claims do not amount to significantly more and are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Independent claims 1, 8, and 15, as well as dependent claims 3-6, 10-13, and 17-20, are rejected under 35 U.S.C. 103 as being unpatentable over Morse et al. (hereinafter Morse) (US20250201420A1) in view of Sharma et al. (hereinafter Sharma) (US20240095455A1), and further in view of Gaur et al. (hereinafter Gaur) (US20230061906A1).

As per claim 1, Morse teaches: A computer-implemented method for autonomous generation of accurate healthcare summaries, comprising: ([0025] discloses, “The following embodiments comprise an inventive digital health software platform designed to support mental health assessment, treatment, and outcomes tracking. At the core of the platform, Artificial Intelligence (AI) is leveraged to allow patients to verbally describe their conditions in natural language; such conversations are converted to insightful notes and summaries that provide clinicians with unprecedented valuable patient insights and risk scores.” Also see [0048].)

determined from ground truth summaries obtained from a healthcare data record (e.g., see [0085] and see [0030], which discloses, “These LLMs are trained on vast amounts of human generated content and consider billions of parameters (aka, variables) in their computations. Thereby, these models are very adept at predicting the most probable word(s) humans would produce next in a sequence of words, and thus, can generate content that is practically indistinguishable from content generated by humans. These LLMs can also be "fine-tuned", a process in which a custom model is produced from a base model that slightly modifies the original model's internal mathematical formulas (or weights) to predict a very different set of words. Considering the previous example, a fine-tuning training set could contain many training examples of "the cat was sleeping".
As such, the resultant fine-tuned model may examine the string "the cat was" and predict that "sleeping" is far more probable than "purring".” And see [0031], which discloses, “Through the process of "fine-tuning", the base behavior of a LLM can be modified by having expert humans generate training examples for specific purposes. Many of the original behaviors of the LLM will remain intact (such as its ability to create conjunctions or pick the proper pronouns), but, for example, a model can be tuned to generate fictional prose, clinical diagnosis information, or even computer code. After fine-tuning, the natural behavior of a base model will be altered, and the result will be more similar to the human generated training material, creating the ability to modify a general-purpose LLM for very specialized purposes.” And see [0160], which discloses, “The correlation data is then used to inform future versions of the logic engine rules via the user interface of embodiments of the Aiberry invention. When a user group provides their first baseline screening and uses a version of the recommendation engine, they form a unique starting cohort; the long-term outcomes of the subsequent cohorts are compared to the previous cohorts' outcomes to validate the effectiveness of each iteration of the recommendation engines.”)

generating relevant healthcare questions based on a preceding context …[…]… by employing a fine-tuned transformer model; (see “Capability 3 - Incorporating Summary Insights to Personalize the Screening Questions” and see [0071], which discloses, “This capability describes Aiberry's ability to utilize the LLM generated screening summary and insights to personalize the Botberry screening questions for each individual user.” And see [0072], which discloses, “To improve the screening experience, an embodiment utilizes information that was shared by the patient in past screenings and incorporates that past screening information into the questions that are being asked to make them more
personalized or relatable to the patient and/or the clinician.” And see [0073], which discloses, “The LLM screening notes generator enables a user to capture and store such information as structured data in key-value storage rather than as plain text. By making such information readily accessible in, for example, a JSON document stored in a database, it can then be referred to in a subsequent screening, as FIG. 16 illustrates.” And see [0074], which discloses, “Each of the key insights from the LLM output can be attributed to a specific screening domain, for example "social support". When the Aiberry system selects the questions to be asked in the screening, based on the specific domain, an algorithm will check if relevant insight for this domain exists from a previous screening by accessing the matching domain node in the previous notes' structured data.” And see [0075], which discloses, “If a relevant insight for this domain does not exist from a previous screening (such that there are no matching domain nodes in the previous notes' structured data), then the algorithm can use the generic notion of the screening question (e.g. "Tell me about interaction with family or friends"). Alternatively, if the algorithm detects a previous insight, then the screening process augments the question with generative AI to incorporate that insight in the question such that it will make it more contextual and personalized. For example, if the "social support" insight value is "boyfriend" then the question Botberry may ask could be "How is your relationship with your boyfriend?"” And see [0076], which discloses, “One or more embodiments includes a LoRA fine-tuned version of a commercially-friendly LLM model (such as Facebook Llama 3.2), which creates these personalized questions and accounts for proper casing and plurals. In one exemplary embodiment, Aiberry fine-tuned this LLM by using a proprietary set of instruction prompts and paired completions to guide the model to generate these questions.
In alternate embodiments, different proprietary or public instruction prompts and paired completions may be utilized to guide the model. The training set consists of the following:” and see [0077]-[0079]) ….[…]… synthesizing, with artificial intelligence (AI), complete sentences…[…]…and the relevant healthcare questions to obtain healthcare summary sentences…[…]… by ensuring the healthcare summary sentences are grounded in information from the healthcare data record; (see [0038] discloses, “One embodiment of the invention includes a library of LLM summarizers that can be trained and tuned each to a well-defined purpose by simply pairing the same input data (such as clinical transcripts) with an intentionally crafted response written in the context of the desired purpose. For example, the sentence "My energy levels have been up and down" may be summarized to a practitioner as "The patient's energy levels have been fluctuating with no significant risk of an energy disorder", while the patient notes model could summarize their response as "Varying levels of energy". These summaries could come from the same base model that was fine-tuned into two different models that produce unique and disparate summaries from the same input transcript.” and see [0035] discloses, “Due to training on this clinically-relevant data, the resultant fine-tuned LLM will associate much higher probabilities for those clinical words that form useful and relevant sentences for this context, resulting in emergent clinical diagnostic behavior that is strictly unique to Aiberry's custom clinically-trained models.” And see [0075]-[0084] and [0096] which discloses synthesizing various relevant questions to obtain health care summary sentences) and generating a healthcare technical report autonomously from the healthcare summary sentences to assist with a decision making of a healthcare professional. 
([0051] discloses, “This capability describes the Aiberry invention's ability to accept a mental health screening transcript as an input and then use a custom LLM to generate key insights and summary notes that will help the clinician gain insight into the patient.” And see [0064], which discloses, “From a process flow perspective, the summary can be automatically created as part of the screening process and becomes available to the clinician as soon as the screening score becomes available.”)

However, Morse does not teach the underlined portions: filtering unanswerable questions from the relevant healthcare questions by querying decision criteria of a technical knowledge database of the fine-tuned transformer model; generating answers to the relevant healthcare questions by employing an extractive question answering model that utilizes extracted healthcare data from a healthcare data record to obtain predicted healthcare answers; …[…]… synthesizing, with artificial intelligence (AI) including the extractive question answering model, complete sentences by filling missing text from the predicted healthcare answers and the relevant healthcare questions to obtain healthcare summary sentences …[…]…

However, Sharma does teach the underlined portions: generating answers to the relevant healthcare questions by employing an extractive question answering model that utilizes extracted healthcare data from a healthcare data record to obtain predicted healthcare answers; …[…]… including the extractive question answering model …[…]… synthesizing, with artificial intelligence (AI) including the extractive question answering model, complete sentences by filling missing text from the predicted healthcare answers and the relevant healthcare questions to obtain healthcare summary sentences that avoids factually incorrect information …[…]… (see [0055] and [0012], and see [0037], which discloses, “Framework 200 may additionally include a pretrained language model component (not shown).
A pretrained language model component may be a deep learning model that is trained on the training dataset 302 and/or one or more other dataset (e.g., a reading comprehension question and answer dataset like SQuAD).” And see [0038], which discloses, “and automated QA systems: Automated QA systems that try to answer user-defined questions automatically by looking at the input text.” And see [0006], which discloses, “The multi-modal end to end learning system may poll documents stored in a secure cloud-based electronic medical record system via a task scheduler on a periodic basis. The polled documents may be converted to text and scrubbed (i.e., cleaned and sanitized) for protected health information before being processed. Documents that are in image format are converted to text using an optical character recognition model and both the image and the text within the image are separately stored. Documents that are text-based have the text extracted and stored. In both instances, the text gleaned from the document is cleaned and sanitized, and then fed as context to a language model that has been fine-tuned for extractive question-answering. In addition to the cleaned and sanitized text data, a prompt or question - that is either provided on-the-fly (i.e., in real-time) by a clinician as part of a search or is pre-determined for specific needs - is also fed as input to the extractive QA language model. In return, the extractive QA language model outputs an answer to a user device highlighting part of the document/image wherein the answer was found and a confidence score quantifying the likelihood of the answer being correct.
Subsequently, a user (e.g., a clinician) operating the user device may provide feedback regarding the answer that was provided and said feedback may be used to fine-tune the extractive QA language model.” And see [0051] and [0052].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's teachings of automating notes and summaries using NLP and machine learning, as previously cited, with Sharma's explicit teachings of predicting answers to clinical notes and validating information with users, as previously cited. The motivation is that Morse is concerned with making it easier to screen for health conditions and monitor the efficacy of treatments for patients (see, e.g., Morse [0004]); therefore, it would have been obvious to one of ordinary skill that Sharma's explicit use of predicted healthcare answers would further increase the accuracy of generated summaries and clinical notes, and decrease the resources needed to monitor the efficacy of treatments, by improving the preemptive identification of patient conditions and needs while continuing to verify accuracy with users. Furthermore, Morse already uses LLM modeling, making the integration of Sharma's specific QA modeling predictable.
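The extractive-QA flow Sharma describes (context text plus a clinician's question fed to a fine-tuned model, which returns an answer span and a confidence score, with low-confidence cases escalated to the user) can be sketched in miniature. The sentence-overlap scorer below is only an illustrative stand-in for the fine-tuned language model, and all names are hypothetical:

```python
def _tokens(text: str) -> set:
    """Lowercase word set with edge punctuation stripped."""
    return {w.strip(".,?!\"'").lower() for w in text.split()}

def extractive_qa(question: str, context: str) -> dict:
    """Toy extractive QA: score each context sentence by token overlap
    with the question and return the best span plus a confidence score
    (a stand-in for the model's span prediction and its confidence)."""
    q = _tokens(question)
    best_span, best_score = "", 0.0
    for sentence in context.split(". "):
        score = len(q & _tokens(sentence)) / (len(q) or 1)
        if score > best_score:
            best_span, best_score = sentence.strip().rstrip("."), score
    return {"answer": best_span, "confidence": round(best_score, 2)}
```

In a real system, a confidence below some threshold would trigger the "no answer found" notification and the clinician-feedback loop that Sharma's [0044] and [0055] describe.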
However, Sharma also does not teach the underlined portions: filtering unanswerable questions from the relevant healthcare questions by querying decision criteria of a technical knowledge database of the fine-tuned transformer model; …[…]… synthesizing, with artificial intelligence (AI) including the extractive question answering model, complete sentences by filling missing text from the predicted healthcare answers and the relevant healthcare questions to obtain healthcare summary sentences that avoids factually incorrect information …[…]…

However, Gaur does teach the underlined portions: filtering unanswerable questions from the relevant healthcare questions by querying decision criteria of a technical knowledge database of the fine-tuned transformer model; ([0067] discloses, “FIG. 4 illustrates an example of automatically generating information-gathering questions 400 using a system having the example architecture described in connection with FIG. 2. User query 102 includes a title and description of a topic pertaining generally to economics and specifically to inflation and employment. Neural parser 210 of PPE 200 performs a constituency parsing of user query 102, and based on the parsing, phrase extractor 212 of PPE 200 extracts noun phrases 404. SQE 204 determines which entities of a knowledge graph accessed from knowledge database(s) 118 (e.g., ConceptNet or other indexed knowledge database) are semantically related to noun phrases 404. The semantically related entities (represented as ovals) are extracted and used by SQE 204 to generate query sub-graph 214. KPR 206 retrieves passages a, b, c, and d from passages databases 120 (e.g., WikiNews, Wikipedia, web documents, and the like). Retrieved passages a, b, c, and d are extracted by KPR 206 based on their matching with phrases 412 corresponding to paths of query sub-graph 214. CQG 208, based on phrases 412 and passages a, b, c, and d, generates information-gathering questions 104.
Information-gathering questions 104 comprise a set of diverse, non-redundant conversational questions that are coherent-both contextually and semantically-with user query 102.” And see [0070] discloses, “FIG. 6 illustrates certain operative features of system 100 using answerability evaluator 108 shown in FIG. 1 as an optional component. Illustratively, system 100 communicatively couples with answerability evaluator 108 (e.g., over a data communications network via wired or wireless connections), which is separate from the other components of system 100. In other arrangements, however, answerability evaluator 108 can be integrated in a single device (e.g., computer, server) along with each of the other components of system 100. Operatively, answerability evaluator 108 is configured to perform multiple functions with respect to IGQs 102 generated by system 100. Answerability evaluator determines which information-gathering questions among question sequence 600 generated by system 100 can be answered using information given in a user's query (answerable questions 602) and one or more which cannot be (unanswered questions 604). Answerability evaluator 108 can be configured to actuate system 100 to generate and convey to a user (e.g., via a user device) a prompt. 
The prompt can request the user to provide conversation cues (e.g., keywords) based on which the system can generate one or more questions that substitute for the one or more unanswered questions 604.” And see [0080]-[0081].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's teachings of automating notes and summaries using NLP and machine learning, as previously cited, and Sharma's explicit teachings of predicting answers to clinical notes and validating information with users, as previously cited, with Gaur's teachings of filtering unanswerable questions and ground truth summaries. The motivation is that Morse is concerned with making it easier to screen for health conditions and monitor the efficacy of treatments for patients (see, e.g., Morse [0004]); therefore, it would have been obvious to one of ordinary skill that Sharma's and Gaur's explicit use of predicted answers and filtered questions would further increase the accuracy of generated summaries and clinical notes, and decrease the resources needed to monitor the efficacy of treatments, by improving the preemptive identification of patient conditions and needs while continuing to verify accuracy with users. Furthermore, Morse already uses LLM modeling, making the integration of Sharma's specific QA modeling and Gaur's generative modeling predictable.

As per claim 3, Morse further teaches: The computer-implemented method of claim 1, further comprising fine-tuning a transformer model with ground truth summaries, the preceding context, and subsequent questions to obtain a fine-tuned transformer model. ([0030] discloses, “These LLMs are trained on vast amounts of human generated content and consider billions of parameters (aka, variables) in their computations.
Thereby, these models are very adept at predicting the most probable word(s) humans would produce next in a sequence of words, and thus, can generate content that is practically indistinguishable from content generated by humans. These LLMs can also be "fine-tuned", a process in which a custom model is produced from a base model that slightly modifies the original model's internal mathematical formulas ( or weights) to predict a very different set of words. Considering the previous example, a fine-tuning training set could contain many training examples of "the cat was sleeping". As such, the resultant fine-tuned model may examine the string "the cat was" and predict that "sleeping" is far more probable than "purring". And see [0031] discloses, “Through the process of "fine-tuning", the base behavior of a LLM can be modified by having expert humans generate training examples for specific purposes. Many of the original behaviors of the LLM will remain intact (such as its ability to create conjunctions or pick the proper pronouns), but, for example, a model can be tuned to generate fictional prose, clinical diagnosis information, or even computer code. After fine-tuning, the natural behavior of a base model will be altered, and the result will be more similar to the human generated training material, creating the ability to modify a general-purpose LLM for very specialized purposes.” And see [0160] discloses, “The correlation data is then used to inform future versions of the logic engine rules via using the user interface of embodiments of the Aiberry invention. 
When a user group provides their first baseline screening and uses a version of the recommendation engine, they form a unique starting cohort; the long-term outcomes of the subsequent cohorts are compared to the previous cohorts' outcomes to validate the effectiveness of each iteration of the recommendation engines.”)

As per claim 4, Morse further teaches: The computer-implemented method of claim 3, wherein fine-tuning the transformer model further comprises converting the ground truth summaries into the subsequent questions by employing a question generative model. (see [0053]-[0070], which disclose the steps to fine-tuning and creating the custom LLM generative model, and see [0075]-[0079], which disclose the steps to utilizing ground truth expert summaries to tune the model for subsequent questions)

As per claim 5, Morse further teaches: The computer-implemented method of claim 4, wherein converting the ground truth summaries into the subsequent questions further comprises utilizing question templates to construct the subsequent questions from extracted entities. ([0077] discloses, “1. An "instruction" prompt built from live clinical follow-up transcripts, which tells the LLM what the purpose of the question is (e.g. "To assess the levels of social support"), what the original generic question is (e.g. "Tell me about interaction with family or friends."), what their previous answer was as recorded in the notes JSON (e.g. "boyfriend"), and then instruct the LLM to formulate an ideal personalized question using the provided data intended.” And see [0078], which discloses, “2. A "completion" question that considers the instructions information, preferably written by a trained clinician using the Aiberry user interface, forms a private, proprietary dataset unique to Aiberry.
The question is written by the certified clinician as a hypothetical ideal question to ask the patient in a follow-up assessment to elicit the most useful and relevant response from that specific patient.” / Examiner interprets, as one of ordinary skill in the art would understand under BRI, the instruction prompts for ideal personalized questions as the evolution of the disclosed utilization of original generic questions, with these questions being the question templates.)

As per claim 6, Morse does not teach: The computer-implemented method of claim 1, further comprising prompting the predicted healthcare answers and unanswerable questions to a decision-making entity to obtain confirmed answers.

However, Sharma does teach: The computer-implemented method of claim 1, further comprising prompting the predicted healthcare answers and unanswerable questions to a decision-making entity to obtain confirmed answers. ([0055] discloses, “At 414 server system 104 fine-tunes the natural language model based on feedback received from the user device. For example, in response to receiving an answer, a user operating user device(s) 102 may provide feedback regarding the provided answer via a region on the interactive GUI being displayed on user device(s) 102. In one instance, the feedback may be an acknowledgement that the answer is correct. In another instance, the feedback may be an indication that there may be a more accurate answer in another document. In another instance, the feedback provided by the clinician may indicate that the answer is incorrect. Notably, the feedback may be in the form of text (e.g., a series of words or sentences, or data), numbers, formulas, and/or chemical compositions, entered by the user operating user device(s) 102. In another instance, the user may provide the feedback using speech, and one or more speech recognition techniques are implemented to interpret the feedback.
In one or more of the instances above, the feedback is transmitted to server system 104 and leveraged by the fine-tuner 218 to refine the language model implemented by language model component 216." And see [0041], which discloses, "The language model component 216 may leverage both the context from the sanitized text 214 and a user's question as input. In turn, language model component 216 may refer back to the context and make predictions about where the answer is inside the one or more documents. In furtherance of identifying a span (i.e., a section and/or passage, which may be visibly highlighted when presented to a user) of a document where the answer is located, the language model component 216 may generate a confidence score associated with the prediction that the provided answer and identified span of text is accurate. Notably, the extractive QA language model implemented by language model component 216 may identify multiple documents or passages within documents that may be relevant as a potential answer. The predictions made by the extractive QA language model (implemented by language model component 216) may be evaluated via one or more evaluation models, such as exact match (EM) (i.e., measures the percentage of predictions that match any one of the ground truth answers exactly), F1 (i.e., the weighted average of Precision and Recall), span-F1, and/or span-EM, which generate scores for each prediction. The F1 and EM metrics measure the number of overlapping tokens between the predicted answers and the ground truth answers.
F1 may be calculated as follows: F1 = 2 x (Precision x Recall) / (Precision + Recall)." [0042] discloses, "Wherein precision is the ratio of the number of shared words to the total number of words in the prediction, and recall is the ratio of the number of shared words to the total number of words in the ground truth." And see [0043], which discloses, "EM may be determined by evaluating whether characters of the model's prediction exactly match the characters of one of the true answers. In the event that the characters of the prediction match the characters of the true answers, then EM=1; and if there are no matches, EM=0." And see [0044], which discloses, "Accordingly, the language model may assign a score to each document and/or passage, and the passage with the highest score may be returned as an answer to a user. Alternatively, if the language model does not find an answer, the language model may return a notification indicative of such to a user.") It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's teachings with Sharma's teachings for the same reasons given for claim 1.

As per claim 20, Morse does not teach: The non-transitory computer program product of claim 17, further comprising training a question generative model in reverse with an extractive question answering dataset to predict questions from sentences that answer them. However, Sharma does teach: The non-transitory computer program product of claim 17, further comprising training a question generative model in reverse with an extractive question answering dataset to predict questions from sentences that answer them. ([0037] discloses, "Framework 200 may additionally include a pretrained language model component (not shown).
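The token-overlap F1 and exact-match (EM) definitions quoted from Sharma's [0041]-[0043] can be sketched in a few lines. This is a minimal illustration under the quoted definitions only; the whitespace tokenization and all function names are assumptions for the sketch, not code from either reference:

```python
from collections import Counter

def exact_match(prediction: str, truths: list[str]) -> int:
    # EM = 1 when the prediction exactly matches any one of the
    # ground-truth answers; otherwise EM = 0 (per quoted [0043]).
    return int(any(prediction == t for t in truths))

def token_f1(prediction: str, truth: str) -> float:
    # Precision = shared tokens / tokens in the prediction;
    # Recall = shared tokens / tokens in the ground truth;
    # F1 = 2 x (Precision x Recall) / (Precision + Recall).
    pred_tokens, truth_tokens = prediction.split(), truth.split()
    # Multiset intersection counts each shared token at most min(count) times.
    shared = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if shared == 0:
        return 0.0
    precision = shared / len(pred_tokens)
    recall = shared / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("reports chest pain", ["patient reports chest pain"]))  # 0
print(round(token_f1("reports chest pain", "patient reports chest pain"), 3))  # 0.857
```

In practice an extractive QA evaluation would take the maximum F1 and EM over all ground-truth answers for each question and then average across the evaluation set, which matches the "match any one of the ground truth answers" language in the quoted passage.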
A pretrained language model component may be a deep learning model that is trained on the training dataset 302 and/or one or more other datasets (e.g., a reading comprehension question and answer dataset like SQuAD).") It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's teachings with Sharma's teachings for the same reasons given for claim 1.

As per claims 8 and 10-13, they are system claims that repeat the same limitations of claims 1 and 3-6, the corresponding method claims, as a collection of elements as opposed to a series of process steps. Since the teachings of Morse, Sharma, and Gaur, as well as the motivation to combine, disclose the underlying process steps that constitute the methods of claims 1 and 3-6, it is respectfully submitted that they provide the underlying structural elements that perform the steps as well. As such, the limitations of claims 8 and 10-13 are rejected for the same reasons given above for claims 1 and 3-6.

As per claims 15 and 17-19, they are article of manufacture claims that repeat the same limitations of claims 1 and 3-5, the corresponding method claims, as a collection of executable instructions stored on machine-readable media as opposed to a series of process steps. Since the teachings of Morse, Sharma, and Gaur, as well as the motivation to combine, disclose the underlying process steps that constitute the methods of claims 1 and 3-5, it is respectfully submitted that they likewise disclose the executable instructions that perform the steps as well. As such, the limitations of claims 15 and 17-19 are rejected for the same reasons given above for claims 1 and 3-5.

Dependent claims 2, 9, and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Morse et al. (hereinafter Morse) (US20250201420A1) in view of Sharma et al. (hereinafter Sharma) (US20240095455A1), in further view of Gaur et al. (hereinafter Gaur) (US20230061906A1), and in even further view of ALAM (US20230317274A1).

As per claim 2, Morse, Sharma, and Gaur do not explicitly teach: The computer-implemented method of claim 1, further comprising employing an AI assistant trained with extracted healthcare data and corresponding textual prompts for a patient to assist with the decision making of a healthcare professional in generating a medical summary of the patient based on the healthcare summary sentences. However, ALAM does teach: The computer-implemented method of claim 1, further comprising employing an AI assistant trained with extracted healthcare data and corresponding textual prompts for a patient to assist with the decision making of a healthcare professional in generating a medical summary of the patient based on the healthcare summary sentences. ([0045] discloses, "Referring back to FIG. 3, the method 300 begins at block 302 where the AI assistant device 110 begins building/training the model 405 by transmitting conversation questions to the patient. In some examples, the AI assistant device 110 transmits the conversation questions at regular time intervals in order to build a baseline or initial condition for the patient 120. For example, the device 110 transmits the conversation questions every morning, every other day, more frequently, or less frequently depending on the needs of the patient 120 and the training status of the model 405. At block 304, the AI assistant device 110 captures audio from the environment 400 and detects utterances from the first audio for the patient 120 at block 306 as shown in FIG.
4A.") It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's, Sharma's, and Gaur's teachings with ALAM's teachings for the same reasons given for claim 1, as ALAM is utilizing machine learning in the form of an AI assistant to also track a patient's condition over time, and it would be predictable to understand that the various learning models in ALAM could be the learning models previously cited in Morse and Sharma.

As per claim 9, it is a system claim that repeats the same limitations of claim 2, the corresponding method claim, as a collection of elements as opposed to a series of process steps. Since the teachings of Morse, Sharma, Gaur, and ALAM, as well as the motivation to combine, disclose the underlying process steps that constitute the method of claim 2, it is respectfully submitted that they provide the underlying structural elements that perform the steps as well. As such, the limitations of claim 9 are rejected for the same reasons given above for claim 2.

As per claim 16, it is an article of manufacture claim that repeats the same limitations of claim 2, the corresponding method claim, as a collection of executable instructions stored on machine-readable media as opposed to a series of process steps. Since the teachings of Morse, Sharma, Gaur, and ALAM, as well as the motivation to combine, disclose the underlying process steps that constitute the method of claim 2, it is respectfully submitted that they likewise disclose the executable instructions that perform the steps as well. As such, the limitations of claim 16 are rejected for the same reasons given above for claim 2.

Dependent claims 7 and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over Morse et al. (hereinafter Morse) (US20250201420A1) in view of Sharma et al. (hereinafter Sharma) (US20240095455A1), in further view of Gaur et al. (hereinafter Gaur) (US20230061906A1), and in even further view of Devarakonda et al. (hereinafter Devarakonda) (US20180196921A1).

As per claim 7, Morse, Sharma, and Gaur do not explicitly teach: The computer-implemented method of claim 1, further comprising training the extractive answer model to pair answer contexts that involve abbreviations with questions that use spelled out words. However, Devarakonda does teach: The computer-implemented method of claim 1, further comprising training the extractive answer model to pair answer contexts that involve abbreviations with questions that use spelled out words. (see Fig. 4, and see [0022], which discloses, "The illustrative embodiments provide a mechanism that splits the problem of abbreviation expansion into abbreviation detection and detected abbreviation expansion. Abbreviation detection uses a collection of detectors, such as lookup in vocabularies and rule-based detection, and aggregates the results. An automatic expansion mechanism processes the resulting abbreviation using a machine learning algorithm, which uses features based on the frequency of occurrence of the abbreviation term (e.g., 'pt') and also the frequency of the expansion term (e.g., 'patient') in electronic medical records and contextual features surrounding the occurrence of the term (i.e., the abbreviation) in the EMRs. [0023] The embodiments are described below with reference to a question answering (QA) system;") It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Morse's, Sharma's, and Gaur's teachings with Devarakonda's teachings for the same reasons given for claim 1, as Devarakonda is utilizing machine learning to improve tracking/analyzing of human conditions over time, and it would be predictable to understand that the NLP and QA models (see [0098]) in Devarakonda are predictable variations of those previously cited in Morse and Sharma.
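The two-stage abbreviation handling quoted from Devarakonda (vocabulary-lookup detection, then frequency-driven expansion) can be illustrated with a deliberately simplified sketch. The lookup table, frequency counts, and function names below are hypothetical and invented for illustration; Devarakonda's actual mechanism also weighs the contextual features surrounding each occurrence, which this sketch omits:

```python
# Hypothetical clinical abbreviation vocabulary with illustrative
# expansion-term frequencies (invented for this sketch).
ABBREV_CANDIDATES = {
    "pt": {"patient": 950, "physical therapy": 40, "prothrombin time": 10},
    "bp": {"blood pressure": 990, "bipolar": 10},
}

def detect_abbreviations(tokens: list[str]) -> list[int]:
    # Detection stage: vocabulary lookup flags known abbreviation tokens.
    return [i for i, tok in enumerate(tokens) if tok.lower() in ABBREV_CANDIDATES]

def expand(tokens: list[str]) -> list[str]:
    # Expansion stage: choose the highest-frequency candidate expansion.
    # A fuller system would score candidates with contextual features too.
    out = list(tokens)
    for i in detect_abbreviations(tokens):
        candidates = ABBREV_CANDIDATES[tokens[i].lower()]
        out[i] = max(candidates, key=candidates.get)
    return out

print(" ".join(expand("the pt reports elevated bp".split())))
# the patient reports elevated blood pressure
```

Pairing expanded contexts like this with questions that use the spelled-out terms is one plausible way to construct the abbreviation-robust training pairs that claim 7 recites.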
As per claim 14, it is a system claim that repeats the same limitations of claim 7, the corresponding method claim, as a collection of elements as opposed to a series of process steps. Since the teachings of Morse, Sharma, Gaur, and Devarakonda, as well as the motivation to combine, disclose the underlying process steps that constitute the method of claim 7, it is respectfully submitted that they provide the underlying structural elements that perform the steps as well. As such, the limitations of claim 14 are rejected for the same reasons given above for claim 7.

Response to Arguments Regarding 35 U.S.C. § 101 Rejection

The applicant argues on pages 1-6 of the submitted remarks that amended claims 1-20 are eligible under 35 U.S.C. § 101 for the following reasons (examiner notes arguments have been shortened to key points for response):

Applicant states that, assuming arguendo, the present embodiments include abstract ideas, which Applicant respectfully submits they do not, the present embodiments integrate the abstract ideas into a practical application. The present embodiments improve the functioning of artificial intelligence models by preventing artificial intelligence models from generating factually incorrect information. MPEP 2106.05(a) states that a claim integrates a judicial exception into a practical application when it reflects technical improvements to technology or to a technical field which are discussed in the specification. Here, the present embodiments improve the functioning of artificial intelligence models by preventing artificial intelligence models from generating factually incorrect information, such as hallucinations, which include generated text that is not grounded in information pertaining to a given task.
The prevalent issue of generating factually incorrect information with artificial intelligence models is described in at least paragraph [0017] of the Specification: "[0017] Prior art solutions impute the most likely description of a particular condition by generating summaries directly without first generating relevant healthcare questions, which may not be grounded in information pertaining to a given task. For example, a doctor can be presented with a patient exhibiting symptoms of tuberculosis that is also relevant to pneumonia. Generating summaries for both illnesses without asking relevant healthcare questions can lead to confusion and would be detrimental to the patient's health. Additionally, without asking relevant healthcare questions, the generated summary can overlook a patient's predispositions (e.g., hypertension predisposition, diabetes predisposition, etc.) and other inherited conditions that can be included in the patient's healthcare record." (Emphasis added.)

The methodology of how the present embodiments resolve this issue is described in at least paragraph [0018] of the Specification: "[0018] The present embodiments can improve accuracy of predicted healthcare reports and summaries by first generating relevant healthcare questions and predicting answers to the relevant healthcare questions by employing an extractive question answering model. Additionally, the present embodiments can also interact with a decision-making entity (e.g., doctor) to autonomously generate summaries that can be inferred from the data which can be validated by the doctor."

This methodology is reflected in at least amended claim 1. One skilled in the art would reasonably conclude that, based on at least paragraphs [0017]-[0018], the present embodiments resolve the prevalent issue of generating factually incorrect information, such as hallucinations, with AI models.
Additionally, one skilled in the art would readily appreciate that preventing artificial intelligence models from generating factually incorrect information improves the functionality of a computer or technology, as such improvement is rooted in computer technology. Thus, the present embodiments improve the functioning of artificial intelligence models by preventing artificial intelligence models from generating factually incorrect information, and claim 1 satisfies the requirements under 35 U.S.C. § 101 and recites patentable subject matter. Independent claims 8 and 15 include similar subject matter to claim 1 and have been similarly rejected by the Examiner. As such, it is respectfully asserted that claims 8 and 15 satisfy the requirements of 35 U.S.C. § 101 at least due to the same reasons set forth above with regard to claim 1. Claims 2-7, 9-14, and 16-20 depend directly or indirectly from one of claims 1, 8, or 15, and thus include all the elements of claims 1, 8, or 15. Accordingly, claims 2-7, 9-14, and 16-20 satisfy the requirements of 35 U.S.C. § 101 at least due to their respective dependencies from one of claims 1, 8, or 15. Thus, reconsideration and withdrawal of the rejections are respectfully requested.

Examiner appreciates applicant's arguments but does not find them persuasive. The MPEP states: The Alice/Mayo two-part test is the only test that should be used to evaluate the eligibility of claims under examination. While the machine-or-transformation test is an important clue to eligibility, it should not be used as a separate test for eligibility. Instead, it should be considered as part of the "integration" determination or "significantly more" determination articulated in the Alice/Mayo test. Bilski v. Kappos, 561 U.S. 593, 605, 95 USPQ2d 1001, 1007 (2010).
See MPEP § 2106.04(d) for more information about evaluating whether a claim reciting a judicial exception is integrated into a practical application, and MPEP § 2106.05(b) and MPEP § 2106.05(c) for more information about how the machine-or-transformation test fits into the Alice/Mayo two-part framework.

The enumerated groupings of abstract ideas are defined as:

1) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations (see MPEP § 2106.04(a)(2), subsection I). (Mathematical Calculations: A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered a mathematical calculation when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.)
2) Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II); and

3) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III).

Examiners should determine whether a claim recites an abstract idea by (1) identifying the specific limitation(s) in the claim under examination that the examiner believes recite an abstract idea, and (2) determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas listed above. Furthermore, the MPEP states in § 2106.04(d), "Examiners evaluate integration into a practical application by: (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (2) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application."

Therefore, respectfully, examiner disagrees with the applicant, as the claim limitations must be reviewed in light of the specification and the specification cannot be read into the claims. The limitations positively recited in claim 1 (as representative) are directed to a judicial exception (i.e., certain methods of organizing human activity: following rules or instructions to generate a healthcare summary report to assist a healthcare professional in making a decision).
This is abstract in substance, as the physician already follows a process of reviewing the data at his or her disposal to improve decision making for a patient. Additionally, the judicial exception (abstract idea) cannot integrate itself into a practical application, but any additional elements recited in the claim can be evaluated to determine whether they integrate the exception into a practical application. The claims' additional elements are not recited as being an improvement to a technology field or to the computer environment in which the claims operate. The claims do not recite improvements to the machine learning models, and examiner cannot read the specification into the claims. The claims are applying machine learning models to assist the decision making of doctors and to generate summaries which are validated by a doctor. The claim construction does not recite improvements to the machine learning models' accuracy or to the functioning of the machine learning models, but rather applies the models to improve the abstract idea of doctor summaries, which are then validated by doctors to support the doctor in a health decision. Examiner maintains that the claims are directed to an abstract idea and do not integrate it into a practical application. Therefore, they also do not amount to significantly more. Examiner maintains the rejection under 35 U.S.C. § 101.

Response to Arguments Regarding 35 U.S.C. § 103 Rejection

Applicant's arguments on remarks pages 6-10 with respect to claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The remainder of the arguments for the dependent claims are conclusory in manner and did not provide specific rebuttal to examiner's rejections. Thus, examiner maintains the 35 U.S.C. § 103 rejection.
Prior Art Cited But Not Relied Upon

Yuan et al. (hereinafter Yuan) (US20230095180A1): An approach is provided for optimizing a feedback-type question answering process. A training set is constructed to detect missing information of a question. A natural language generation model is trained using the missing information. The natural language generation model is executed to generate a rhetorical question. A response to the rhetorical question is combined with the question to generate an input to a language processor. A new question is generated. The new question is applied to a document library. A final answer is generated.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ashley Elizabeth Evans, whose telephone number is (571) 270-0110. The examiner can normally be reached Monday – Friday, 8:00 AM – 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mamon Obeid, can be reached at (571) 270-1813.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center. Should you have questions on access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/ASHLEY ELIZABETH EVANS/
Examiner, Art Unit 3687

/MAMON OBEID/
Supervisory Patent Examiner, Art Unit 3687

Prosecution Timeline

Jun 18, 2024
Application Filed
Jul 22, 2025
Non-Final Rejection — §101, §103
Sep 24, 2025
Interview Requested
Oct 02, 2025
Examiner Interview Summary
Oct 02, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Response Filed
Feb 01, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12518860
APPARATUS AND METHOD FOR CALCULATING AN OPTIMUM MEDICATION DOSE
2y 5m to grant Granted Jan 06, 2026
Patent 12505921
PLATFORM FOR ROUTING CLINICAL DATA
2y 5m to grant Granted Dec 23, 2025
Patent 12488864
APPARATUSES AND METHODS FOR ADAPTIVELY CONTROLLING CRYOABLATION SYSTEMS
2y 5m to grant Granted Dec 02, 2025
Patent 12062438
METHOD AND SYSTEM FOR AUTOMATING STANDARD API SPECIFICATION FOR DATA DISTRIBUTION BETWEEN HETEROGENEOUS SYSTEMS
2y 5m to grant Granted Aug 13, 2024
Patent 12027273
INTERACTIVE GRAPHICAL SYSTEM FOR ESTIMATING BODY MEASUREMENTS
2y 5m to grant Granted Jul 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
9%
Grant Probability
40%
With Interview (+31.0%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
