Prosecution Insights
Last updated: April 19, 2026
Application No. 18/646,948

IDENTIFICATION AND ANALYTICS OF DIAGNOSIS INDICATORS WITH NARRATIVE NOTES

Final Rejection §101 §103 §112
Filed
Apr 26, 2024
Examiner
TIEDEMAN, JASON S
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Core Solutions Inc.
OA Round
2 (Final)
29%
Grant Probability
At Risk
3-4
OA Rounds
4y 0m
To Grant
64%
With Interview

Examiner Intelligence

Grants only 29% of cases
29%
Career Allow Rate
101 granted / 343 resolved
-22.6% vs TC avg
Strong +35% interview lift
+34.8%
Interview Lift
resolved cases with interview
Typical timeline
4y 0m
Avg Prosecution
31 currently pending
Career history
374
Total Applications
across all art units

Statute-Specific Performance

§101
32.5%
-7.5% vs TC avg
§103
29.6%
-10.4% vs TC avg
§102
9.4%
-30.6% vs TC avg
§112
22.8%
-17.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 343 resolved cases
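The headline figures in the cards above follow from the raw counts; a quick sketch of the arithmetic (the TC average is back-derived from the displayed "-22.6% vs TC avg" delta, and the with-interview rate is read off the "64% With Interview" card, so small rounding mismatches with the tool's +34.8% lift are expected):

```python
# Reproducing the examiner stat-card arithmetic from the counts shown above.
granted, resolved = 101, 343

career_allow_rate = 100 * granted / resolved         # 29.4% -> displayed as 29%
implied_tc_average = career_allow_rate + 22.6        # ~52.0%

with_interview = 64.0                                # from the "64% With Interview" card
interview_lift = with_interview - career_allow_rate  # ~+34.6% (tool shows +34.8%)

print(f"{career_allow_rate:.1f}% allow / TC ~{implied_tc_average:.1f}% / lift {interview_lift:+.1f}%")
```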

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

In the amendment dated 20 February 2026, the following occurred: Claims 1, 17, and 20 were amended. Claims 1-20 are pending.

Priority

This application claims priority to U.S. Provisional Patent Application No. 63/462,851 dated 28 April 2023.

Subject Matter Free of Prior Art

The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within dependent claims 6, 9, 14, 16, 18, and 19. In particular, the cited prior art of record fails to expressly teach or suggest the combination of:

Claims 6, 18 – the recitation of (Claim 6 being representative): cross-referencing each of the plurality of diagnosis indicators with a diagnosis-symptom database comprising diagnoses and diagnosis indicators associated with each diagnosis; identifying, for each of the plurality of diagnosis indicators, the at least one possible patient diagnosis based on the cross-referencing; and counting a number of occurrences for each of the at least one possible patient diagnosis.

Claim 9 – the recitation of: selecting one of the plurality of diagnosis indicators for analysis; calculating a first score for the selected diagnosis indicator based on context within the narrative note; receiving a plurality of previous scores associated with the selected diagnosis indicator and associated with a plurality of previous patient visits; and trending the plurality of previous scores and the first score for the selected diagnosis indicator, wherein the visual representation comprises a graphical representation resulting from the trending that facilitates clinical decision making.
Claims 14, 16, 19 – the recitation of (Claim 14 being representative): scoring each of the plurality of diagnosis indicators based on a respective context within the narrative note; cross-referencing each of the plurality of diagnosis indicators with a severity database; identifying, for each of the plurality of diagnosis indicators, a respective weight based on the cross-referencing; and calculating, for each of the plurality of diagnosis indicators, a respective weighted score by multiplying a respective score for the plurality of diagnosis indicators by a respective weight.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

The claims recite a method, computer-readable storage medium (“CRM”), and system for diagnosis indicator identification, which are within a statutory category.
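As an aside on mechanics: the weighted-score limitation of representative Claim 14, quoted above, reduces to a per-indicator multiply of a context-based score by a severity weight obtained by cross-reference. A toy sketch; the severity weights, indicator names, and scores below are invented purely for illustration and come from neither the application nor the cited art:

```python
# Toy model of Claim 14's steps: score each indicator, cross-reference a
# severity database for its weight, then multiply. All values are invented.
SEVERITY_DB = {"insomnia": 1.0, "anhedonia": 2.5, "suicidal ideation": 5.0}

def weighted_scores(context_scores: dict[str, float]) -> dict[str, float]:
    """context_scores maps each diagnosis indicator to its context-based score."""
    return {
        indicator: score * SEVERITY_DB.get(indicator, 1.0)  # default weight 1.0
        for indicator, score in context_scores.items()
    }

print(weighted_scores({"insomnia": 0.8, "anhedonia": 0.6}))
# {'insomnia': 0.8, 'anhedonia': 1.5}
```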
Step 2A1

The limitations of:

Claims 1, 17: receiving a narrative note describing a first patient visit with a first patient, wherein the narrative note comprises unstructured data; applying a […] algorithm to the unstructured data of the narrative note to identify a plurality of diagnosis indicators, wherein the […] algorithm is trained […] to identify a plurality of diagnosis indicators, wherein applying the […] algorithm to the unstructured data of the narrative note to identify the plurality of diagnosis indicators comprises: preprocessing raw text of the unstructured data to generate clean text [comprising words or sentences]; generating predictions, for one or more words or sentences in the narrative note, of a likelihood that such one or more words or sentences is a diagnosis indicator; performing at least one analysis based on information regarding the plurality of diagnosis indicators to thereby identify at least one possible patient diagnosis; storing the plurality of diagnosis indicators with the at least one possible patient diagnosis; and, upon request, outputting a visual representation of results of the at least one analysis […] for display […], wherein the result is associated with the at least one possible patient diagnosis and the visual representation facilitates one or more of clinical decision making and health engagement outreach

Claim 20: implement a procedure to identify diagnosis indicators; train a […] model to identify diagnosis indicators; receive a narrative note comprising unstructured data describing a first patient visit with a first patient; identify, using the model, a plurality of diagnosis indicators in the narrative note, wherein the plurality of diagnosis indicators comprise exact matches for a symptom and synonym that express the same concept, wherein the […] model is trained at least in part to use one or more natural language processing algorithms to analyze the unstructured data; output the narrative note and the plurality
of diagnosis indicators; receive at least one indication that at least one of the plurality of diagnosis indicators are one of correct or incorrect; retrain the […] model based on the at least one indication; determine that an accuracy threshold is exceeded for the […] model as a result of the retraining; and deploy the […] model such that the […] model is accessible […] to a plurality of end users, as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components. The claims encompass a series of rules or instructions for a person or persons to follow, with or without the aid of a computer, to identify diagnosis indicators in unstructured data and use those indicators for behavioral health diagnostics (see Spec. Para. 0002, 0003, describing identifying diagnosis indicators in unstructured data (i.e., narrative notes) as a human activity) in the manner described in the identified abstract idea, supra. The rules or instructions for Claims 1 and 17 are the claimed steps of “receiving …applying …generating …performing …storing …and outputting” as indicated supra. The rules or instructions for Claim 20 are the claimed steps of “receive …identify …output …receive …retrain …determine …and deploy” as indicated supra. Other than reciting generic computer components (discussed infra), the claimed invention amounts to managing personal behavior or interaction between people. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The Examiner notes that Claims 1 and 17 recite “the machine learning algorithm is trained” and Claim 20 recites “train a machine learning model to identify diagnosis indicators” and “retrain the machine learning model based on the at least one [received] indication [of correctness].” The type of training utilized by the claimed invention is not claimed or exclusively defined by the Applicant. As such, the Examiner is required to analyze the training step given the broadest reasonable interpretation. The training of the ML model is considered to be part of the abstract idea because it falls under data manipulations that humans perform (i.e., fitting a model to data) and is thus interpreted to be part of the abstraction—in this case, the rules or instructions that fall under Certain Methods of Organizing Human Activity. See, e.g., Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 12 (Fed. Cir. April 18, 2025) (finding that “[i]terative training using selected training material…are incident to the very nature of machine learning.”).

Step 2A2

This judicial exception is not integrated into a practical application. In particular, the claims recite the additional element of a (Claim 1) method implemented by a data processing system, (Claim 17) a computer product stored on a CRM, or (Claim 20) system comprising a processing device and memory that implements the identified abstract idea(s). These computer elements are not described by the applicant and are recited at a high level of generality (i.e., a generic computer or components thereof) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims further recite the following additional elements:
an end-user device optionally having a display (Claims 1, 17, 20)
one or more networks (Claims 1, 17, 20)
using a trained machine learning algorithm/model (Claims 1, 17, 20)
use/using natural language processing algorithms by segmenting the clean text into a plurality of tokens, predicting parts of speech for the plurality of tokens by applying a part-of-speech tagging component configured to assign, for each token, a part-of-speech tag selected from a part-of-speech tag set, parsing the plurality of tokens to convert the tokens to machine language to analyze syntax of the clean text (Claims 1, 17)
a relational database (Claims 1, 17, 20)
an API server / API (Claims 17, 20)
a plurality of end user devices
one or more other networks

The end-user device having a display, one or more networks, relational database, API, API server, plurality of end user devices, and one or more other networks all merely generally link the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. The additional element of using a trained machine learning algorithm/model to generate predictions that one or more words/sentences in a diagnostic note are a diagnosis indicator represents mere instructions to implement the abstract idea on a generic computer. See, e.g., analysis of Example 47, Claim 2. Implementing an abstract idea using a generic computer or components thereof does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. See also Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 10 (Fed. Cir. April 18, 2025) (finding that claims that do no more than apply established methods of machine learning to a new data environment are ineligible).
The “use/using natural language processing algorithms by segmenting the clean text into a plurality of tokens, predicting parts of speech for the plurality of tokens by applying a part-of-speech tagging component configured to assign, for each token, a part-of-speech tag selected from a part-of-speech tag set, parsing the plurality of tokens to convert the tokens to machine language to analyze syntax of the clean text” represents the definition of how Natural Language Processing (“NLP”) operates (see prior art rejection). The claim therefore recites training a Machine Learning model to operate as NLP. Thus, these features generally link the abstract idea to a particular technological environment or field of use, i.e., NLP. MPEP 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application.

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a (Claim 1) method implemented by a data processing system, (Claim 17) a computer product stored on a CRM, or (Claim 20) system comprising a processing device and memory to perform the noted steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”).
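For orientation, the NLP sequence the record keeps returning to (clean the raw text, segment it into tokens, tag parts of speech, then analyze syntax and flag candidate indicators) can be sketched in a few lines. This is a toy illustration only: the hand-rolled tag set and indicator lexicon below are invented stand-ins for a real toolkit such as NLTK or cTAKES, and nothing here reflects the applicant's or the cited art's actual implementation.

```python
import re

# Toy pipeline: preprocess -> tokenize -> POS-tag -> flag indicators.
# TAGSET and INDICATORS are invented for illustration.
TAGSET = {"patient": "NOUN", "reports": "VERB", "severe": "ADJ",
          "insomnia": "NOUN", "and": "CONJ", "anhedonia": "NOUN"}
INDICATORS = {"insomnia", "anhedonia"}

def preprocess(raw: str) -> str:
    """Generate 'clean text': lowercase, drop punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z\s]", " ", raw.lower())).strip()

def tokenize(clean: str) -> list[str]:
    return clean.split()

def pos_tag(tokens: list[str]) -> list[tuple[str, str]]:
    return [(tok, TAGSET.get(tok, "X")) for tok in tokens]

def predict_indicators(tagged: list[tuple[str, str]]) -> list[tuple[str, float]]:
    """Flag nouns found in the indicator lexicon, with a dummy likelihood."""
    return [(tok, 0.9) for tok, tag in tagged
            if tag == "NOUN" and tok in INDICATORS]

note = "Patient reports severe INSOMNIA, and anhedonia."
print(predict_indicators(pos_tag(tokenize(preprocess(note)))))
# [('insomnia', 0.9), ('anhedonia', 0.9)]
```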
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of:
an end-user device optionally having a display (Claims 1, 17, 20)
one or more networks (Claims 1, 17, 20)
use/using natural language processing algorithms by segmenting the clean text into a plurality of tokens, predicting parts of speech for the plurality of tokens by applying a part-of-speech tagging component configured to assign, for each token, a part-of-speech tag selected from a part-of-speech tag set, parsing the plurality of tokens to convert the tokens to machine language to analyze syntax of the clean text (Claims 1, 17)
a relational database (Claims 1, 17, 20)
an API server / API (Claims 17, 20)
a plurality of end user devices
one or more other networks

were determined to either generally link the claimed invention to a particular technological environment or field of use or represent mere instructions to implement the abstract idea using a generic computer or components thereof. These additional elements have been reevaluated under the “significantly more” analysis and determined to be insufficient to provide significantly more. MPEP 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide significantly more. Further and for completeness, the Examiner notes that the prior art of record indicates that NLP is well-understood, routine and conventional in the art. See US 11,587,652 to Willis at Col. 12, Lns. 44-51; US 2020/0066391 to Sachdeva et al. at Para. 0189; US 2018/0225259 to Alba et al. at Para. 0027. The additional element of using a trained machine learning algorithm/model (Claims 1, 17, 20) was determined to represent mere instructions to implement the abstract idea on a generic computer. MPEP 2106.05(f) indicates that mere instructions to implement the abstract idea on a generic computer cannot provide significantly more.
As such, the claim is not patent eligible.

Claims 2-16, 18, and 19 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims are subject matter eligible, even when considered individually or as an ordered combination.
Claim(s) 2 merely describe(s) the diagnosis indicators, which further defines the abstract idea.
Claim(s) 3 merely describe(s) providing data, obtaining data, and retraining the model, which further defines the abstract idea. The additional elements of an end user and one or more networks are analyzed the same as the independent claims.
Claim(s) 4 merely describe(s) how the training occurs, which further defines the abstract idea.
Claim(s) 5 merely describe(s) how the ML algorithm is deployed, which further defines the abstract idea. The additional element of an API server is analyzed in the same manner as that presented with respect to Claim 20.
Claim(s) 6, 9, 10, 11 merely describe(s) additional diagnosis indicator analysis, which further defines the abstract idea.
Claim(s) 7 merely describe(s) clustering subsets of diagnoses, which further defines the abstract idea.
Claim(s) 8 merely describe(s) how subsets are identified, which further defines the abstract idea.
Claim(s) 12 merely describe(s) the scores, which further defines the abstract idea. The claim recites the additional elements of “a plurality of end user devices” which are analyzed in the same manner as the end user device of Claim 1.
Claim(s) 13 merely describe(s) calculating a score from cross-reference data, which further defines the abstract idea. The severity database is interpreted to be part of the computer of Claim 1.
Claim(s) 14 merely describe(s) calculating a weighted score from diagnosis indicators in the note and cross-referenced data, which further defines the abstract idea.
The severity database is interpreted to be part of the computer of Claim 1.
Claim(s) 15 merely describe(s) using the weighted score data and displaying a trend, which further defines the abstract idea.
Claim(s) 16, 19 merely describe(s) additional data analysis such as scoring diagnosis data, cross-referencing the diagnosis indicators, calculating a weighted score, normalizing, and identifying a subset of patients, which further defines the abstract idea. Claim 19 recites the additional elements of “a plurality of end user devices” which are analyzed in the same manner as the end user device of Claim 1.
Claim(s) 18 merely describe(s) cross-referencing the diagnosis indicators and displaying possible diagnoses in a cluster, which further defines the abstract idea.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 20 is rejected for lack of adequate written description.
Claim 20 recites functional steps for which the Applicant has not adequately described the steps in sufficient detail for one of ordinary skill in the art to conclude that the Applicant had possession of the invention at the time of filing. This is a new matter rejection. Specifically, the claim recites “wherein the plurality of diagnosis indicators comprise exact matches for a symptom and synonyms that express the same concept....” Per interpretation (1) described in the 112(b) rejection, infra, the as-filed disclosure does not provide support for the plurality of diagnostic indicators comprising both symptoms and synonyms of the symptoms. Specification Para. 0070, which is the only place “synonym” appears, states:

[Images of Specification Para. 0070 reproduced in the original Office Action are omitted here.]

As can be seen, Para. 0070 describes a synonym as an alternative (“or”) to a symptom. Para. 0074 bolsters this assessment where it says that “symptoms as well as synonyms” are identified in the note. As such, the use of “and” in relation to the symptom and synonym thereof lacks written description.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 20 recites “wherein the plurality of diagnosis indicators comprise exact matches for a symptom and synonyms that express the same concept....” The claim is indefinite because it is unclear whether the diagnosis indicators are in reference to one symptom (and, potentially, synonyms thereof) or whether the diagnosis indicators are in reference to multiple symptoms (and, potentially, synonyms thereof). The claim can be read either way. The claim is also indefinite because it is unclear whether a single diagnosis indicator of the plurality of diagnosis indicators requires: (1) both a symptom and a synonym thereof to meet the claim, i.e., a first indicator is a symptom and a synonym of the symptom, or whether (2) a symptom or a synonym thereof meets the claim, i.e., a first indicator is a symptom and a second indicator is a synonym of a/the symptom (either the same symptom as the first indicator or a different symptom). For the purposes of examination, the Examiner interprets the claim to require that a diagnostic indicator comprises a symptom or a synonym thereof. See also 112(a) rejection, supra.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
§ 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 17, and 20 is/are rejected under 35 U.S.C. § 103 as being unpatentable over Kailasam et al. (U.S. Pre-Grant Patent Publication No. 2023/0046045) as evidenced by Stryker (“What is NLP?”) in view of Gnanasambandam et al. (hereinafter “Gnan;” U.S. Pre-Grant Patent Publication No. 2024/0087700). Note: The Examiner notes that due to the limitations of the USPTO’s tools, the provided copy of “What is NLP?” has portions of the text cut off. Applicant is directed to https://www.ibm.com/think/topics/natural-language-processing should they require access to the complete document.

REGARDING CLAIM 1

Kailasam teaches the claimed computer implemented method for diagnosis indicator identification and analytics in a data processing system comprising a processing device and a memory comprising instructions which are executed by the processing device, the method comprising: [Para. 0017, 0025, 0026 teaches a remote server that implements the disclosed functionality.] receiving, from an end user device via one or more networks, a narrative note describing a first patient visit with a first patient, wherein the narrative note comprises unstructured data; [Para. 0025, 0026 teaches a remote server receives a document from a caregiver computer. Para. 0022 teaches that they are connected via a network (the internet). Para.
0013, 0014, 0044 teaches receiving an unstructured clinician note document about a clinician encounter with a patient (a first patient visit).] applying a […] algorithm to the unstructured data of the narrative note to identify a plurality of diagnosis indicators, [Para. 0014, 0046, 0048, 0053 teaches parsing the document (note) using NLP techniques (an algorithm) to identify diagnoses (diagnostic indicators).] wherein the […algorithm uses…] natural language processing to identify the plurality of diagnosis indicators; [Para. 0014, 0046, 0048, 0053 teaches parsing the document using NLP techniques (an algorithm) to extract a plurality of medical condition information (a plurality of diagnosis indicators) to find a diagnosis.] wherein applying the […] algorithm to the unstructured data of the narrative note to identify the plurality of diagnosis indicators comprises: preprocessing raw text of the unstructured data to generate clean text; [Para. 0014 teaches that NLP is performed on the unstructured text to parse and extract discrete clinical elements. As evidenced by Stryker at Pg. 8, 9, converting unstructured text so it can be used (the preprocessing step) is how NLP (disclosed by Kailasam) operates.] segmenting the clean text into a plurality of tokens; [Para. 0014 teaches that NLP is performed on the unstructured text. As evidenced by Stryker at Pg. 8, the segmenting of text into tokens is how NLP (disclosed by Kailasam) operates.] predicting parts of speech for the plurality of tokens by applying a part-of-speech tagging component configured to assign, for each token, a part-of-speech tag selected from a part-of-speech tag set; [Para. 0014 teaches that NLP is performed on the unstructured text. As evidenced by Stryker at Pg. 8, 9, the POS tagging using a tag set (i.e., Natural Language Toolkit (NLTK), or cTAKES of Para. 0046) is how NLP (disclosed by Kailasam) operates.]
parsing the plurality of tokens to convert the tokens to machine language to analyze syntax of the clean text; and [Para. 0014 teaches that NLP is performed on the unstructured text. As evidenced by Stryker at Pg. 4, 5, the performance of syntactic analysis (which by definition is parsing) on the processed (i.e., tokenized, cleaned) text is how NLP (disclosed by Kailasam) operates.] generating predictions, for one or more words or sentences in the narrative note, of a likelihood that such one or more words or sentences is a diagnosis indicator; [Para. 0046 teaches that the NLP is used to extract elements including a diagnosis (a diagnostic indicator) from the note. This is necessarily a prediction that the portion of the note is a diagnosis. The Examiner notes that there is no claimed indication as to how the predictions/likelihood are generated and further notes that the likelihood that is predicted is never used within the claim.] performing at least one analysis based on information regarding the plurality of diagnosis indicators to thereby identify at least one possible patient diagnosis; [Para. 0063, 0065 teaches that the medical condition information is used to verify a diagnosis for the medical condition (at least one analysis).] storing the plurality of diagnosis indicators and a result of the at least one analysis […]; and [Para. 0034, 0046 teaches that the extracted information from the analysis is stored in association with the electronic document in a data store.] upon electronic request, outputting a visual representation of results of the at least one analysis via the one or more networks and to the end user device for display on a display device of the end user device, [Para. 0015, 0068, Claim 21, 30 teaches that the verified clinical condition (diagnosis) is provided to the user. This occurs in real-time with the entry of the unstructured clinician note document and is thus interpreted to be displayed on the user’s device.
Because the features of Claims 21, 30 are implemented by a computer, this means that the provision of the notification is necessarily based on an electronic request from one portion of the computer to another to send the notification, there being no indication as to what provides the request.] wherein the result is associated with the at least one possible patient diagnosis and the visual representation facilitates one or more of clinical decision making and health engagement outreach. [Para. 0068, Claim 21, 30 teaches that the verified clinical condition (diagnosis) is “associated” with the patient, i.e., it is data related to the patient. The Examiner notes that “the visual representation facilitates…” is an intended use of the displayed diagnosis and is not required to occur.] Kailasam may not explicitly teach that the data is stored in a relational database management system; however, the limitation claims information/labels that do not result in a manipulative difference between the information/labels of the prior art and the functionality of the claimed method. The function taught by the prior art would be performed the same regardless of whether the information/labels were substituted with nothing. Because Kailasam teaches that the data is stored in a storage, substituting the information/labels of the claimed invention (a relational database management system) for the information/labels of the prior art (a data store) would be an obvious substitution of one known element for another, producing predictable results. Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have substituted the information/labels applied to the storage location of the prior art with any other information/labels because the results would have been predictable.
The Examiner notes that there is no functionality associated with the relational database management system, i.e., it is merely a label applied to the storage location, and thus it represents non-functional, descriptive information.

Kailasam may not explicitly teach applying a machine learning algorithm to the unstructured data of the narrative note to identify a plurality of diagnosis indicators, wherein the machine learning model is trained at least in part to use natural language processing to identify the plurality of diagnosis indicators. Gnan at Fig. 1, Para. 0077, 0081, 0082, 0125 teaches that it was known in the art of computerized healthcare, at the time of filing, to apply knowledge graphs created by machine learning models to patient data extracted using natural language analysis: applying a machine learning algorithm to the unstructured data of the narrative note to identify a plurality of diagnosis indicators, wherein the machine learning model is trained at least in part to use natural language processing to identify the plurality of diagnosis indicators […] wherein applying the machine learning algorithm to the unstructured data [Gnan at Fig. 1, Para. 0077, 0081, 0082 teaches a cognitive intelligence platform that extracts concepts from patient data including patient notes (one of which is interpreted to correspond to the clinical note of Kailasam) by natural language analysis (natural language processing). Gnan at Para. 0077, 0081 teaches that patient notes (the clinical note of Kailasam) are compared to one or more knowledge graphs (machine learning algorithms). Gnan at Para. 0125, 0126 teaches that machine learning is used to generate the knowledge graphs, thus the knowledge graphs are machine learning algorithms.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, at the time of filing, to modify the clinical data processing system of Kailasam to apply knowledge graphs created by machine learning models to patient data extracted using natural language analysis as taught by Gnan, with the motivation of improving health outcomes, which reduces costs (see Gnan at Para. 0002). REGARDING CLAIM 2 Kailasam/Gnan teaches the claimed computer-implemented method of Claim 1. Kailasam/Gnan further teaches wherein the plurality of diagnosis indicators comprise one or more of a symptom or a phrase, [Kailasam at Para. 0014, 0044 teaches that a clinical note is processed. Gnan at Para. 0081 teaches that the patient note includes a description of patient symptoms.] wherein the phrase describes one of a personal experience or an external experience of the first patient. [The Examiner notes that “a phrase” was optional in the previous limitation. Because that option was not taken, the phrase is not required and the Examiner declines to address this limitation.] Motivation to combine the teachings of Kailasam and Gnan is the same as that presented with respect to Claim 1, which is reiterated here. REGARDING CLAIM 3 Kailasam/Gnan teaches the claimed computer-implemented method of Claim 1. Kailasam/Gnan further teaches providing, to the end user device via the one or more networks, the […data…] and the plurality of diagnosis indicators; [Gnan at Para. 0086 teaches that cognified data is presented on a computing device of a physician. Gnan at Para. 0085, 0087 teaches that the cognified data is created by “instilling intelligence into the unstructured data using the knowledge graph and the logical structure” and includes conclusions (diagnoses).] obtaining, from the end user device, at least one indication that at least one of the plurality of diagnosis indicators are one of correct or incorrect; and [Gnan at Fig. 23, Para. 
0087 teaches that the physician enters feedback pertaining to whether the diagnosis (at least one diagnosis indicator) is accurate.] retraining the machine learning algorithm based on the at least one indication. [Gnan at Para. 0087, 0128 teaches that the feedback is used to update the machine learning model(s) and thus the knowledge graphs.] Motivation to combine the teachings of Kailasam and Gnan is the same as that presented with respect to Claim 1, which is reiterated here. Kailasam/Gnan may not explicitly teach providing the narrative note to the physician; however, it would have been prima facie obvious to one of ordinary skill in the art at the time of filing to combine the outputting of patient-related information for use in providing feedback of Gnan with the patient note(s) of Gnan, since the combination is merely simple substitution of one known element for another producing a predictable result (KSR rationale B). Since each individual element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of the patient note(s) data for the cognified data displayed on the provider device. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious. REGARDING CLAIM 4 Kailasam/Gnan teaches the claimed computer-implemented method of Claims 1 and 3. Kailasam/Gnan may not explicitly teach further comprising repeating the receiving, applying, performing, outputting, providing, obtaining, and retraining until an accuracy threshold for the machine learning algorithm is exceeded. [Gnan at Para. 0124 teaches generating the machine learning models by training, testing, and validating the model. Gnan at Para. 0082, 0086, 0128 teaches that feedback as to the accuracy of the model output is used to update the model that is then used to generate a diagnosis. 
Gnan at Para. 0346, 0347 teaches that a closed-loop feedback system is implemented, which the Examiner interprets as repeating the steps described previously in Gnan. The utilization of feedback data to update (retrain) the model necessarily involves meeting (exceeding) an accuracy threshold (a p-value), as would be understood by a person having skill in the art.] REGARDING CLAIM 5 Kailasam/Gnan teaches the claimed computer-implemented method of Claims 1, 3, and 4. Kailasam/Gnan may not explicitly teach further comprising deploying the machine learning algorithm on an application programming interface (API) server such that the machine learning algorithm is accessible via a provided API to a plurality of end user devices via another one or more networks. [Gnan at Fig. 1, Para. 0094, 0095, 0133, 0366 teaches utilization of an API to access the cognitive intelligence platform via a user device. Further, a person having skill in the art would understand that different systems that interface with one another (i.e., the server and smartphone of Gnan) require an API; APIs are the industry-standard method for interfacing two disparate computer systems.] REGARDING CLAIM(S) 17 Claim(s) 17 is/are analogous to Claim(s) 1; thus Claim(s) 17 is/are similarly analyzed and rejected in a manner consistent with the rejection of Claim(s) 1. Kailasam/Gnan further teaches network and an application programming interface (API) provided to the end user device, [Gnan at Fig. 1, Para. 0094, 0095, 0133, 0366 teaches utilization of an API to access the cognitive intelligence platform via a user device. Further, a person having skill in the art would understand that different systems that interface with one another (i.e., the server and smartphone of Gnan) require an API; APIs are the industry-standard method for interfacing two disparate computer systems.] 
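The closed-loop behavior the Examiner reads into Gnan for claim 4 (repeat predict, collect clinician feedback, retrain, stop once an accuracy threshold is exceeded) might be sketched as follows. All names, the threshold, and the feedback source are invented for illustration; this is not the cited art's implementation.

```python
def feedback_retrain_loop(model, get_batch, get_clinician_feedback,
                          retrain, threshold=0.95, max_rounds=20):
    """Repeat predict -> collect feedback -> retrain until the model's
    accuracy on clinician feedback exceeds `threshold` (cf. claim 4)."""
    accuracy = 0.0
    for round_no in range(1, max_rounds + 1):
        notes = get_batch()
        predictions = [model(note) for note in notes]
        labels = get_clinician_feedback(notes)  # correct/incorrect marks
        accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(notes)
        if accuracy > threshold:
            return round_no, accuracy           # threshold exceeded: stop
        model = retrain(model, notes, labels)   # otherwise retrain and repeat
    return max_rounds, accuracy
```

Note the stopping condition is strictly "exceeds," matching the claim language "until an accuracy threshold ... is exceeded"; the `max_rounds` guard simply prevents an unbounded loop if the model never improves.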
REGARDING CLAIM(S) 20 Claim(s) 20 is/are analogous to Claim(s) 1, 4, and 5; thus Claim(s) 20 is/are similarly analyzed and rejected in a manner consistent with the rejection of Claim(s) 1, 4, and 5. As best understood by the Examiner, Kailasam/Gnan further teaches wherein the plurality of diagnosis indicators comprise exact matches for a symptom and synonyms that express the same concept, [Per the 112(b) interpretation, Kailasam at Para. 0049 teaches that a clinical condition (i.e., diagnosis) is identified using one or more clinical ontologies that provide contextual relationships between particular clinical conditions and clinical concepts that are disclosed to be symptoms. The symptoms of the ontology are interpreted as exact matches.] Response to Arguments Claim Objections Regarding the objection(s) to Claim 1, the Applicant has amended the claims to overcome the basis/bases of objection. Drawings Regarding the drawing objection(s), the Applicant has submitted replacement drawings which have alleviated several drawing issues; however, additional issues remain. As such, the objection is maintained. Rejection under 35 U.S.C. § 101 Regarding the rejection of Claims 1-20, the Examiner has considered the Applicant’s arguments; however, the arguments are not persuasive. Applicant argues: Each claimed step is performed by a data processing system, not a human, and no instructions or rules are provided for a human to perform. […] Further, while claim 1 [and claim 20] recites that output visual representation can be used by a human to facilitate clinical decision making and health engagement outreach, this is not an instruction or a rule for a human. Regarding (a), the Examiner respectfully disagrees. The claims in multiple CAFC decisions that the Office has characterized as Certain Methods of Organizing Human Activity did not actively recite a person or persons performing the steps of the claims (see, e.g., EPG, TLI Communications, Ultramercial). 
Because whether a human is required to perform the steps of the claim is not a requirement for claims to encompass Certain Methods of Organizing Human Activity, this argument is not persuasive. Further, MPEP 2106.04(a)(2)(II) states that a claimed invention is directed to certain methods of organizing human activity if the identified claim elements contain limitations that encompass fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The Examiner submits that the identified claim elements represent a series of rules or instructions for a person or persons to follow, with or without the aid of a computer, to (to paraphrase) identify and analyze diagnosis indicators in narrative notes (see Spec. Para. 0002). Applicant has not pointed to anything in the claims that falls outside of this characterization. Because the claim elements fall under a series of rules or instructions that a person or persons would follow to identify and analyze diagnosis indicators in narrative notes, the claimed invention is directed to an abstract idea. Both Example 39 and claim 20 recite two iterations of training a machine learning algorithm. Regarding (b), the Examiner respectfully disagrees that Applicant’s claims are similar to Example 39. The claims in Example 39 were found to not be directed to any of the enumerated types of abstract ideas and were thus eligible under Step 2A, Prong 1, of the Alice Corp. test for subject matter eligibility. MPEP 2106.04(a)(1) states that “examiners should keep in mind that while all inventions at some level embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas, not all claims recite an abstract idea” (internal quotations omitted). Example 39 is a hypothetical illustration of this principle. 
The training, use, and subsequent retraining of the neural network model in Example 39 are all functions that are outside the ambit of an abstract idea (see MPEP 2106.04(a)(1)(vii)). And, while there may be an abstraction present in the collection of data, the remainder of the claim (all the additional elements of the claim) is purely directed to improvements in training a neural network to detect faces. This is contrasted with Applicant’s claimed invention, which recites the abstract idea of (to paraphrase) identifying and analyzing diagnosis indicators in narrative notes and which represents Certain Methods of Organizing Human Activity as described in the basis of rejection. The additional element(s) of training and/or using a trained machine learning model represent the use of machine learning as a tool that is applied to the abstract idea to achieve the described result. Because there is an identified abstract idea present in Applicant’s claims and the additional elements of training and/or using a machine learning model are merely additional elements ancillary to this identified abstract idea, the recitation of machine learning is insufficient to provide a practical application and the claims are not subject matter eligible. See, e.g., Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 10 (Fed. Cir. April 18, 2025) (finding that claims that do no more than apply established methods of machine learning to a new data environment are ineligible). As applied here, claim 20 does not recite a relationship between variables or numbers and thus does not recite any mathematical relationships. Regarding (c), the Examiner respectfully submits that the entirety of the abstraction of Claim 20 was not characterized as a mathematical concept. It was characterized as Certain Methods of Organizing Human Activity. 
What was characterized as the creation of mathematical interrelationships (i.e., a mathematical concept) between data was the training/retraining of the machine learning model. There is no training/retraining of machine learning that is not the creation of mathematical interrelationships between data. If Applicant has invented some new type of training that is not the creation of mathematical interrelationships between data, a written description rejection would be in order, because no such training currently exists. In any event, the training/retraining of the ML model has been reconsidered based on updated training and is currently interpreted to be part of the rules or instructions of Certain Methods of Organizing Human Activity. Either way, it constitutes part of the abstraction. [Any abstract idea present in the claims is] integrated into a practical application because the claimed invention transforms data from unstructured clinical notes "into meaningful results that can be used by clinicians and industry participants." Regarding (d), the Examiner respectfully disagrees. Per MPEP 2106.05(c), “mere manipulation of basic mathematical constructs, i.e., the paradigmatic abstract idea, has not been deemed a transformation” (internal quotations omitted). Put another way, transforming data from one type to another does not provide a practical application. Further, the claim transforms the unstructured data into results that may not be used by clinicians and industry participants. There is no indication in the claims that the outputted information is used at all; it just exists. This is further complicated by the fact that conventional natural language processing techniques are not capable of identifying what symptoms a patient is having and what diagnoses are trending. Id. at [0056]. Regarding (e), the Examiner respectfully submits that Para. 0056 of the Specification (and the PgPub) does not say this at all. 
Further, all the problems described with clinical notes are not problems caused by the computer; they are nontechnical problems for which the computer is being used as a tool to make the analysis faster. Per MPEP 2106.05(f)(2), “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In these ways, the disclosed technology enables new capabilities not previously possible. Regarding (e), the Examiner respectfully submits that there is no indication that the computer could not be programmed to perform the noted functions per McRO. What Applicant’s arguments describe is an improvement to the abstract idea. The abstract idea cannot provide the improvement. See MPEP 2106.05(a), which states “It is important to note, the judicial exception alone cannot provide the improvement.” An improved abstract idea is still an abstract idea. Amended claim 1 recites, in detail, how the unstructured data is processed and how it is stored. It is not simply "certain methods of organizing human activity" as alleged in the Office Action. Regarding (e), the Examiner respectfully submits that the storage of the data is part of the abstract idea. The various NLP steps have been analyzed as additional elements and, as indicated in the prior art rejection, represent the definition of how NLP is performed. Thus, they generally link the claimed invention to a particular technological environment or field of use. And, for completeness, the prior art of record indicated that NLP is well-understood, routine, and conventional in the art. Rejection under 35 U.S.C. § 103 Regarding the rejection of Claims 1-5, 7, 8, 10-13, 15, 17, and 20, the Examiner has considered the Applicant’s arguments; however, the arguments are not persuasive. 
Applicant argues: In particular, while Kailasam et al. may disclose identifying clinical concepts in clinical notes, it is silent with respect to generating predictions indicating a likelihood that words in sentences represent a diagnosis indicator. Regarding (a), the Examiner respectfully disagrees. There is no claimed indication as to what “predictions indicating a likelihood” entails. By outputting the diagnoses (diagnosis indicators), the analysis of Kailasam is necessarily indicating that the portion of the note analyzed has a likelihood, and thus a prediction, that the diagnosis is present in the portion analyzed, i.e., 100%. The Examiner further notes that the “predictions” are never used in the remainder of the claim. This may be an area that the Applicant may wish to expand upon to further define the claim. Applicant respectfully submits that the cited references fail to teach or suggest [the identifying step including the diagnosis indicators comprising exact matches for symptoms and synonyms]. Regarding (b), the Examiner respectfully disagrees for the reasons noted in the basis of rejection. Conclusion Prior art made of record though not relied upon in the present basis of rejection is noted in the attached PTO-892 and includes: Garcia Santa et al. (U.S. Pre-Grant Patent Publication No. 2020/0118683), which discloses using a semantically annotated knowledge graph of medical terms used in standard codes to assist medical personnel in making a diagnosis. Riskin et al. (U.S. Pre-Grant Patent Publication No. 2014/0181128), which discloses an NLP processing system that transforms a data set into a plurality of contexts in order to identify patterns and relationships. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON S TIEDEMAN whose telephone number is (571)272-4594. The examiner can normally be reached 7:00am-4:00pm, off alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan can be reached at 571-272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JASON S TIEDEMAN/Primary Examiner, Art Unit 3683
Prosecution Timeline

Apr 26, 2024
Application Filed
Aug 21, 2025
Non-Final Rejection — §101, §103, §112
Feb 20, 2026
Response Filed
Mar 20, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592304
Rules-Based Processing of Structured Data
2y 5m to grant Granted Mar 31, 2026
Patent 12558476
INFUSION PUMP ADMINISTRATION SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12562254
SURGICAL DATA SPECIALTY HARMONIZATION FOR TRAINING MACHINE LEARNING MODELS
2y 5m to grant Granted Feb 24, 2026
Patent 12561657
SYSTEMS AND METHODS FOR ALLOCATING RESOURCES VIA INFORMATION TECHNOLOGY INFRASTRUCTURE
2y 5m to grant Granted Feb 24, 2026
Patent 12531156
METHOD FOR ADVANCED ALGORITHM SUPPORT
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
29%
Grant Probability
64%
With Interview (+34.8%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 343 resolved cases by this examiner. Grant probability derived from career allow rate.
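The 64% "With Interview" figure appears to be the 29% baseline grant probability plus the +34.8-point interview lift, rounded to the nearest whole percent. A quick check, assuming (as the display suggests) the lift is an additive percentage-point adjustment rather than a multiplicative one:

```python
base_grant_probability = 29.0  # examiner's career allow rate, in percent
interview_lift = 34.8          # interview lift in percentage points

# Additive adjustment, rounded for display: 29.0 + 34.8 = 63.8 -> 64
with_interview = round(base_grant_probability + interview_lift)
```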
