Prosecution Insights
Last updated: April 19, 2026
Application No. 18/797,060

AUTOMATED IDENTIFICATION OF THE DIAGNOSTIC CRITERIA IN NATURAL LANGUAGE DESCRIPTIONS OF PATIENT BEHAVIOR FOR COMBINING INTO A TRANSPARENT DIAGNOSTIC DECISION

Final Rejection — §101, §103
Filed
Aug 07, 2024
Examiner
MPAMUGO, CHINYERE
Art Unit
3685
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Arizona Board of Regents
OA Round
2 (Final)
Grant Probability: 27% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 54%

Examiner Intelligence

Career Allow Rate: 27% (88 granted / 328 resolved; -25.2% vs TC avg)
Interview Lift: +27.2% in resolved cases with interview
Avg Prosecution: 4y 0m (42 currently pending)
Total Applications: 370 across all art units

Statute-Specific Performance

§101: 43.0% (+3.0% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 328 resolved cases
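The per-statute deltas above appear to be simple differences from a single Tech Center baseline. A quick consistency check, under the assumption (not stated on the page) that the baseline is one flat TC average of 40.0%:

```python
# Consistency check of the statute-specific deltas shown above.
# Assumption: each delta = examiner allowance rate minus one flat
# Tech Center average of 40.0% (inferred, not stated on the page).
TC_AVG = 40.0  # percent

rates = {"101": 43.0, "103": 33.8, "102": 13.9, "112": 7.4}
deltas = {s: round(r - TC_AVG, 1) for s, r in rates.items()}
assert deltas == {"101": 3.0, "103": -6.2, "102": -26.1, "112": -32.6}
```

All four published deltas are reproduced exactly from a single 40.0% baseline, which suggests the "Tech Center average estimate" is one figure rather than a per-statute curve.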

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

In the response filed December 1, 2025, Applicant amended claims 1, 5, 9, 11, 15, and 19. Claims 6-8 and 16-18 were canceled. Claims 1-5, 9-15, 19, and 20 are pending in the current application.

Response to Arguments

Applicant's arguments with respect to the rejection under 35 U.S.C. 102 have been considered and are moot in view of the new ground of rejection. Applicant's arguments with respect to the rejection under 35 U.S.C. 101 have been considered and are not persuasive. First, Applicant asserts that the claims focus on a specific asserted improvement in computer capabilities by overcoming specific technical deficiencies in prior art machine learning-enabled diagnostic systems like Shriberg. Examiner respectfully disagrees. In Recentive Analytics, Inc., the claims recited conventional machine learning models without specific improvements to the technology itself. The court noted that "iterative training," a claimed feature, was inherent to all machine learning models and thus did not confer eligibility. Additionally, applying machine learning to event scheduling, an activity predating computers, did not transform the abstract idea into a patent-eligible invention. Simply applying generic machine learning techniques to new data domains (e.g., TV scheduling) without improving the underlying technology is insufficient for patent eligibility. Likewise, in this case, simply applying generic machine learning techniques to diagnosing medical conditions without improving the underlying technology is insufficient for patent eligibility, and the claims recite a mental process, i.e., observation of patient behavior with subsequent diagnosis based on that behavior. The rejection is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 9-15, 19, and 20 are rejected under 35 U.S.C. 101 because the claims are not directed to patent eligible subject matter. Claims 1-5, 9-15, 19, and 20 do fall within at least one of the four categories of patent eligible subject matter because the claims recite a machine (i.e., a system) and a process (i.e., a method). Although claims 1-5, 9-15, 19, and 20 fall under at least one of the four statutory categories, it must be determined whether the claims wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or are a particular practical application of a judicial exception (see MPEP 2106 I and II). Claims 1-5, 9-15, 19, and 20 are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Part I: Step 2A, Prong One: Identify the Abstract Idea

Under Step 2A, Prong One of the Alice framework, the claims are analyzed to determine if the claims are directed to a judicial exception. MPEP §2106.04(a). The determination consists of (a) identifying the specific limitations in the claim that recite an abstract idea; and (b) determining whether the identified limitations fall within at least one of the three subject matter groupings of abstract ideas (i.e., mathematical concepts, mental processes, and certain methods of organizing human activity).
The identified limitations of independent claim 11 (representative of independent claim 1) recite:

non-transitory computer readable storage media; and at least one hardware computer processor configured to: receive natural language descriptions of patient behavior; parse the natural language descriptions to identify individual sentences describing the patient behavior; determine, by a machine learning model trained using annotated natural language examples, whether each individual sentence is indicative of any of the predetermined diagnostic criteria used to diagnose the mental disorder or medical condition under the established medical guidelines; identify a final diagnostic label for a patient by determining whether the identified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria for diagnosing the mental disorder or medical condition under established medical guidelines; output the final diagnostic label, each of the identified diagnostic criteria, and each of the sentences identified as indicative of each of the identified diagnostic criteria via a graphical user interface; provide functionality, via the graphical user interface, for a medical practitioner to modify each of the identified diagnostic criteria; and identify a revised diagnostic label for the patient by determining whether the modified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria for diagnosing the mental disorder or medical condition under established medical guidelines.

The identified limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observation, evaluation, judgment, or opinion) but for the recitation of generic computer components. That is, other than reciting storage, a processor, and a machine learning model (interpreted as a computer), nothing in the claim elements precludes the steps from practically being performed in the mind.
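The claimed pipeline — classify each sentence against diagnostic criteria, then compare the count of distinct identified criteria to a guideline threshold, with a practitioner-editable revision step — can be illustrated with a minimal sketch. The classifier, criterion codes, keywords, and threshold below are hypothetical stand-ins for illustration, not the applicant's model:

```python
# Hypothetical sketch of the claimed threshold-counting pipeline.
# classify_sentence is a toy keyword matcher standing in for the
# trained ML classifier; criteria codes and keywords are invented.

def classify_sentence(sentence: str) -> set[str]:
    """Return the criteria (if any) the sentence is indicative of."""
    keywords = {"A1": ["depressed mood"],
                "A2": ["loss of interest"],
                "A4": ["insomnia"]}
    return {c for c, kws in keywords.items()
            if any(k in sentence.lower() for k in kws)}

def diagnose(sentences: list[str], threshold: int):
    """Map sentences to criteria, then apply the threshold count."""
    evidence: dict[str, list[str]] = {}
    for s in sentences:
        for criterion in classify_sentence(s):
            evidence.setdefault(criterion, []).append(s)
    label = "positive" if len(evidence) >= threshold else "negative"
    return label, evidence

notes = ["Patient reports depressed mood most days.",
         "Describes loss of interest in hobbies.",
         "Sleep is normal."]
label, evidence = diagnose(notes, threshold=2)
# A practitioner could then add or remove entries in `evidence`
# and re-run the threshold check to obtain a revised label.
```

The transparency the title claims comes from `evidence`: each criterion in the final label is tied back to the exact sentences that triggered it.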
For example, the identified limitations encompass a user (e.g., a psychiatrist) reviewing sentences for diagnostic criteria and determining a diagnostic label to diagnose a mental disorder. The claim limitations fall within the Mental Processes grouping of abstract ideas. Thus, the claimed invention recites a judicial exception.

Part I: Step 2A, Prong Two: Additional Elements that Integrate the Judicial Exception into a Practical Application

Under Step 2A, Prong Two of the Alice framework, the claims are analyzed to determine whether the claims recite additional elements that integrate the judicial exception into a practical application. In particular, the claims are evaluated to determine if there are additional elements, or a combination of elements, that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claims are more than a drafting effort designed to monopolize the judicial exception. This judicial exception is not integrated into a practical application. As a whole, the storage, processor, and machine learning model in the steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Dependent claims 2-5, 9-10, 12-15, 19, and 20, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea.
For instance, the dependent claims recite an expert determining whether a sentence falls into diagnostic criteria, sentences that are extracted (or obtained) from health records or social media content of the patient, using a medical practitioner to modify the diagnostic criteria, and labeling and training using a generic machine learning model. Since these claims are directed to an abstract idea, the Office must determine whether the remaining limitations "do significantly more" than describe the abstract idea.

Part II: Determine whether any Element, or Combination, Amounts to "Significantly More" than the Abstract Idea Itself

Under Part II, the steps of the claims, when considered individually and as an ordered combination, do not improve another technology or technical field, do not improve the functioning of the computer itself, and are not enough to qualify as "significantly more". For example, the steps require no more than a conventional computer to perform generic computer functions. As stated above in Prong Two, the storage, processor, and machine learning model in the steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Therefore, based on the two-part Mayo analysis, there are no meaningful limitations in the claim that transform the exception into a patent eligible application such that the claim amounts to significantly more than the exception itself. Claims 1-5, 9-15, 19, and 20, when considered individually and as an ordered combination, are rejected as ineligible subject matter under 35 U.S.C. 101. Dependent claims 2-5, 9-10, 12-15, 19, and 20, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional claims do not recite significantly more than an abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 9-15, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shriberg et al. (US 20210110894 A1) in view of Eleftherou et al. (US 2019/0348178 A1).
Regarding claims 1 and 11, Shriberg discloses a computer implemented method of identifying information potentially indicative of a mental disorder or medical condition that, under established medical guidelines, is diagnosed by determining whether a threshold number of predetermined diagnostic criteria describe patient behavior, the method comprising: receiving natural language descriptions of patient behavior (Paragraph [0340]: the NLP model 2015 is provided in greater detail. This system consumes the output from the Automatic Speech Recognition (ASR) system 2012 and performs post-processing on it via an ASR output post processor 2510, and [0305]: The ASR 2012 output includes a machine readable transcription of the speech portion of the audio data.); parsing the natural language descriptions to identify individual sentences describing the patient behavior (Paragraph [0343]: Statistical language model 2552 utilizes n-grams and pattern recognition within the ASR output to statistically match patterns and n-gram frequency); determining, by a machine learning model trained using annotated natural language examples, whether each individual sentence is indicative of any of the predetermined diagnostic criteria used to diagnose the mental disorder or medical condition under the established medical guidelines (Paragraph [0343]: particular sequences of words may be statistically indicative of depression. Likewise, particular vocabulary and word types used by a speaker may indicate depression or not having depression); identifying a final diagnostic label for a patient by determining whether the identified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria (The output for each of these modelers 2020 is provided, individually, to a calibration, confidence, and desired descriptors module 2092. This module calibrates the outputs in order to produce scaled scores, as well as provides confidence measures for the scores.
The desired descriptors module may assign human-readable labels to scores… if the NLP model 2015 classifies an individual as being not depressed, with a confidence of 0.56 (out of 0.00-1.00), but the acoustic model 2016 renders a depressed classification with a confidence of 0.97, in some cases the models' outputs may be weighted such that the acoustic model 2016 is provided a greater weight); providing a graphical user interface that displays the final diagnostic label, each of the identified diagnostic criteria, and each of the individual sentences identified as indicative of each of the identified diagnostic criteria (Paragraph [0171]: The systems described herein can output an electronic report identifying whether a patient is at risk of a mental or physiological condition. The electronic report can be configured to be displayed on a graphical user interface of a user's electronic device. The electronic report can include a quantification of the risk of the mental or physiological condition, e.g., a normalized score).

Shriberg discloses the limitations above. Shriberg does not explicitly disclose: providing functionality, via the graphical user interface, for a medical practitioner to modify each of the identified diagnostic criteria; and identifying a revised diagnostic label for the patient by determining whether the modified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria for diagnosing the mental disorder or medical condition under established medical guidelines.

Eleftherou discloses providing functionality, via the graphical user interface, for a medical practitioner to modify each of the identified diagnostic criteria (Paragraph [0022]: the diagnosis system 102 is configured to ask questions of the medical professional (e.g., to discuss the additional data needed, etc.) who may then provide responses as interaction data 112.
As stated, the diagnosis component 105 and/or the reasoning component 106 may continue to develop and revise the diagnosis, treatment plans, and additional observations needed as additional data for each patient is observed); and identifying a revised diagnostic label for the patient by determining whether the modified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria for diagnosing the mental disorder or medical condition under established medical guidelines (Paragraph [0022]: The diagnosis component 105 and/or the reasoning component 106 generate candidate treatment plans that optimize the risk adjusted outcome for the patient, and identify the additional data observations (e.g., lab tests, patient state attributes, etc.) that are needed to reduce the uncertainty of the patient's state model, and improve the accuracy of the models).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Shriberg to provide functionality, via the graphical user interface, for a medical practitioner to modify each of the identified diagnostic criteria, and to identify a revised diagnostic label for the patient by determining whether the modified diagnostic criteria constitute the threshold number of predetermined diagnostic criteria for diagnosing the mental disorder or medical condition under established medical guidelines, as taught by Eleftherou. Shriberg discloses assessing a mental state of a subject in a single session or over multiple different sessions using an automated module (Shriberg Abstract). Using the medical diagnosis system with continuous learning and reasoning of Eleftherou would generate candidate treatment plans that optimize the risk adjusted outcome for the patient (Eleftherou Paragraph [0022]).
Regarding claims 2 and 12, Shriberg discloses wherein some of the natural language examples are: extracted from electronic health records of patients diagnosed with the medical condition; sentences used by patients diagnosed with the medical condition; or sentences used to describe behavior or symptoms of patients diagnosed with the medical condition (Paragraph [0159]: The web server 240 and model server(s) 230 leverage user data 220 which is additionally populated by clinical and social data 210. The clinical data may include… electronic health record (EHR) systems).

Regarding claims 3 and 13, Shriberg discloses wherein the annotated natural language examples are labeled by an expert as being indicative of one or more of the diagnostic criteria or not indicative of any diagnostic criterion (Paragraph [0077]: The at least one model can be trained on speech data from a plurality of other test subjects who have a clinical determination of the mental condition. The clinical determinations may serve as labels for the speech data.).

Regarding claims 4 and 14, Shriberg discloses wherein the individual sentences are extracted from electronic health records of the patient (Paragraph [0159]).

Regarding claims 5 and 15, Shriberg discloses wherein the individual sentences are received via a user interface, an application programming interface, extracted from social media content shared by patient caregivers or patients, or extracted from videos or audio recordings of patients or patient caregivers (Paragraph [0153]: Social data server 106 may be a server computer system that makes social data of the patient, including social media posts, online purchases, searches, etc., available, e.g., to health screening or monitoring server).
Regarding claims 9 and 19, Shriberg discloses further comprising: labeling the sentences identified by the machine learning model using the modified diagnostic criteria (Paragraph [0303]: Label data from the clients 260a-n is provided to a label data set 2021 in the user data 220. This may be stored in various databases 2023. Label data includes not only verified diagnosed patients, but inferred labels collected from particular user attributes or human annotation); and training the machine learning model using the sentences identified by the machine learning model and labeled using the modified diagnostic criteria (Paragraph [0304]: The training data filter 2001 may consume speech and video data and append label data 2021 to it to generate a training dataset. This training dataset is provided to model training server(s) 2030 for the generation of a set of machine learned models).

Regarding claims 10 and 20, Shriberg discloses wherein the machine learning model comprises a bidirectional gated recurrent unit (BiGRU) model, a hybrid bidirectional long short-term memory (BiLSTM-H) model, a multilabel BiLSTM (BiLSTM-M) model, or a large language model (LLM) (Paragraph [0286]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHINYERE MPAMUGO whose telephone number is (571) 272-8853. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached at (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHINYERE MPAMUGO/
Primary Examiner, Art Unit 3685

Prosecution Timeline

Aug 07, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §101, §103
Dec 01, 2025
Response Filed
Jan 10, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586024
DIGITAL TWIN BASED SYSTEMS AND METHODS FOR BUSINESS CONTINUITY PLAN AND SAFE RETURN TO WORKPLACE
2y 5m to grant Granted Mar 24, 2026
Patent 12579550
METHOD AND SYSTEM FOR EMERGENT DATA PROCESSING
2y 5m to grant Granted Mar 17, 2026
Patent 12562241
SYSTEM AND METHOD FOR DETECTING ISSUES IN CLINICAL STUDY SITE AND SUBJECT COMPLIANCE
2y 5m to grant Granted Feb 24, 2026
Patent 12537073
GENETIC MODEL VALIDATION METHODS
2y 5m to grant Granted Jan 27, 2026
Patent 12537081
INTERVERTEBRAL CAGE WITH INTEGRATED TRANSMITTER
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 54% (+27.2%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 328 resolved cases by this examiner. Grant probability derived from career allow rate.
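The headline projections follow directly from the raw counts, under the stated assumption that grant probability is the career allow rate and the further assumption that "With Interview" adds the +27.2% lift in percentage points:

```python
# Reproducing the projection figures from the raw counts above.
# Assumption: "With Interview" = career allow rate + the +27.2%
# interview lift, both in percentage points.
granted, resolved = 88, 328
base = 100 * granted / resolved          # career allow rate, ~26.8%
interview_lift = 27.2                    # percentage points
with_interview = base + interview_lift   # ~54.0%

assert round(base) == 27
assert round(with_interview) == 54
```

Both displayed figures (27% and 54%) round out of 88/328 and the additive lift, so the page's numbers are internally consistent.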
