Prosecution Insights
Last updated: April 19, 2026
Application No. 17/448,614

MENTAL HEALTH PLATFORM

Final Rejection (§101, §103)
Filed: Sep 23, 2021
Examiner: LEE, ANDREW ELDRIDGE
Art Unit: 3684
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Precise Behavioral Inc.
OA Round: 4 (Final)
Grant Probability: 18% (At Risk)
OA Rounds: 5-6
To Grant: 4y 7m
With Interview: 51%

Examiner Intelligence

Career Allow Rate: 18% (23 granted / 130 resolved; -34.3% vs TC avg)
Interview Lift: +33.5% on resolved cases with interview
Avg Prosecution: 4y 7m (41 applications currently pending)
Total Applications: 171 (across all art units)

Statute-Specific Performance

§101: 38.9% (-1.1% vs TC avg)
§103: 40.8% (+0.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Deltas shown against a Tech Center average estimate • Based on career data from 130 resolved cases
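The headline figures above can be cross-checked with a few lines of arithmetic. This is a sketch against the displayed numbers only; the TC-average baseline is back-computed from the reported delta rather than independently sourced, and small gaps reflect rounding in the dashboard's displayed percentages:

```python
# Cross-check the examiner statistics reported above.

granted = 23
resolved = 130

# Career allow rate: 23 / 130 ≈ 17.7%, which the dashboard rounds to 18%.
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # prints 17.7%

# The TC-average baseline implied by the reported -34.3% delta (an
# assumption: back-computed, not a published figure).
tc_average = allow_rate + 34.3
print(f"Implied TC average: {tc_average:.1f}%")  # prints 52.0%

# Interview lift: 51% with interview vs the unrounded 17.7% base rate.
lift = 51.0 - allow_rate
print(f"Lift vs base rate: {lift:.1f} points")  # prints 33.3
```

The computed lift of 33.3 points sits just under the reported +33.5%, consistent with the 51% figure itself being rounded.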

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

In the response filed on 13 May 2025, claims 1, 8 and 15 have been amended. Claims 1-20 are now pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 8 and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method, a system and a non-transitory computer readable medium (CRM) for performing the limitations of claim 8, which is representative of claims 1 and 15:

generating a plurality of sets of training data, the plurality of sets of training data comprising portions of journal entries and inputs to mental health questionnaires corresponding to a plurality of patients; generating a prediction model to generate a health score of a patient, the health score indicative of a current mental health of the patient, by: injecting tags into the portions of the journal entries that signal semantic tone and sentiment of each portion of each journal entry; and [… generating …], based on modified portions of the journal entries and the inputs to the mental health questionnaires, a relationship between the journal entries, the inputs, and a mental health of the patient; [… obtaining …] a first plurality of inputs from a target patient, the first plurality of inputs comprising a first input in a first format, the first input comprising first target
journal entries, and a second input in a second format, the second input comprising first target responses to the mental health questionnaires; building a time series representation of the [… obtained …] first plurality of inputs, wherein the first target journal entries comprise a current target journal entry and previous target journal entries, and wherein the first target responses comprise a current target response and previous target responses; analyzing the time series representation by performing […] processing on the time series representation, wherein the […] processing comprises tagging portions of the current target journal entry and the current target response with semantic tone and sentiment indicators in a context of the previous target journal entries and the previous target responses; and generating, via the prediction model, a target health score for the target patient based on the analyzing of the time series representation; [… obtaining …] a second plurality of inputs from the target patient after the first plurality of inputs, the second plurality of inputs comprising: a third input comprising a second target journal entry submitted after the first target journal entry, and a fourth input comprising at least one second target response to the mental health questionnaires submitted after the first target responses; analyzing the second target journal entry and the second target response using […] processing to tag portions of the second target journal entry and the second target response with semantic tone and sentiment indicators in a context of the first target journal entries and the first target responses; and generating, via the prediction model, an updated target health score for the target patient based on the analyzing of the second target journal entry and the second target response. 
The claim, as drafted, is a process that, under its broadest reasonable interpretation, covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions). That is, by a human user interacting with a computer system (claim 1), a computer system comprising a processor and memory (claim 8) and a CRM and a processor (claim 15), the claimed invention amounts to managing personal behavior or interaction between people. The Examiner notes that, as stated in MPEP 2106.04(a)(2), "certain activity between a person and a computer… may fall within the 'certain methods of organizing human activity' grouping". For example, by a human user interacting with a computer system (claim 1), a computer system comprising a processor and memory (claim 8) and a CRM and a processor (claim 15), the claim encompasses collection of data, organization of the collected data to generate a model, organization of data by analysis using the organized model to determine and output for a human user a target health score, and further human interaction to update the organization and output of collected data. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the "method of organizing human activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a computer system (claim 1), a computer system comprising a processor and memory (claim 8) and a CRM and a processor (claim 15), which implement the identified abstract idea.
The computer system (claim 1), computer system comprising a processor and memory (claim 8) and CRM and processor (claim 15) are recited at a high level of generality (i.e., general purpose computers; see Applicant's Specification Figure 1 and paragraph [0030]) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claims also recite the additional elements of "learning…", "receiving…", and "performing natural language processing…". The "learning…" step is recited at a high level of generality (i.e., training a generic off-the-shelf machine learning model to make predictions) and amounts to merely linking the abstract idea to a particular technological environment. The "receiving…" steps are recited at a high level of generality (i.e., as a general means of receiving/transmitting data) and amount to the mere transmission and/or receipt of data, which is a form of extra-solution activity. The "performing natural language processing…" step is recited at a high level of generality (i.e., extracting data from text/language) and amounts to merely linking the abstract idea to a particular technological environment. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer system (claim 1), a computer system comprising a processor and memory (claim 8) and a CRM and a processor (claim 15), to perform the noted steps, amount to no more than mere instructions to apply the exception using generic hardware components. Mere instructions to apply an exception using a generic hardware component cannot provide an inventive concept ("significantly more"). Also as discussed above, the claims recite the additional elements of "learning…", "receiving…", and "performing natural language processing…". The "learning…" steps have been re-evaluated under the "significantly more" analysis and determined to be well-understood, routine, and conventional elements/functions. As described in 2020/0152304 (Chang): paragraphs [0040]-[0042]; and 2019/0311035 (Chhaya): paragraph [0020]; learning via a model is well-understood, routine, and conventional. The "receiving…" steps have been re-evaluated under the "significantly more" analysis and determined to be well-understood, routine, and conventional elements/functions. As described in MPEP 2106.05(d)(II)(i), "Receiving or transmitting data over a network" is well-understood, routine, and conventional. The "performing natural language processing…" steps have been re-evaluated under the "significantly more" analysis and determined to be well-understood, routine, and conventional elements/functions. As described in 2020/0152304 (Chang): paragraph [0042]; and 2019/0311035 (Chhaya): paragraph [0036]; extracting data to analyze via natural language processing (NLP) is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide "significantly more." As such, the claim is not patent eligible.
Claims 2-7, 9-14 and 16-20 are similarly rejected because they either further define the abstract idea and/or do not further limit the claims to a practical application or provide an inventive concept such that the claims would be subject matter eligible. Claims 2, 9 and 16 recite annotating the data via clinicians, but do not recite any additional elements and therefore cannot provide a practical application and/or significantly more. Claims 3, 5-6, 10, 12-13, 17 and 19-20 recite the additional element of various devices; however, the various devices are recited at a high level of generality (i.e., general purpose computers; see Applicant's Specification Figure 1 and paragraph [0030]) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the various devices, to perform the noted steps, amount to no more than mere instructions to apply the exception using generic hardware components. Mere instructions to apply an exception using a generic hardware component cannot provide an inventive concept ("significantly more"). Claims 4, 11 and 18 recite the additional element of a neural network; however, use of a neural network is recited at a high level of generality (i.e., training a generic off-the-shelf neural network model to make predictions) and amounts to merely linking the abstract idea to a particular technological environment.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claims are directed to an abstract idea. Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element was considered to generally link the abstract idea to a particular technological environment. This has been re-evaluated under the "significantly more" analysis and determined to be well-understood, routine, and conventional elements/functions. As described in 2020/0152304 (Chang): paragraph [0040]; and 2019/0311035 (Chhaya): paragraph [0060]; training and use of a neural network model is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide "significantly more." As such, the claims are not patent eligible. Claims 7 and 14 recite the responses in the journal entry, but do not recite any additional elements and therefore cannot provide a practical application and/or significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 2020/0152304 (hereafter "Chang"), in view of U.S. Patent Pub. No. 2019/0311035 (hereafter "Chhaya"), further in view of U.S. Patent Pub. No. 2018/0005645 (hereafter "Khaleghi"), and further in view of U.S. Patent Pub. No.
2020/0090812 (hereafter “Condie”). Regarding (Currently Amended) claim 1, Chang teaches a method (Chang: paragraph [0002], “a system, apparatus and method for generating personalized and effective mental health therapies and recommendations through intelligent processing of data related to voice-journaling”), comprising: --generating, by a computing system, […] training data, […] training data comprising portions of journal entries and inputs to mental health [… questions …] corresponding to a plurality of patients (Chang: paragraph [0007], “the system includes a processor, a memory communicatively coupled to the processor”, paragraph [0042], “utilize datasets using natural language tools, and detect the emotions expressed… machine learning models can be trained on a dataset to extract”, paragraph [0044], “derive the source of stress, anxiety, and depression with further questions”, paragraph [0055]-[0057], “the user may be prompted to enter basic biographical info (birthdate, location, etc.) but may also be prompted to enter additional such as mental health and/or medical history”, paragraph [0064], “the user may be prompted to rate their feelings prior to the journal entry or may label their emotional responses afterward”, paragraph [0086]-[0087], “utilize data extracted from various user's journal entry data to better train machine learning models which can more accurately identify data within captured voice data and may offer more user-approved recommendations based on past individual and/or group feedback… receive a plurality of journal entries that can span any number of users or any amount of time (block 605). 
The various journal entries can be utilized to generate a plurality of journal entry data… generate aggregated journal entry data”); --generating, by the computing system, a prediction model to generate a health score of a patient, the health score indicative of a current mental health of the patient (Chang: paragraph [0042], “machine learning models can be trained on a dataset”, paragraph [0058], “Machine learning data can comprise specific algorithms related to generating sentiment scores, topic classifications, emotional classifications, and/or clinical diagnoses”. The Examiner notes that “to generate a health score of a patient” is an intended use of the generation of the prediction model that is not required to occur. This feature has been fully considered by the Examiner; however, the limitation does not provide patentable distinction over the cited prior art because it is an intended use or result of the generation of the prediction model), by: --injecting tags into the portions of the journal entries that signal semantic tone and sentiment of each portion of each journal entry (Chang: paragraph [0007], “generate a sentiment analysis”, paragraph [0042], “Machine learning models for analyzing voice-based journal entries can utilize datasets using natural language tools, and detect the emotions expressed in the journal entry… emotions can be categorized”, paragraph [0053], “generate textural data 262, which subsequently be used to generate analysis data 265 such as a sentiment score. 
Analyzer data 223 may also process the textual data 262 along with other available data sources to generate emotional classifications and/or topic classifications”, paragraphs [0065]-[0067], “Voice marker data can include any data derived from analyzing the voice data 261 for stressors, inflections, tone, or any other vocal characteristic that may be indicative of mental state… emotional classification data may be a plurality of emotional tags that can be associated with the journal entry data 260”, paragraph [0080], “generate an emotional classification that may label at least one portion of the journal entry with an emotional classification”); and --learning, based on modified portions of the journal entries and the inputs to the mental health [… questions …], a relationship between the journal entries, the inputs, and a mental health of the patient (Chang: Figure 6, paragraph [0040], “a system for utilizing neural-network and other machine learning models configured for recording and analyzing voice journal entries. Embodiments of the system can analyze the raw text and audio from natural conversation to discover speech patterns indicative of depression or other mental health conditions”, paragraph [0064], “contextual data 263 comprises any supplemental data that can be generated and associated with the captured voice data… processing their voice data 261 and other subsequent data”, paragraph [0087], “determine patterns, matches, and/or trends within the various journal entry data”); --receiving, by the computing system, a first plurality of inputs from a target patient, the first plurality of inputs comprising: a first input in a first format, the first input comprising first target journal entries, and […] first target responses to the mental health [… questions …] (Chang: Figure 6, paragraph [0007], “receive a plurality of voice journal entries from a user the received voice journal entries comprise at least voice data and contextual data”, paragraph [0044], “questions like cognitive and/or dialectical behavioral”, paragraph [0051],
“Journaling logic 221 may work with the user interface logic 225 to provide a user with one or more tools and/or prompts to facilitate recording of a voice-based journal entry.”, paragraph [0064], “the user may be prompted to rate their feelings prior to the journal entry or may label their emotional responses afterward”, paragraph [0088], “As a result of receiving new journal entries, more journal entry data is generated”); building, by the computing system, a […] representation of the received first plurality of inputs, wherein the first target journal entries comprise a current target journal entry and previous target journal entries, and wherein the first target responses comprise a current target response and previous target responses (Chang: Figures 2, 6, paragraph [0057], “User data may also comprise historical trends, and/or habits relating to voice-based journaling including dates of previous journal entries”, paragraphs [0060]-[0061], “journaling data 250 comprises a plurality of journal entry data 2601-260N which may be configured such that each journal data entry 260 includes data relating to a specific journal entry… journal entry data 260 can exist within a journal data store 250 and can be unique to each journal entry that is captured by a user”, paragraph [0075], “provide a voice-based journal entry by talking with a personal listening computing device 140, and finally add further journal entries”, paragraphs [0086]-[0088], “generate aggregated journal entry data… Processing may be done to determine patterns, matches, and/or trends within the various journal entry data”. The Examiner notes, as seen in Fig.
2A, that the aggregated journal data is linked in a representation and that this journal data is interpreted to comprise a current response that is added to previous journal entries, which teaches a representation under the broadest reasonable interpretation); --analyzing, by the computing system, the […] representation by: performing natural language processing on the […] representation, wherein the natural language processing comprises tagging portions of the current target journal entry and the current target response with semantic tone and sentiment indicators in a context of the previous target journal entries and the previous target responses (Chang: Figure 6, paragraph [0007], “an analyzer logic configured to extract textual data from the plurality of voice journal entries, generate a sentiment analysis score based on the textual data, generate an emotional classification score based on the voice data, the textual data and the contextual data”, paragraphs [0040]-[0042], “discover speech patterns indicative of depression or other mental health conditions… information gets contextualized to a user… Machine learning models for analyzing voice-based journal entries can utilize datasets using natural language tools, and detect the emotions expressed in the journal entry… emotions can be categorized”, paragraph [0053], “generate textural data 262, which subsequently be used to generate analysis data 265 such as a sentiment score. Analyzer data 223 may also process the textual data 262 along with other available data sources to generate emotional classifications and/or topic classifications”, paragraph [0080], “Contextual data may be generated from data… contextual data may comprise… voice marker data associated with the voice data.
Utilizing all available data sources, the process 400 may generate an emotional classification that may label at least one portion of the journal entry with an emotional classification”, paragraphs [0086]-[0088], “utilizing the aggregated journal entry data is to update at least one of the machine learning models… process the newly generated journal entry data with at least one of the updated machine learning training models”. The Examiner notes that use of an updated model that has been updated to include previous journal entries reads on the broadest reasonable interpretation of the “in a context of” language); --generating, by the computing system via the prediction model, a target health score for the target patient based on the analyzing of the […] representation (Chang: Fig. 2, paragraph [0007], “an analyzer logic configured to extract textual data from the plurality of voice journal entries, generate a sentiment analysis score based on the textual data, generate an emotional classification score based on the voice data, the textual data and the contextual data”); receiving, by the computing system, a second plurality of inputs from the target patient after the first plurality of inputs, the second plurality of inputs comprising: a third input comprising a second target journal entry submitted after the first target journal entry, and a fourth input comprising at least one second target response to the mental health [… questions …] submitted after the first target responses (Chang: Figure 6, paragraph [0007], “receive a plurality of voice journal entries from a user the received voice journal entries comprise at least voice data and contextual data”, paragraph [0044], “questions like cognitive and/or dialectical behavioral”, paragraph [0051], “Journaling logic 221 may work with the user interface logic 225 to provide a user with one or more tools and/or prompts to facilitate recording of a voice-based journal entry.”, paragraph [0064], “the user may be prompted to
rate their feelings prior to the journal entry or may label their emotional responses afterward”, paragraph [0088], “receive additional journal entries (block 650). These new journal entries may be from the same user or from a new user. As a result of receiving new journal entries, more journal entry data is generated (block 660)”); analyzing, by the computing system, the second target journal entry and the second target response by: performing natural language processing on the second target journal entry and the second target response, wherein the natural language processing comprises tagging portions of the second target journal entry and the second target response with semantic tone and sentiment indicators in a context of the first target journal entries and the first target responses (Chang: Figure 6, paragraph [0007], “an analyzer logic configured to extract textual data from the plurality of voice journal entries, generate a sentiment analysis score based on the textual data, generate an emotional classification score based on the voice data, the textual data and the contextual data”, paragraphs [0040]-[0042], “discover speech patterns indicative of depression or other mental health conditions… information gets contextualized to a user… Machine learning models for analyzing voice-based journal entries can utilize datasets using natural language tools, and detect the emotions expressed in the journal entry… emotions can be categorized”, paragraph [0053], “generate textural data 262, which subsequently be used to generate analysis data 265 such as a sentiment score. Analyzer data 223 may also process the textual data 262 along with other available data sources to generate emotional classifications and/or topic classifications”, paragraph [0080], “Contextual data may be generated from data… contextual data may comprise… voice marker data associated with the voice data. 
Utilizing all available data sources, the process 400 may generate an emotional classification that may label at least one portion of the journal entry with an emotional classification”, paragraphs [0086]-[0088], “utilizing the aggregated journal entry data is to update at least one of the machine learning models… process the newly generated journal entry data with at least one of the updated machine learning training models”. The Examiner notes that use of an updated model that has been updated to include previous journal entries reads on the broadest reasonable interpretation of the “in a context of” language); and generating, by the computing system via the prediction model, an updated target health score for the target patient based on the analyzing of the second target journal entry and the second target response (Chang: Fig. 2, paragraph [0007], “an analyzer logic configured to extract textual data from the plurality of voice journal entries, generate a sentiment analysis score based on the textual data, generate an emotional classification score based on the voice data, the textual data and the contextual data”, paragraphs [0086]-[0088], “utilizing the aggregated journal entry data is to update at least one of the machine learning models… process the newly generated journal entry data with at least one of the updated machine learning training models”).
Chang may not explicitly teach (underlined below for clarity): --generating, by a computing system, a plurality of sets of training data, the plurality of sets of training data comprising portions of journal entries and inputs to mental health [… questions …] corresponding to a plurality of patients; --receiving, by the computing system, a first plurality of inputs from a target patient, the first plurality of inputs comprising: a first input in a first format, the first input comprising first target journal entries, and a second input in a second format, the second input comprising first target responses to the mental health [… questions …]; Chhaya teaches generating, by a computing system, a plurality of sets of training data, the plurality of sets of training data comprising portions of journal entries and inputs to mental health [… questions …] corresponding to a plurality of patients (Chhaya: Figure 6, paragraph [0009], “three training samples generated from a labeled text communication”, paragraph [0016]-[0018], “predicting tone of interpersonal text communications… a training data collection phase, a training data labeling phase, a feature computation phase, and a model training phase… in the training data collection phase, a corpus of text communications is obtained from which to generate the labeled training data… The labeled text communications may serve as training samples (e.g., labeled training data) for training the respective models in the model training phase. Each training sample may include, for example, a text communication, and a label indicating a degree of a respective affective tone dimension conveyed by the contents (e.g., text data) of the text communication”, paragraph [0020], “a first model may be trained using a set of training samples… a second model may be trained using a set of training samples… a third model may be trained using a set of training samples”, paragraph [0059], “As shown in FIG. 
6, three training samples 604a, 604b, and 604c are generated from a labeled text communication 602”); receiving, by the computing system, a first plurality of inputs from a target patient, the first plurality of inputs comprising: a first input in a first format, the first input comprising first target journal entries, and a second input in a second format, the second input comprising first target responses to the mental health [… questions …] (Chhaya: Fig. 5, paragraph [0024], “a text communication, tone may refer to an attitude of an author of the text communication toward a recipient or an audience of the text communication. In text communications, tone is generally conveyed through the choice of words (e.g., word usage, sentence formations, lexical content, etc.), or the viewpoint of the author regarding a particular subject”); One of ordinary skill in the art before the effective filing date would have found it obvious to include the generation and use of a plurality of training sets and a text (i.e., second) format, as taught by Chhaya, within the training and use of data to generate a target health score for a patient, as taught by Chang, with the motivation of “improve the efficiency and accuracy in determining a tone of a text communication” (Chhaya: paragraph [0021]).
Chang and Chhaya may not explicitly teach (underlined below for clarity): building, by the computing system, a time series representation of the received first plurality of inputs… analyzing, by the computing system, the time series representation by performing natural language processing on the time series representation… analyzing of the time series representation; Khaleghi teaches building, by the computing system, a time series representation of the received first plurality of inputs… analyzing, by the computing system, the time series representation by performing natural language processing on the time series representation… analyzing of the time series representation (Khaleghi: Figure 4, paragraphs [0040]-[0041], “Natural language processing may comprise the utilization of machine learning that analyzes patterns in data… entries provided via handwritten touch entry (e.g., via a stylus or user finger/digit) may be analyzed to identify user stress… stylus/finger pressure and inclination information may be received (e.g., via a wireless interface), stored and analyzed to identify user stress (e.g., pressure or inclination angle above a respective threshold may indicate stress)”, paragraph [0056], “natural language processing may be utilized to analyze and understand”, paragraphs [0087]-[0092], “FIG. 4R illustrates an example diary/chronology user interface… the diary will sequentially present dates on which an event occurred, and brief description of the event… include a timeline that begins at a certain point in time, such as a significant biological date (e.g. date of birth of the patient), and may indicate, in chronological order, significant biographical information… diary entries… the master combined user interface may be updated in real time in response to the receipt of new or updated biographical, medical, clinical, therapeutic, and/or diary data”. 
The Examiner notes that “to tag portions of each target journal entry with semantic tone and sentiment indicators in a context of the previous target journal entries” is an intended use of the analysis of the time series representation that is not required to occur. This feature has been fully considered by the Examiner; however, the limitation does not provide patentable distinction over the cited prior art because it is an intended use or result of the analysis of the time series representation); One of ordinary skill in the art before the effective filing date would have found it obvious to include the use of a time series representation as taught by Khaleghi within the use of a representation of a plurality of subsequent journal entries as taught by Chang and Chhaya, with the motivation to “improve the natural language processing software's ability to understand the entry” (Khaleghi: paragraph [0040]). Chang, Chhaya and Khaleghi may not explicitly teach (underlined below for clarity): […] inputs to mental health questionnaires corresponding to a plurality of patients; Condie teaches inputs to mental health questionnaires corresponding to a plurality of patients (Condie: paragraph [0036], “Data is gathered through a user's responses that are submitted through a brief questionnaire each day. The measurement questions are selected from a predetermined bank of questions and will be tailored to each consumer. Over time consumers will only be give questions that have been determined to be the most effective for them. Questions that are not deemed helpful will no longer be asked”); One of ordinary skill in the art before the effective filing date would have found it obvious to include the use of a questionnaire as taught by Condie within the use of questions and journaling to determine sentiment and tone as taught by Chang, Chhaya and Khaleghi, with the motivation to “improve performance and increase efficiency… improve [… patients …] overall wellbeing” (Condie: paragraph [0006]).
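As an editorial aside, the limitations mapped above for claim 1 (generating labeled training samples from journal-entry portions and questionnaire inputs, and injecting tone/sentiment tags into those portions) can be illustrated with a minimal sketch. Every name below is hypothetical; this is not the applicant's, or any cited reference's, actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    """One labeled sample: a tagged journal-entry portion plus the
    patient's questionnaire responses and a ground-truth label."""
    entry_text: str
    questionnaire: dict
    label: float

def inject_tags(portion: str, tone: str, sentiment: str) -> str:
    # Wrap a journal-entry portion with markers signaling its semantic
    # tone and sentiment, as the claim language describes.
    return f"<tone={tone}><sentiment={sentiment}>{portion}</sentiment></tone>"

def build_training_samples(entries, questionnaires, labels):
    # entries: chronologically ordered (portion, tone, sentiment) tuples,
    # i.e., a simple time-ordered representation of the patient's journal.
    return [
        TrainingSample(inject_tags(p, t, s), q, y)
        for (p, t, s), q, y in zip(entries, questionnaires, labels)
    ]
```

A model trained on such samples would then relate the tagged entries and questionnaire inputs to a mental-health score, which is the relationship the claims recite.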
Regarding (Original) claim 2, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach encoding the portions of each journal entry with annotations from clinicians (Chang: paragraph [0064], “Manual classification data can include any data that the user generated or entered relating to classifying or describing the journal entry data 260. By way of example and not limitation, the user may be prompted to rate their feelings prior to the journal entry or may label their emotional responses afterward”; Chhaya: paragraph [0018], “Using a suitable crowdsourcing technique to label the corpus of text communications suitably ensures that the labeling across the aforementioned three dimensions of affective tone depends on human input. The labeled text communications may serve as training samples”, paragraph [0023], “when labeling an interpersonal text communication, the annotators (labelers) are able to provide their own subjective perceptions of the tone of the interpersonal text communication. This can provide labels that can serve as ground truth data for training models to predict the tone of interpersonal communications”, paragraph [0029], “text communications may be cleaned and provided to a suitable number of annotators with instructions to label each provided text communication”). The motivation to combine is the same as in claim 1, incorporated herein. Regarding (Original) claim 3, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach providing a clinician device with access to training results of the learning (Chang: paragraph [0080], “the process 400 can generate a user recommendation based on the available received and/or generated data sources (block 470). User recommendations may, in some embodiments, be generated by utilizing one or more machine learning methods. Generated user recommendations can be displayed on a user's voice-based journaling and therapy computing device”. 
The Examiner notes the computing device is able to access training results); and --receiving, from the clinician device, an adjustment to at least one of a weight or definition used by the prediction model (Chang: paragraph [0049], “the user may subsequently generate feedback data that can be transmitted back to the voice-based journaling servers 110 that may utilize the feedback data to further improve the modeling of various machine learning algorithms that can then be utilized to generate better therapy suggestions for future users”, paragraph [0059], “Data stored within the model data 243 may also comprise a plurality of weights”). The motivation to combine is the same as in claim 1, incorporated herein. Regarding (Original) claim 4, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach wherein the prediction model is a neural network (Chang: paragraph [0040], “Embodiments herein describe a system for utilizing neural-network and other machine learning models configured for recording and analyzing voice journal entries”). The motivation to combine is the same as in claim 1, incorporated herein. 
Regarding (Original) claim 5, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach prompting a patient device of the target patient to submit a target input to a mental health questionnaire; and based on the prompting, receiving, from the patient device, the target input from the mental health questionnaire (Chang: Figures 1, 6, paragraph [0007], “receive a plurality of voice journal entries from a user the received voice journal entries comprise at least voice data and contextual data”, paragraph [0044], “derive the source of stress, anxiety, and depression with further questions”, paragraph [0064], “the user may be prompted to rate their feelings prior to the journal entry or may label their emotional responses afterward”, paragraph [0088], “receive additional journal entries… These new journal entries may be from the same user or from a new user. As a result of receiving new journal entries, more journal entry data is generated”; Condie: paragraph [0036], “Data is gathered through a user's responses that are submitted through a brief questionnaire each day. The measurement questions are selected from a predetermined bank of questions and will be tailored to each consumer. Over time consumers will only be give questions that have been determined to be the most effective for them. Questions that are not deemed helpful will no longer be asked”). The motivation to combine is the same as in claim 1, incorporated herein. 
Regarding (Previously Presented) claim 6, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach prompting a patient device of the target patient to submit a target journal entry; and based on the prompting, receiving, from the patient device, the target journal entry (Chang: Figures 1, 6, paragraph [0007], “receive a plurality of voice journal entries from a user the received voice journal entries comprise at least voice data and contextual data”, paragraphs [0046]-[0047], “a plurality of devices that are configured to transmit and receive data related to providing, recording, and processing voice-based journal entries to generate a plurality of customized and responsive therapies”, paragraph [0064], “the user may be prompted to rate their feelings prior to the journal entry or may label their emotional responses afterward”, paragraph [0088], “receive additional journal entries… These new journal entries may be from the same user or from a new user. As a result of receiving new journal entries, more journal entry data is generated”). The motivation to combine is the same as in claim 1, incorporated herein. Regarding (Original) claim 7, Chang, Chhaya, Khaleghi and Condie teach the limitations of claim 1, and further teach wherein the target journal entry comprises one or more of a text based response, an audio based response, or an image based response (Chang: paragraph [0062], “voice data 261 comprises the raw audio data that is captured with a microphone or other recording device during the voice-based journaling process. This voice data 261 can comprise waveform data and can be formatted into any audio format desired based on the application and/or computing resources”). The motivation to combine is the same as in claim 1, incorporated herein. 
REGARDING CLAIMS 8 AND 15 Claims 8 and 15 are analogous to claim 1, and are thus similarly analyzed and rejected in a manner consistent with the rejection of claim 1. REGARDING CLAIMS 9-13 AND 16-20 Claims 9-13 and 16-20 are analogous to claims 2-6, and are thus similarly analyzed and rejected in a manner consistent with the rejection of claims 2-6. REGARDING CLAIM 14 Claim 14 is analogous to claim 7, and is thus similarly analyzed and rejected in a manner consistent with the rejection of claim 7. Response to Arguments Applicant's arguments filed 13 May 2025 have been fully considered but are not persuasive. Applicant's arguments are addressed below in the order in which they appear in the response filed on 13 May 2025. Rejections under 35 U.S.C. § 101 Regarding the rejection of claims 1-20, the Examiner has considered the Applicant's arguments but does not find them persuasive. Any arguments inadvertently not addressed are unpersuasive for at least the following reasons: Applicant argues: While the Office is correct in that the MPEP provides that "certain activity between a person and a computer ... may fall within the 'certain methods of organizing human activity' grouping… The claimed functionality is generally directed to a process of generating a prediction model that is trained to generate a health score of a patient, which cannot be construed as a social activity, a teaching, or following rules or instructions.
The Office has failed to provide any argument that equates the claimed functionality to any of a social activity, teaching, or rule/instruction following… the limitations recite details regarding a training process for generating a prediction model and deploying the trained prediction model which includes operations regarding (1) how the training data set is generated; (2) how the training set is supplemented with tags to assist with the learning process; and (3) how the prediction model learns its target task using the supplemented data set. The limitations further recite building a time series representation of journal entries such that a current journal entry can be analyzed based on previous records of journal entries. Under any metric, this functionality certainly is beyond the scope of the identified sub-grouping. Accordingly, for this reason alone, the Applicant submits that the claims are subject matter eligible under Prong One… The Office likens the learning and the using natural language processing to merely linking of the "abstract idea" to a particular technological environment. This is incorrect…. Applicant submits that the claims provide improvements to a technology or a technical field. As provided in the as-filed Specification, the technical field is machine learning driven insights to mental health. See As-filed Specification, para. [0029]… Applicant submits that the claims add a specific limitation other than what is well-understood, routine, conventional activity in the field. The Examiner respectfully disagrees. 
It is respectfully submitted that the argued limitations amount to generic, high-level organization of data performed by generic hardware components, which is not an additional element. At best, the organization and analysis of data applies the abstract idea of a human looking at journal entries and making a health determination for a patient using a generic computer component, and, as stated in MPEP 2106.04(a)(2), “certain activity between a person and a computer… may fall within the “certain methods of organizing human activity” grouping”. The claims are directed toward organizing and analyzing data for a user to interact with in making determinations about a patient's health, through user interaction with various generic hardware components, and are therefore directed toward the certain methods of organizing human activity grouping of abstract ideas. The claims do not recite any additional elements that provide a technical solution to a technical problem recited in Applicant’s specification and/or an improvement in the functionality of the computer. First, paragraph [0029] describes no technical problems; the paragraph does not describe any problems rooted in computer hardware technology. At most, the paragraph recites a statement of “Unlike conventional approaches…”; however, the paragraph does not disclose any details that one of ordinary skill in the art would find to recite an actual technical problem rooted in computer hardware technology. Additionally, Applicant argues an improvement at least in "the machine learning architecture”; however, the argued limitations are organization of data, which is part of the abstract idea, and generic off-the-shelf natural language processing, which is used to generally link the idea to a particular technological environment and is well-understood, routine, and conventional. None of the additional elements recites any details describing a technical solution to a technical problem recited in Applicant’s specification.
The only additional element claimed in generating the prediction model is the “learning” step; such a generic “learning” step and model cannot improve the performance of a computer or recite a technical solution to a technical problem recited in the specification. At best, Applicant’s specification is directed toward problems of “identifying and destigmatizing mental illness” (see Applicant’s specification, paragraph [0003]), which is not a technical problem rooted in hardware technology; at best this is a doctor/patient interaction or diagnosis problem. Therefore, because the additional elements are recited at such a high level of generality that they amount to extra-solution activity and/or generally linking to a particular technological environment, and do not recite an improvement in the performance of the computer or a technical solution to a technical problem recited in the specification, the claims cannot provide a practical application, and the argument is unpersuasive. Rejections under 35 U.S.C. § 103 Regarding the rejection of claims 1-20, the Examiner has considered the Applicant’s arguments; however, the arguments are not persuasive, as addressed herein.
The Examiner has attempted to address all of the arguments presented by the Applicant; however, any arguments inadvertently not addressed are not persuasive for at least the following reasons: Applicant argues: Independent claim 1 recites one or more elements not taught, disclosed, or suggested by the combination of Chang, Chhaya, and Khaleghi… Chang is generally directed to a system for providing therapies from voice-based journaling… Thus, Chang does not disclose a process in which a current journal entry is analyzed in the context of previous journal entries from the patient… Chhaya's system similarly does not perform "building, by the computing system, a time series representation of the received first plurality of inputs… Further, Khaleghi fails to cure the deficiency of Chang… Khaleghi also provides that a diary entry to the electronic notebook can be characterized to identify the subject matter of that diary entry. The Examiner respectfully disagrees. It is respectfully submitted that Chang explicitly teaches the collection and organization of a plurality of journal entries into a representation (see above, but at least Figure 2, elements 250 and 260, paragraphs [0086]-[0088], showing the plurality of journal entries linked). While not explicitly recited as a time-series representation, Khaleghi explicitly teaches creating this (see above, but at least Figure 4, paragraphs [0087]-[0092]), and it would be prima facie obvious to include with the motiv…

Prosecution Timeline

Sep 23, 2021
Application Filed
Jan 27, 2024
Non-Final Rejection — §101, §103
May 09, 2024
Interview Requested
May 16, 2024
Applicant Interview (Telephonic)
May 19, 2024
Examiner Interview Summary
Aug 01, 2024
Response Filed
Sep 13, 2024
Final Rejection — §101, §103
Dec 18, 2024
Request for Continued Examination
Dec 19, 2024
Response after Non-Final Action
Feb 07, 2025
Non-Final Rejection — §101, §103
Apr 16, 2025
Examiner Interview Summary
Apr 16, 2025
Applicant Interview (Telephonic)
May 13, 2025
Response Filed
Aug 30, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542210
WEARABLE DEVICE AND COMPUTER ENABLED FEEDBACK FOR USER TASK ASSISTANCE
2y 5m to grant Granted Feb 03, 2026
Patent 12154077
USER INTERFACE FOR DISPLAYING PATIENT HISTORICAL DATA
2y 5m to grant Granted Nov 26, 2024
Patent 12040070
RADIOTHERAPY SYSTEM, DATA PROCESSING METHOD AND STORAGE MEDIUM
2y 5m to grant Granted Jul 16, 2024
Patent 12027251
SYSTEMS AND METHODS FOR MANAGING LARGE MEDICAL IMAGE DATA
2y 5m to grant Granted Jul 02, 2024
Patent 11942189
Drug Efficacy Prediction for Treatment of Genetic Disease
2y 5m to grant Granted Mar 26, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
18%
Grant Probability
51%
With Interview (+33.5%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
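The headline projections above are consistent with a simple derivation from the examiner's career counts (23 granted of 130 resolved), as the note states. The sketch below reproduces the arithmetic under the assumption, made here for illustration, that the tool applies the +33.5% interview lift additively:

```python
# Reproduce the dashboard's headline projections from the raw counts.
# The additive-lift model is an assumption for illustration only.
granted, resolved = 23, 130
career_allow_rate = granted / resolved        # ~0.177

interview_lift = 0.335                        # reported +33.5% lift
with_interview = career_allow_rate + interview_lift

print(f"Grant probability: {career_allow_rate:.0%}")  # 18%
print(f"With interview:    {with_interview:.0%}")     # 51%
```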
