Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,039

SYSTEM AND METHOD OF DELIVERING INTERACTIVE, PERSONALIZED COGNITIVE BEHAVIORAL INTERVENTIONS

Non-Final OA — §101, §103, §112
Filed: Jul 18, 2024
Examiner: YIP, JACK
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Snitkovsky Vadim
OA Round: 1 (Non-Final)
Grant Probability: 33% (At Risk)
OA Rounds: 1-2
To Grant: 4y 1m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 33% (229 granted / 702 resolved; -37.4% vs TC avg)
Interview Lift: +37.6% (resolved cases with vs. without an interview)
Typical Timeline: 4y 1m avg prosecution; 51 applications currently pending
Career History: 753 total applications across all art units
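The card values above are simple arithmetic on the raw counts. A minimal sketch (the `allow_rate` helper is ours for illustration, not part of any analytics tool):

```python
# Sketch of the arithmetic behind the examiner stat cards above.
# Function and variable names are illustrative assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(229, 702)      # 229 granted / 702 resolved
print(f"Career allow rate: {base:.1f}%")         # 32.6%, shown rounded as 33%

# The +37.6-point interview lift implies the with-interview allow rate:
with_interview = base + 37.6
print(f"With interview: {with_interview:.1f}%")  # 70.2%, shown rounded as 70%
```

This also explains why the headline "70% With Interview" and the "+38% interview lift" cards agree: the latter is the same +37.6-point figure rounded.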

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)

TC averages are estimates. Based on career data from 702 resolved cases.
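Each per-statute delta can be folded back into the rate to recover the Tech Center baseline it was measured against. A short sketch (the dict layout is an illustrative assumption):

```python
# Sketch: recovering the implied Tech Center average from each
# per-statute allowance rate and its delta vs. the TC average.

statute_rates = {
    "101": (22.8, -17.2),
    "103": (42.4, +2.4),
    "102": (15.0, -25.0),
    "112": (12.4, -27.6),
}

for statute, (rate, delta) in statute_rates.items():
    tc_avg = rate - delta        # delta = examiner rate minus TC average
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```

Notably, every statute's figures imply the same ~40.0% TC-average estimate, suggesting the deltas were computed against a single TC-wide baseline rather than per-statute averages.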

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Provisional Application No. 63/514,249 (‘249), fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application.
Specifically, provisional application ‘249 fails to provide adequate support for the limitations in claim 1: at least one processor operative to execute computer instructions that: perform a pre-assessment, extract, prepare, and format the data to produce formatted data, generate with generative artificial intelligence a PCE from the formatted data and a set of generic cognitive and behavioral exercises; perform a post-assessment, and assess an impact of the PCE by comparing the post-assessment to the pre-assessment; an interactive display operative to interactively present the PCE, wherein the interactive display produces at least one medium selected from the group consisting of text, audio, video, image, and virtual reality; and at least one data storage unit operative to retrievably store the data and the generic cognitive and behavioral exercises. The current application is therefore not entitled to the benefit of the filing date of provisional application ‘249. Hence, the effective filing date of the instant application is Jul. 18, 2024.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1 – 16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 8 – 11 use the term “and/or”. The term is indefinite because it is unclear whether one or both elements (i.e., a personalized cognitive and/or behavioral exercise (PCE)) joined by “and/or” should be part of the claimed invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention a statutory category of invention? Claim 1 is directed to a system for automatically producing a personalized cognitive and/or behavioral exercise (Step 1: Yes).

Step 2A, Prong 1: Does the claim recite an abstract idea?
The limitation of steps, “… perform a pre-assessment, extract, prepare, and format the data to produce formatted data, generate with generative artificial intelligence a PCE from the formatted data and a set of generic cognitive and behavioral exercises; perform a post-assessment, and assess an impact of the PCE by comparing the post-assessment to the pre-assessment; an interactive display operative to interactively present the PCE, wherein the interactive display produces at least one medium selected from the group consisting of text, audio, video, image, and virtual reality; and at least one data storage unit operative to retrievably store the data and the generic cognitive and behavioral exercises,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components (i.e., a user interface, at least one processor, an interactive display and at least one data storage). The claimed method is akin to the mental process of observations, evaluations, and judgments of a therapist. The mere nominal recitation of generic computer components performing these steps does not take the claim limitation outside of the mental processes grouping. Thus, the claim recites a mental process (Step 2A, Prong 1: Yes).

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

Per the 2019 Revised Patent Subject Matter Eligibility Guidance, if a claim as a whole integrates the recited judicial exception into a practical application of that exception, the claim is not "directed to" a judicial exception. Conversely, a claim that does not integrate a recited judicial exception into a practical application is directed to the exception.
Evaluating whether a claim integrates an abstract idea into a practical application is performed by a) identifying whether there are any additional elements recited in the claim beyond the abstract idea, and b) evaluating those additional elements individually and in combination to determine whether they integrate the abstract idea into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. Exemplary considerations indicative of whether an additional element (or combination of elements) has or has not been integrated into a practical application are set forth in the 2019 PEG.

With respect to the instant claims, claim 1 recites the additional elements of: a user interface, at least one processor, an interactive display and at least one data storage. It is particularly noted that the use of at least one processor "as a tool" to perform an abstract method, and steps for performing a pre/post-assessment with an interactive display and data storage that only amount to extra-solution activity, are indicated in the 2019 PEG as examples that an additional element has not been integrated into a practical application. Even in combination, the recited additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits, such as an improvement to a computing system, on practicing the abstract idea (Step 2A, Prong 2: No).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Claim 1 recites the additional elements of: a user interface, at least one processor, an interactive display and at least one data storage, set forth above for Step 2A, Prong 2. Regarding these limitations, Applicant's specification describes these features in a generic manner: "… User Device, i.e., a computing device such as a smartphone, tablet, or computer as the primary interface.
In some embodiments, the user device may be provided as multiple independent devices that operate together to prompt and ingest user data and to deliver personalized cognitive exercises. The user device has hardware and software to receive user inputs and display outputs, including audio, video and text … The user device can output multimodal content, including Cognitive Personalized Exercises via audio, video, and VR.” (Applicant’s published application, para. [0022]). There is no indication in the Specification that Applicant has achieved an advancement or improvement in computer technology for diagnosing anxiety. Dependent claims 2 – 16 inherit the deficiencies of their respective parent claims through their dependencies and do not recite additional limitations sufficient to direct the claims to more than the claimed abstract idea, and are thus rejected for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 7, 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Darcy et al. (US 2023/0215544 A1) in view of Rollwage et al. (US 2024/0404514 A1).

Re claim 1: Darcy teaches 1. A system for automatically producing a personalized cognitive and/or behavioral exercise (PCE) (Darcy, Abstract; [0002], “therapy that is personalized to a user’s needs”), comprising: a user interface operative to collect data regarding a user's mental state, including user-reported mood selections and/or biometric data (Darcy, Abstract, “Tracked mood”; [0031], “therapy to a user, such as to monitor, diagnose, and/or treat mental health disorders”; [0075], “User device 102 can include any combination of input/output (I/O) devices that may be suitable for interacting with the system, such as a keyboard, a mouse, a display, a touchscreen, a microphone, a speaker, an inertial measurement unit (IMU), a haptic feedback device, or other such devices”; [0104], “a question asking ‘how are you doing today, on a scale of 1 to 10?’ the user may say ‘7’”); at least one processor operative to execute computer instructions (Darcy, [0141] – [0142]) that: perform a pre-assessment (Darcy, fig. 4, 402 - “Identify therapy target”; 406 – “Receive first user input associated with the therapy target”), extract, prepare, and format the data to produce formatted data (Darcy, [0040], “inputs are often received in the form of text selected from a list (e.g., constrained text) or entered into a field (e.g., free text), although that need not always be the case.
For example, in some cases, individuals can speak or dictate to the chat bot”; [0042]), generate with intelligence a PCE from the formatted data and a set of generic cognitive and behavioral exercises (Darcy, fig. 4, 414; [0060]; [0066], “a personalization model is trained using machine learning techniques, such as supervised learning or unsupervised learning”; [0012], “the provided therapy; determining therapy timing to be used for one or more subsequent therapy sessions associated with the therapy target, wherein determining the therapy timing is based at least in part on the trained assessment-based personalization model, and wherein the therapy timing is indicative of i) a frequency for applying one or more therapy tools; ii) a future time to apply the one or more therapy tools; or iii) a combination of i and ii; facilitating providing personalized therapy to the user using the determined therapy timing”; [0089]); perform a post-assessment, and assess an impact of the PCE by comparing the post-assessment to the pre-assessment (Darcy, fig. 4, 416 and 418, “Assessment Score”; [0061], “ the level of intensity of one or more assessment scores may dictate which tool is best able to address the therapy target (e.g., a user with a category assessment score for depression of 5 out of 100 may benefit most from a first type of therapy tool, whereas the user may benefit most from a second type of therapy tool if the user's category assessment score for depression is 55 out of 100). Further, some therapy tools may work best when the user is showing a particular assessment score trend. 
For example, a first therapy tool may have low effectivity when the user is just starting to show improvements in a particular assessment score, and thus a different tool may be used, but when the user starts showing stronger improvements in that assessment score, the therapy-providing system may instead provide the first therapy tool”); an interactive display operative to interactively present the PCE, wherein the interactive display produces at least one medium selected from the group consisting of text, audio, video, image, and virtual reality (Darcy, [0041], “provide personalized therapy in human-human interacts, such as text-based or audio-based communications between individuals locally or remotely”; [0077]; [0078], “present chatbot outputs (e.g., text, images, sounds, or other discernable outputs presented, such as via an output device like a screen, a speaker, a light, or the like) in response to receiving the chatbot outputs from the server(s) 106”); and at least one data storage unit operative to retrievably store the data and the generic cognitive and behavioral exercises (Darcy, [0052], “a single cohort-trained model can be used for all users of the therapy-providing system, although that need not always be the case. In some cases, the therapy-providing system can access a plurality of cohort-trained models and select a single cohort-trained model that fits an identified cohort of the user”; [0062], “the assessment-based personalization model can be trained to output therapy sequencing, which can be a sequence, or order, of two or more therapy tools to be applied to the user”; [0077], “cohort-based personalization models are stored in storage 108 of server(s)”; fig. 2, 202, “Access cohort-trained personalization model”; the single cohort-trained model is a generic model for a group of users). Darcy teaches an artificial intelligence chat-based tool, commonly known as a chatbot (Darcy, [0031]). 
Darcy does not explicitly disclose generating, with generative artificial intelligence, a PCE from the formatted data. Rollwage et al. (US 2024/0404514 A1) teaches a dialogue system, comprising: an input configured to receive input data relating to speech or text provided by a user; an output configured to provide output data relating to speech or text to a user (Rollwage, Abstract). Rollwage teaches generating with generative artificial intelligence a PCE (Rollwage, Abstract, “at least one trained language model”; [0367], “The language model is a large language model. The language model 21 is a generative model. The language model is a general language model”; [0593]; [0295], “Producing text and dialogue that is human-like has long been a challenge in artificial intelligence”; [0664], “The suggested interventions may further comprise one or more interventions from a pre-set treatment plan. In this way, the user may be presented with interventions from a pre-set treatment plan as well as interventions identified as being useful based on the user conversation. During cognitive-behavioural therapy (CBT), a therapist may follow a pre-set treatment manual for a given mental health condition, which specifies roughly which step to take at each point during treatment”).

Therefore, in view of Rollwage, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system described in Darcy by providing the generative AI as taught by Rollwage, since generative AI chatbots trained on large amounts of data understand quite varied forms of language and tone and are robust in reacting to mistakes in the input text. Additionally, this allows them to respond in varied, and potentially highly nuanced, ways, adapting their tone and response, as well as not having to rely on formulaic responses. This can make them seem significantly more human-like and empathetic than other systems.
This in turn makes them a key candidate to engage in deeper conversations with humans (Rollwage, [0008]). Re claim 2: 2. The system of claim 1, wherein the at least one processor is operative to dynamically refine generation of the PCE (Darcy, [0068], “automatically providing therapy to a user that is dynamically personalized to that user”). Re claim 3: 3. The system of claim 1, wherein the user interface is operative to receive the data directly from a user as text or voice-to-text (Darcy, [0040], “inputs are often received in the form of text selected from a list (e.g., constrained text) or entered into a field (e.g., free text), although that need not always be the case. For example, in some cases, individuals can speak or dictate to the chat bot”; [0042]). Re claim 7: 7. The system of claim 1, further comprising: an AI personalization module (Darcy, [0031]) operative to: analyze specific statements and contextual information inputted by a user (Darcy, [0039] – [0040]; [0042], “User input data (e.g., free text, constrained text, and others) can be collected either at the user's own instigation (e.g., the user opens the chatbot and specifically states their mood) or in response to a prompt from the therapy-providing system”), utilize natural language processing to extract themes, emotions, and personal details from user input (Darcy, [0039] – [0040]; [0042], “User input data (e.g., free text, constrained text, and others) can be collected either at the user's own instigation (e.g., the user opens the chatbot and specifically states their mood) or in response to a prompt from the therapy-providing system (e.g., during a chatbot session, the chatbot asks the user about their mood and the user responds). The user input data can be analyzed to generate a mood score. Mood scores can be an indication of a severity or intensity of one or more moods. 
In some cases, an overall mood score is used”; [0046], “Other moods can be used, and in some cases a user can provide free text to indicate a mood). Upon selecting one or more moods, that user input data can be used to generate (e.g., create and/or update) a mood score”), and dynamically modify a selected evidence-informed intervention template to create a highly personalized intervention (Darcy, fig. 2, 204, 206, “Access cohort-trained personalization model (e.g., mood-based model)”; [0012], “severity of a condition associated with the therapy target”; [0052], “a single cohort-trained model can be used for all users of the therapy-providing system, although that need not always be the case. In some cases, the therapy-providing system can access a plurality of cohort-trained models and select a single cohort-trained model that fits an identified cohort of the user”). Re claim 10: 10. The system of claim 1, further comprising an ongoing conversational interface operative to: simulate therapeutic dialogue with a user through advanced natural language processing and generation (Darcy, [0040], “Likewise, while a chatbot may generally provide prompts and/or otherwise communicate to the user via text, in some cases a chatbot can use a text-to-speech engine to read out responses”; [0069], “therapy chatbots”; [0074], “user input can be processed using natural language processing (NLP) techniques to attribute meaning to the provided input”), process spoken and/or written input via speech-to-text (STT) and text-to-speech (TTS) technologies (Darcy, [0040]; [0076]), utilize multimodal interfaces to present conversational responses (Darcy, [0078], “present chatbot outputs (e.g., text, images, sounds, or other discernable outputs presented, such as via an output device like a screen, a speaker, a light, or the like) in response to receiving the chatbot outputs from the server(s) 106”), dynamically refine the PCE based on immediate real-time user feedback (Darcy, [0036], 
“providing therapy includes actively engaging the user with a therapy tool, which can include providing prompts, receiving user input, providing responsive feedback, and/or otherwise engaging the user according to the directives of the therapy tool”; [0143], “real-time”; Rollwage, [0741]), and capture and analyze user emotions through the user-reported mood selections and/or sophisticated voice tone analysis and contextual understanding of user input (Darcy, fig. 5; Abstract). Re claim 11: 11. A method for automatically producing a personalized cognitive and/or behavioral exercise (PCE), comprising: providing the system of claim 1; administering the pre-assessment (Darcy, fig. 4, 402 - “Identify therapy target”; 406 – “Receive first user input associated with the therapy target”); prompting a user to submit the data regarding the user's mental state (Darcy, Abstract, “Tracked mood”; [0031], “therapy to a user, such as to monitor, diagnose, and/or treat mental health disorders”; [0075], “User device 102 can include any combination of input/output (I/0) devices that may be suitable for interacting with the system, such as a keyboard, a mouse, a display, a touchscreen, a microphone, a speaker, an inertial measurement unit (IMU), a haptic feedback device, or other such devices”; [0104], “a question asking "how are you doing today, on a scale of 1 to 107'' the user may say "7,"”); extracting, preparing, and formatting the data to produce the formatted data (Darcy, [0040], “inputs are often received in the form of text selected from a list (e.g., constrained text) or entered into a field (e.g., free text), although that need not always be the case. For example, in some cases, individuals can speak or dictate to the chat bot”; [0042]); saving the formatted data (Darcy, [0038]); generating the PCE from the formatted data and the set of generic cognitive exercises (Darcy, fig. 
4, 414; [0060]; [0066], “a personalization model is trained using machine learning techniques, such as supervised learning or unsupervised learning”; [0012], “the provided therapy; determining therapy timing to be used for one or more subsequent therapy sessions associated with the therapy target, wherein determining the therapy timing is based at least in part on the trained assessment-based personalization model, and wherein the therapy timing is indicative of i) a frequency for applying one or more therapy tools; ii) a future time to apply the one or more therapy tools; or iii) a combination of i and ii; facilitating providing personalized therapy to the user using the determined therapy timing”; [0089]); interactively presenting the PCE to the user (Darcy, [0041], “provide personalized therapy in human-human interacts, such as text-based or audio-based communications between individuals locally or remotely”; [0077]; [0078], “present chatbot outputs (e.g., text, images, sounds, or other discernable outputs presented, such as via an output device like a screen, a speaker, a light, or the like) in response to receiving the chatbot outputs from the server(s) 106”); performing the post-assessment (Darcy, fig. 4, 416 and 418, “Assessment Score”; [0061]); and assessing the impact of the PCE by comparing the post-assessment to the pre-assessment (Darcy, fig. 4, 416 and 418, “Assessment Score”; [0061], “ the level of intensity of one or more assessment scores may dictate which tool is best able to address the therapy target (e.g., a user with a category assessment score for depression of 5 out of 100 may benefit most from a first type of therapy tool, whereas the user may benefit most from a second type of therapy tool if the user's category assessment score for depression is 55 out of 100). Further, some therapy tools may work best when the user is showing a particular assessment score trend. 
For example, a first therapy tool may have low effectivity when the user is just starting to show improvements in a particular assessment score, and thus a different tool may be used, but when the user starts showing stronger improvements in that assessment score, the therapy-providing system may instead provide the first therapy tool”). Re claim 12: 12. The method of claim 11, further comprising prompting the user for additional detail regarding the user's mental state, wherein the additional detail is quantifiable (Darcy, fig. 4, 416 and 418, “Assessment Score”; [0061], “ the level of intensity of one or more assessment scores may dictate which tool is best able to address the therapy target (e.g., a user with a category assessment score for depression of 5 out of 100 may benefit most from a first type of therapy tool, whereas the user may benefit most from a second type of therapy tool if the user's category assessment score for depression is 55 out of 100). Further, some therapy tools may work best when the user is showing a particular assessment score trend. For example, a first therapy tool may have low effectivity when the user is just starting to show improvements in a particular assessment score, and thus a different tool may be used, but when the user starts showing stronger improvements in that assessment score, the therapy-providing system may instead provide the first therapy tool”). Re claim 13: 13. The method of claim 11, further comprising generating a menu of PCEs from which the user selects the PCE to be interactively presented (Darcy, [0108], “Determining a therapy tool to use at block 316 can include selecting … rom user input, a therapy tool out of a set of possible therapy tools”). Re claim 14: 14. 
The method of claim 11, wherein the PCE is selected from the group consisting of tailored affirmations, action-driven behaviors, grounding techniques, coping strategies, methods for shifting one's thoughts, and any combination thereof (Darcy, [0049], “the level of intensity of one or more mood scores may dictate which tool is best able to address the mood (e.g., a user with an anger score of 5 out of 100 may benefit most from a cognitive restructuring exercise, whereas the user may benefit most from a controlled breathing exercise if the user's anger score is 55 out of 100)”; fig. 6). Re claim 15: 15. The method of claim 11, wherein when the assessing indicates the user's mental state has quantifiably improved between the pre-assessment and the post-assessment, the method further comprises displaying a message of encouragement (Darcy, fig. 7). Re claim 16: 16. The method of claim 11, wherein when the assessing indicates the user's mental state has not quantifiably improved between the pre-assessment and the post-assessment, the method further comprises displaying a menu of other PCEs (Darcy, [0108], “Determining a therapy tool to use at block 316 can include selecting … rom user input, a therapy tool out of a set of possible therapy tools”).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Darcy and Rollwage as applied to claim 1 above, and further in view of Cai et al. (US 2008/0126729 A1).

Re claim 4: Darcy does not explicitly disclose receiving authorization. Cai teaches systems and methods for storing information of a user within a medical information card and for controlling access to the information by a third party (Cai, Abstract). Cai teaches 4.
The system of claim 1, wherein the user interface is operative to receive authorization to access the data from a secondary source (Cai, [0024], “Authorization information 142 is used to authenticate patient 150 and to authorize doctor 220 to access medical records 112 through external data system 230”). Therefore, in view of Cai, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system described in Darcy by providing the authorization as taught by Cai, since in the United States, patient medical information must be protected with privacy controls to avoid disclosure of confidential patient information. Medical institutions and doctor's offices are not permitted to share a patient's medical information with other medical professionals (Cai, [0003]).

Claims 5 – 6 are rejected under 35 U.S.C. 103 as being unpatentable over Darcy and Rollwage as applied to claim 1 above, and further in view of Lee (US 2024/0169218 A1).

Re claims 5 – 6: Darcy teaches 6. and select an evidence-informed intervention template from a database of interventions (Darcy, fig. 2, 204, 206, “Access cohort-trained personalization model (e.g., mood-based model)”; [0012], “severity of a condition associated with the therapy target”; [0052], “a single cohort-trained model can be used for all users of the therapy-providing system, although that need not always be the case. In some cases, the therapy-providing system can access a plurality of cohort-trained models and select a single cohort-trained model that fits an identified cohort of the user”). Darcy does not explicitly disclose apply a set of criteria for multiple diagnostic anxiety subtypes, and determine a user's specific anxiety subtype and severity. Lee (US 2024/0169218 A1) teaches a system and method for concise assessment generation using machine learning (Lee, Abstract). Lee teaches 5.
The system of claim 1, further comprising an artificial intelligence (AI) anxiety detection module operative to: analyze collected user data using machine learning algorithms (Lee, Abstract), apply a set of criteria for multiple diagnostic anxiety subtypes (Lee, [0046], “Since anxiety disorders can arise from different triggers, they be classified into subtypes”; [0057] – [0060]), and determine a user's specific anxiety subtype and severity (Lee, [0046], “Since anxiety disorders can arise from different triggers, they be classified into subtypes”; [0047], “quickly recognize the symptoms of anxiety and monitor them regularly”; [0053], “threshold scores for every severity level of anxiety (normal or no anxiety, mild, moderate, severe, and exceptional). A positive/negative anxiety status column is computed using the threshold score for the "moderate" category”). 6. The system of claim 1, further comprising a decision tree module operative to: receive an anxiety subtype and severity assessment (Lee, [0046], “Since anxiety disorders can arise from different triggers, they be classified into subtypes”; [0057] – [0060]; [0047], “quickly recognize the symptoms of anxiety and monitor them regularly”; [0053], “threshold scores for every severity level of anxiety (normal or no anxiety, mild, moderate, severe, and exceptional). A positive/negative anxiety status column is computed using the threshold score for the "moderate" category”), navigate a predefined decision tree structure based on an assessed anxiety subtype and severity (Lee, [0037], “determines each feature importance as the sum over the number of splits across all decision trees that include the feature”).
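The threshold-based severity scoring and decision-tree routing that the rejection maps from Lee [0053] and [0037] can be sketched as follows. This is an illustrative sketch only: the threshold values, subtype names, and intervention labels below are hypothetical placeholders, not drawn from Lee, Darcy, or the claims.

```python
# Sketch of the claimed two-step flow: (1) classify a raw anxiety score
# into one of the severity bands Lee [0053] describes, then (2) navigate
# a predefined (subtype, severity) structure to select an intervention.
# All numeric thresholds and labels are assumed for illustration.

SEVERITY_THRESHOLDS = [  # (upper bound, label) -- hypothetical values
    (4, "normal"),
    (9, "mild"),
    (14, "moderate"),
    (19, "severe"),
    (float("inf"), "exceptional"),
]

def classify_severity(score: float) -> str:
    """Return the first severity band whose upper bound covers the score."""
    for upper, label in SEVERITY_THRESHOLDS:
        if score <= upper:
            return label
    raise ValueError("score did not match any band")

def select_intervention(subtype: str, severity: str) -> str:
    """Look up an intervention for a (subtype, severity) pair.

    A flat dict stands in for the predefined decision-tree structure;
    the subtype and intervention names are placeholders.
    """
    tree = {
        ("generalized", "mild"): "guided breathing",
        ("generalized", "moderate"): "cognitive restructuring",
        ("social", "moderate"): "graded exposure exercise",
    }
    return tree.get((subtype, severity), "default psychoeducation module")

severity = classify_severity(12)
print(severity)                                   # "moderate"
print(select_intervention("generalized", severity))
```

A flat lookup table is used here only to keep the sketch short; a real implementation could traverse an actual tree, as in the ensemble models Lee [0037] describes.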
Therefore, in view of Lee, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Darcy, by diagnosing anxiety as taught by Lee, since anxiety is one of the most common mental health issues affecting the world today. Although having a moderate level of anxiety can be beneficial for motivation, excessive amounts of anxiety and worry can be detrimental to one's day-to-day activities and productivity, and may therefore be classified as a mental disorder, known as an anxiety disorder (Lee, [0046]). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Darcy and Rollwage as applied to claim 1 above, and further in view of Morrison et al. (US 2017/0353423 A1). Re claim 8: Darcy teaches 8. The system of claim 1, wherein the user interface, the at least one processor, and the interactive display are contained in or coupled to a user's device (Darcy, fig. 1) operative to: execute data processing, AI anxiety detection, and intervention selection processes locally on the user's device (Darcy, fig. 1), facilitate secure communication with an external large language model (LLM) over an internet connection while ensuring user data and results from the LLM communication are stored and processed locally on the user's device (Darcy, [0077], “cohort-based personalization models are stored in storage 108 of server(s)”; [0081]). Darcy does not explicitly disclose the system further comprises a privacy-preserving computation module operative to: ensure that computations involving the user data occur within a secure enclave on the user's device, and implement differential privacy techniques to protect user privacy in any aggregate statistics and/or logs generated by the system. Morrison (US 2017/0353423 A1) teaches a network-connected communication system via which individuals may engage in dialog with one or more dialog members.
Morrison teaches the system further comprises a privacy-preserving computation module operative to: ensure that computations involving the user data occur within a secure enclave on the user's device, and implement differential privacy techniques to protect user privacy in any aggregate statistics and/or logs generated by the system (Morrison, [0154], “By providing advance knowledge and in some cases advance choice as described above, this privacy trust can be effectively created and managed both in cases of broad distribution and narrow. If a user knows with certainty in advance that a second view of their content, with personally identifiable information removed, will be available to all members of the public (whether all Internet users or all users of a registered system) in addition to the members of their dialog network, they can have confidence in their understanding of the degree of privacy to which they will receive. Similarly, if a dialog network of medical doctors knows in advance that any second view of their content with attribution removed will only be available to other practicing medical doctors within a restricted access community, or a known third party service restricted to medical doctors, they can interact and share insights with confidence that this information (with or without author attribution) won't be seen by their patients. Getting even more narrow, if members of a dialog network know a second view of their content may be made available to only an explicitly named list of other viewers, such as the names and email addresses of three employees at a medical equipment manufacturer, they may have the privacy trust necessary to openly share their insights”). 
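The claim language "differential privacy techniques to protect user privacy in any aggregate statistics and/or logs" conventionally refers to noise-addition mechanisms. A minimal sketch of the standard Laplace mechanism for a count query is below; the epsilon value and sample data are illustrative assumptions, not taken from Morrison or the claims.

```python
# Sketch of the Laplace mechanism: release a count with noise calibrated
# to the query's sensitivity. A count query has sensitivity 1 (adding or
# removing one user changes the count by at most 1), so Laplace noise of
# scale 1/epsilon gives epsilon-differential privacy for the statistic.
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Count items satisfying `predicate`, then add Laplace(0, 1/eps) noise."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-transform sample from the Laplace distribution.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(42)
scores = [12, 55, 5, 70, 33, 18]          # hypothetical per-user scores
noisy = dp_count(scores, lambda s: s >= 30, epsilon=1.0)
print(round(noisy, 2))                     # noisy version of the true count (3)
```

Smaller epsilon values add more noise and give stronger privacy; production systems would also track the cumulative privacy budget across repeated queries.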
Therefore, in view of Morrison, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Darcy, by providing the privacy module as taught by Morrison, in order to remove personally identifiable information before sharing patient data with the medical community. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Darcy and Rollwage as applied to claim 1 above, and further in view of Wu (US 2018/0150739 A1). Re claim 9: 9. The system of claim 1, further comprising a personalization engine operative to: utilize a large language model (LLM) trained on diverse datasets encompassing cognitive behavioral therapy (CBT) exercises, mental health scenarios, and user interaction patterns (Darcy, [0111], “training the mood-based personalization model can result in either an individual-trained model or a corpus-trained model”; [0099], “This post-therapy user input can be used at block 226 to further update the personalization model(s), such as to further update the individual-trained personalization model from block 206”; [0106]; [0125], “Blocks 402, 404, 406, 408, 410, 412, 414, 416, 418 can be repeated multiple times (e.g., over the course of days, weeks, months, or years) to provide additional training data to train the assessment-based personalization model at block 420 … training the personalization model can include setting a goal of achieving the largest improvement in an uncontrollable anxiety therapy target”; [0003]; [0049]; [0039], “identifying the situation; ii) identifying the thoughts and/or feeling(s) evoked from the situation”; [0137]), generate contextually relevant and highly personalized cognitive and/or behavioral exercises by analyzing user input data (Darcy, fig.
2, 204, 206, “Access cohort-trained personalization model (e.g., mood-based model)”; [0052], “a single cohort-trained model can be used for all users of the therapy-providing system, although that need not always be the case. In some cases, the therapy-providing system can access a plurality of cohort-trained models and select a single cohort-trained model that fits an identified cohort of the user”), incorporate situational context and emotional state derived from user data to tailor the cognitive and/or behavioral exercises (Darcy, [0007]; [0048]), and update and refine the LLM periodically by incorporating new datasets and user feedback (Darcy, [0111], “an individual-trained model or a corpus-trained model”; [0099], “update the personalization model(s), such as to further update the individual-trained personalization model from block 206”; [0106]; [0125], “Blocks 402, 404, 406, 408, 410, 412, 414, 416, 418 can be repeated multiple times (e.g., over the course of days, weeks, months, or years) to provide additional training data to train the assessment-based personalization model at block 420 … training the personalization model can include setting a goal of achieving the largest improvement in an uncontrollable anxiety therapy target”; [0003]; [0049]; [0039], “identifying the situation; ii) identifying the thoughts and/or feeling(s) evoked from the situation”; [0137]). Darcy does not explicitly disclose motivational interviewing techniques. Wu (US 2018/0150739 A1) teaches systems and methods for automatically interviewing a technical candidate (Wu, Abstract). Wu teaches motivational interviewing techniques (Wu, [0075], “an automated interview of a candidate 102 utilizing an AI interview chat bot 100.
In order to start the interview, the chat bot 100 provides a predetermined startup reply 108A to the candidate 102”; [0122], “cognitive system 1514 predicts emotions of eight dimensions that include happiness, anger, contempt, disgust, fear, sadness, surprise, and neutral. During the interviewing by the chat bot 100, the cognitive system 1514 may evaluate the candidate's emotion every 5 seconds or for any other desired interval of time”; [0125], “[t]he communication skill classifier 1500 may utilize an n-gram language model 1510 trained using reference answers of prepared questions by the classifier”; [0146]; [0115], “the question selection system 114 may select a positive comment in response to good answer and may select an encouraging comment in response to a bad answer from the collection of chat replies”). Therefore, in view of Wu, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Darcy, by providing the interviewing techniques as taught by Wu, in order to evaluate communication skills, interpersonal skills, technical competency, and team collaboration of every candidate through interview (Wu, [0020]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP whose telephone number is (571)270-5048. The examiner can normally be reached Monday through Friday; 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACK YIP/Primary Examiner, Art Unit 3715

Prosecution Timeline

Jul 18, 2024
Application Filed
Feb 23, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588859
SYSTEM AND METHOD FOR INTERACTING WITH HUMAN BRAIN ACTIVITIES USING EEG-FNIRS NEUROFEEDBACK
2y 5m to grant Granted Mar 31, 2026
Patent 12592160
System and Method for Virtual Learning Environment
2y 5m to grant Granted Mar 31, 2026
Patent 12558290
BLOOD PRESSURE LOWERING TRAINING DEVICE
2y 5m to grant Granted Feb 24, 2026
Patent 12525140
SYSTEMS AND METHODS FOR PROGRAM TRANSMISSION
2y 5m to grant Granted Jan 13, 2026
Patent 12512012
SYSTEM FOR EVALUATING RADAR VECTORING APTITUDE
2y 5m to grant Granted Dec 30, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
70%
With Interview (+37.6%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
