Prosecution Insights
Last updated: April 19, 2026
Application No. 18/002,527

SYSTEM AND METHOD FOR DETECTING OR PREDICTING RETURN IN MAJOR DEPRESSIVE DISORDER

Final Rejection: §101, §103
Filed: Dec 20, 2022
Examiner: KOLOSOWSKI-GAGER, KATHERINE
Art Unit: 3687
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Janssen Pharmaceutica NV
OA Round: 2 (Final)
Grant Probability: 26% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 26% (95 granted / 358 resolved; -25.5% vs TC avg)
Interview Lift: +33.6% (with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 4y 3m (54 applications currently pending)
Total Applications: 412 (across all art units)

Statute-Specific Performance

§101: 35.0% (-5.0% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 358 resolved cases.
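These headline figures are internally consistent to within rounding. As a quick sanity check, the relationship between the career allow rate and the interview lift can be reproduced in a few lines of Python. This is illustrative arithmetic only, under the assumption that the lift is reported in percentage points; it is not the analytics provider's actual methodology, and the small rounding differences against the dashboard's 26% / +33.6% figures are expected.

```python
# Back-of-the-envelope reconstruction of the dashboard's headline
# figures from the raw career counts. Illustrative arithmetic only,
# not the analytics vendor's actual model.

granted = 95      # "95 granted / 358 resolved"
resolved = 358

career_allow_rate = granted / resolved       # dashboard rounds to 26%
with_interview_rate = 0.60                   # "With Interview: 60%"

# Interview lift = allow rate with an interview minus the base rate,
# assuming the lift is expressed in percentage points.
interview_lift = with_interview_rate - career_allow_rate

print(f"career allow rate: {career_allow_rate:.1%}")   # ~26.5%
print(f"interview lift:    {interview_lift:+.1%}")     # ~+33.5%
```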

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in reference to the communication filed on 20 FEB 2026. Amendments to claims 21 and 32 have been entered and considered. Claims 21-37 are present and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. As explained below, the claims are directed to an abstract idea without significantly more.

Step One: Is the claim directed to a process, machine, manufacture or composition of matter? YES. With respect to claims 21-37, the independent claims 21 and 32 recite a method and a system, each of which is a statutory category of invention.

Step 2A – Prong One: Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?
YES. With respect to claims 21-37, the independent claims (claims 21 and 32) are directed, in part, to an abstract idea, as shown in exemplary claim 21: (i) obtaining, (ii) training an anomaly detector using the training data, wherein the anomaly detector is configured to identify deviations from the training data; (iii) obtaining, (iv) extracting a plurality of features from the test data to generate test feature data, wherein the features correspond to metrics for at least one of monofractal patterns, multifractal dynamics and sample entropy; (v) analyzing the test feature data using the anomaly detector to compare the test feature data to the training data to detect an anomaly in the test feature data; and (vi) analyzing self-report test data to determine whether the patient is likely to experience onset of return of depression when an anomaly is detected in the test feature data, wherein the self-report test data is generated from a plurality of inputs from the patient in response to a self-report test after the anomaly is detected.

These claim elements are considered to be abstract ideas because they are directed to mental processes, which include concepts performed in the human mind (including observation, evaluation, judgment, and opinion). Collecting and analyzing information about a user/anomalies over a period, as well as analyzing and considering self-reported data from a patient, are all examples of observation/evaluation of the patient in order to render an opinion as to when the patient is likely to experience an onset of return of depression. Examiner further notes that the self-reporting limitations are found to be directed to certain methods of organizing human activity, including the management of interactions between people, including following rules and/or instructions. Further, these claim elements are considered to be abstract ideas as they recite mathematical concepts, including mathematical relationships, formulas, equations, and/or calculations.
Specifically, the training processes, anomaly detection, and extraction processes are examples of mathematical concepts as identified above, including formulas, equations, and relationships. Accordingly, these claims recite an abstract idea.

Step 2A – Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements to perform the claim steps. Claim 21 recites the use of a computer to implement the method, as well as a “wearable device worn by the patient” from which test data is obtained. Claim 32 recites similar elements: a “computing device operably connected” to a “wearable device comprising at least one accelerometer,” with the addition of a user interface/processor and non-transitory computer readable storage medium. The computer/computing device/processor/computer readable medium in claims 21 and 32 are recited at a high level of generality and as such amount to no more than adding the words “apply it” to the judicial exception, or mere instructions to implement the abstract idea on a computer, or merely use the computer as a tool to perform the abstract idea (see MPEP 2106.05(f)), or generally link the use of the judicial exception to a particular technological field of use/computing environment (see MPEP 2106.05(h)). There is no improvement to the functioning of the computer or to any other technology or technical field in the computing elements as claimed (see MPEP 2106.05(a)), nor any other application or use of the judicial exception in some meaningful way beyond generally linking the use of the judicial exception to a particular technological environment (see MPEP 2106.05(e)).
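For readers unfamiliar with the claimed pipeline, the steps the claims recite (obtain training data, train a detector, extract features including sample entropy, flag anomalies) can be sketched in a few dozen lines of Python. Everything below is a hypothetical illustration: the function names and parameter choices are invented, and a simple per-feature z-score detector stands in for the claimed anomaly detector (the application describes an LSTM-based detector, which is beyond a short sketch).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy (SampEn) of a 1-D series, one of the feature
    metrics recited in claim element (iv). r is a tolerance expressed
    as a fraction of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        # Count template pairs within tolerance (Chebyshev distance),
        # excluding self-matches.
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        total = 0
        for i in range(len(t)):
            dist = np.max(np.abs(t - t[i]), axis=1)
            total += int(np.sum(dist <= tol)) - 1
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def extract_features(window):
    # Step (iv): features from one window of actigraphy data. Only
    # sample entropy is sketched; the mono/multifractal metrics the
    # claims also mention are omitted here.
    return np.array([window.mean(), window.std(), sample_entropy(window)])

def train_anomaly_detector(training_windows):
    # Step (ii): a deliberately simple stand-in that records each
    # feature's baseline mean and spread over the training period.
    feats = np.array([extract_features(w) for w in training_windows])
    return feats.mean(axis=0), feats.std(axis=0)

def is_anomalous(detector, test_window, z_threshold=3.0):
    # Step (v): flag the test window if any feature deviates strongly
    # from the training baseline.
    mean, std = detector
    std = np.where(std == 0, 1.0, std)  # guard against zero spread
    z = np.abs((extract_features(test_window) - mean) / std)
    return bool(np.any(z > z_threshold))
```

If `is_anomalous` fires, step (vi) would then administer and score the self-report test, which the claims treat as a separate gating step before concluding the patient is likely to relapse.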
With regard to the wearable device in claims 21 and 32, Examiner notes that the transmission between either wearable device and the computer(s) is at best adding insignificant extra-solution activity to the judicial exceptions identified (see MPEP 2106.05(g)), as is the “interface” in claim 32. The wearable devices themselves, including the accelerometer, are found to be an example of adding the words “apply it” to the judicial exception/generally linking the judicial exception to a technological field of use (see MPEP 2106.05(h)). The wearable device, including the accelerometer, is not found to be applying the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. The independent claims are additionally directed to claim elements such as: Claim 21 recites the use of a computer to implement the method, as well as a “wearable device worn by the patient” from which test data is obtained. Claim 32 recites similar elements: a “computing device operably connected” to a “wearable device comprising at least one accelerometer,” with the addition of a user interface/processor and non-transitory computer readable storage medium. When considered individually, the above identified claim elements only contribute generic recitations of technical elements to the claims. It is readily apparent, for example, that the claim is not directed to any specific improvements of these elements.
Examiner looks to Applicant’s specification in:

[0042] The system 100 comprises a device 200 for passively detecting and generating data corresponding to physical behaviors of the patient (e.g., physical activity, sleep, mobility, etc.) and a computing device 300 for receiving data from the device 200 and analyzing the data to determine whether the patient is likely to experience onset of relapse of depression. In one embodiment, the device 200 detects and generates actigraphy data and/or mobility data of a patient. The device 200 is preferably suitably sized and shape to be wearable on the body of the patient. For example, the wearable device 200 may be in the form of a wearable clip that is attachable to the patient for wearing on the body of the patient throughout a day. In another embodiment, the device 200 is attached to a wearable band 250 (e.g., a watch band) for attaching the device 200 to a wrist of the patient, when the device 200 is in an operating configuration.

[0043] As shown in FIG. 1, the device 200 comprises a processor 202, a computer accessible medium 204, at least one sensor 206 and an input/output device 208. The sensors 206 may comprise actigraphy sensor(s) for detecting movements of the patient and/or mobility sensor(s) for detecting travel patterns of the patient. The actigraphy sensor may be any suitable sensor for detecting movements of the patient. For example, the actigraphy sensor may be an accelerometer for detecting movement of the patient when the device 200 is worn by the patient in an operating configuration.

[0044] The processor 202 can include, e.g., one or more microprocessors, and use instructions stored on the computer-accessible medium 204 (e.g., memory storage device). The computer-accessible medium 204 may, for example, be a non-transitory computer-accessible medium containing executable instructions therein.
The system 100 may further include a memory storage device 210 provided separately from the computer accessible medium 204 for storing actigraphy data and/or mobility data therein. The input/output device 208 is any suitable device for receiving and/or transmitting data or instructions to or from the actigraphy device 200. In particular, the input/output device 208 may be a transceiver for receiving instructions to and/or transmitting data from the device 200.

[0056] For example, the processor 302 directs the user interface 308 to display a plurality of questions that prompt responses from the patient, and receives a plurality of inputs from the user, via the user interface 308 in response to the questions. The plurality of questions may form a self-report assessment of characteristics of physical behavior (e.g., patient's self-assessment of activity, sleep adequacy, sleep quality).

These passages, as well as others, make it clear that the invention is not directed to a technical improvement. The computing elements are recited in functional terms/capabilities only, i.e., any computer capable of sending and receiving data and/or executing a process as claimed. When the claims are considered individually and as a whole, the additional elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense. The most significant elements of the claims, that is, the elements that outline the inventive elements of the claims, are set forth in the elements identified as an abstract idea. The fact that the generic computing devices are facilitating the abstract concept is not enough to confer statutory subject matter eligibility.

As per dependent claims 22-31, 33-37: Dependent claims 22-31, 36, and 37 are not directed to any additional abstract ideas and are also not directed to any additional non-abstract claim elements.
Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as the self-reporting test questions, repetition of the analysis, the type(s) of modeling used in the analysis, and adjusting a dose in treatment of a detected condition. While these descriptive elements may provide further helpful context for the claimed invention, they do not serve to confer subject matter eligibility on the invention, since their significance, individually and in combination, does not outweigh the abstract concepts at the core of the claimed invention. Dependent claims 33, 34, and 35 are not directed to any additional abstract ideas beyond those identified above; however, they do nominally recite non-abstract elements similar to those identified above: the actigraphy device, the user interface, and the computing device. While these claims provide context or further description for the additional elements as identified, for at least the reasons identified above, these elements are not found to constitute a practical application nor significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 21-29 and 32-37 are rejected under 35 U.S.C. 103 as being unpatentable over Fedor et al. (US 20190117143 A1, hereinafter Fedor) in view of Costa et al. (US 20140330159 A1, hereinafter Costa).
In reference to claim 21: Fedor teaches: A computer-implemented method for detecting or predicting return of depression in a patient comprising: (i) obtaining, from a wearable device worn by the patient, training data of the patient over a training period, wherein the training data comprises training actigraphy data corresponding to movement of the patient over the training period, and the training period is during a time period when the patient has not experienced onset of return of depression (at least [figs 3a/3b and related text] “physiological sensor 300 is configured to be worn around a wrist 370 or around an arm 360 near a wrist. Physiological sensor 300 includes a sensor module 320, a wristband 310 and a USB port 307. Sensor module 320 houses a motion sensor 301, an EDA (electrodermal activity) sensor 302, a thermometer 303, a PPG (photoplethysmogram) sensor 304, an internal clock 305, a memory device 306, and an event marker button 330.” at [fig 1 and related text] training period 101, during which “During the training period, gather physiological data, SMS usage data, and smartphone usage data regarding the patients (Step 103). Based on the patient self-reports, estimate depression ratings for missing datapoints during the training period (which datapoints correspond to times between the depression ratings by clinicians) (Step 104).”); (ii) training an anomaly detector using the training data, obtained from the wearable device, wherein the anomaly detector is configured to identify deviations from the training data (at least [070] training period allows for a detection of a depression rating, i.e.
an anomaly; see [007] for discussion of the HDRS, see [049-051] for discussion of outliers, see [fig 2a-c and related text] for further variations of training to determine depression ratings; at [009] “Third, fourth and fifth, the training dataset may include, for each of the training patients, physiological sensor data, smartphone usage data and SMS data gathered during the training period.” See [053-055] for applicability of training to the sensor data, at [fig 1 and related text] step 103 includes gathering physiological data collected from the sensors during the training period, and also gather data of the same type after the training period, at step 108. At [figs 3a/3b and related text] for example, the sensor/wearable device that collects the physiological data is shown, and finally at [fig 4 and related text] “[I]n FIG. 4, server 444 trains an ensemble machine learning (ML) algorithm on a training dataset regarding the multiple patients. The training dataset may comprise the clinicians' depression ratings, the estimated depression ratings, the physiological sensor data, the smartphone usage data, and the SMS usage data, all acquired during the training period.”); (iii) obtaining, from the wearable device, test data of the patient during a test period, at least a portion of the test period being after the training period, the test data comprising test actigraphy data corresponding to movement of the patient after the training period (at least [fig 1, 2a-c and related text] “monitoring period after training 107” during which the same data is collected from wearable device 300, see also [0068, 033-034] for discussion of sleep/actigraphy data; at [0064] “no data point from the first two weeks is selected as test data…”); (iv) extracting a plurality of features from the test data to generate test feature data, wherein the features correspond to metrics (at least [013-014, 028, 033-34, 0104] sleep data collected for future comparison, i.e.
vectoring); (v) analyzing the test feature data using the anomaly detector to compare the test feature data to the training data to detect an anomaly in the test feature data (at least [064-65] test data is used to select the most appropriately trained model; at [070] “Use the trained ensemble model to estimate, based on this passive data, one or more depression ratings for the patient (e.g., a depression rating for each of multiple dates during the monitoring period) (Step 109). In some cases, the machine learning program in Step 106 is an ensemble machine learning program. In Step 107, the depression rating by the clinician may be at the start of the monitoring period or later in the monitoring period.”); and (vi) analyzing self-report test data to determine whether the patient is likely to experience onset of return of depression when an anomaly is detected in the test feature data, wherein the self-report test data is generated from a plurality of inputs from the patient in response to a self-report test administered after the anomaly is detected (at least [070] “During the training period, accept, as input, self-reports by the patients (e.g., answers to surveys, multiple times daily) (Step 102). During the training period, gather physiological data, SMS usage data, and smartphone usage data regarding the patients (Step 103). Based on the patient self-reports, estimate depression ratings for missing datapoints during the training period (which datapoints correspond to times between the depression ratings by clinicians) (Step 104). Create an enlarged dataset of depression ratings for the training period, comprising the ratings by clinicians and the ratings estimated from the patient self-reports (Step 105). 
“ at [055] “In the prototype, the self-reported affect measures are not used for HDRS prediction after training (e.g., for predicting HDRS ratings for a new patient, using the prototype's trained machine learning program).” (at least [083-084] estimated level of return of depression: “ Based on these depression rating by clinicians and on these self-reports, server 444 may estimate depression ratings for each of the multiple patients for each of multiple intermediate times in the training period. These intermediate times may fall between the dates of the clinicians' depression ratings during the training period. For instance, in some cases: (a) a clinician inputs a bi-weekly depression rating for each patient (once every 14 days); and (b) server 444 estimates depression ratings for each patient for each other day in the training period.” see also [007, 009] for discussion of depression levels returning, and at [025] “scored by an expert clinician in a patient interview. For each patient, the clinical form of HDRS data is collected bi-weekly in a face-to-face meeting between a clinician and the patient. For each patient, the patient's depression level for the remaining dates is estimated by using machine learning that incorporates daily patient self-reports.” At [070] scoring and estimated depression ratings, and at [fig 1 2, 0161, 098] a monitoring period question may be asked again) While Fedor as cited teaches vectoring using a set of data about a patient, it does not specifically disclose monofractal patterns/multifractal patterns, or sample entropy. Costa however does teach: wherein the features correspond to metrics for at least one of monofractal patterns, multifractal dynamics and sample entropy;(at least [0062] “Complementary techniques that measure correlation properties of time series are fractal and multifractal analyses, including those based on detrended fluctuation analysis, box-counting or wavelet analysis. 
These methods can be applied to the raw data or micro-error data. The multiscale entropy (MSE) method discussed in the preferred embodiment has certain attractive features for capturing correlations across time scales and information content in that it explicitly measures the entropy, not only of the original signal, but also of a family of signals derived therefrom, which represent multiple time scales.” At [043] “The information and/or data collected can be used to produce a time series of data representing the task motion recorded. For example, the data recorded can represent the position of the subject's finger on the screen at predefined sampling intervals and time series representing the difference between the actual position and target position (e.g., the object, dot, or box on the path) can be determined. Next, the degree of complexity or irregularity can be quantified using an entropy measure, such as SampEn, for example, resulting in a Multiscale Entropy (MSE) plot of SampEn at various scale factors. In accordance with one embodiment of the invention, a Complexity Index (CI) can be determined as the area under the MSE curve for a predefined range of scale factors. The Complexity Index can be used as the neuromotor index (NI) or combined with other measures to form the neuromotor index.” See also [056] for an example calculation). Fedor and Costa are analogous references as both disclose means of modeling or predicting depression related symptoms. One of ordinary skill in the art would have found the inclusion of the multifractal/sample entropy as taught by Costa in the feature extraction of Fedor, as Costa teaches: “The multiscale entropy (MSE) method discussed in the preferred embodiment has certain attractive features for capturing correlations across time scales and information content in that it explicitly measures the entropy, not only of the original signal, but also of a family of signals derived therefrom, which represent multiple time scales. 
This technique allows one to distinguish highly variable signals without correlations (e.g., white noise) from more physiologic types of 1/f noise seen in the output of complex adaptive systems.” (see [062]). One would have been motivated to include these means of extraction/correlation in order to specifically benefit from these improvements identified by Costa.

In reference to claim 22: Fedor further teaches: wherein the self-reported test is collected from a time concurrent with the detected anomaly (at least [0161-0162] “(i) accepting, as input, an additional depression rating for a user, which additional rating is by a human and occurs at a specific time in an evaluation period,…(iii) accepting, as input, a second set of data regarding smartphone usage or SMS usage, which second set of data comprises data regarding smartphone usage or SMS usage of the user over time during the evaluation period; and (h) performing calculations to determine a depression rating for the user at one or more times that are in the evaluation period and are different than the specific time; performing calculations to determine a depression rating for the user at one or more times that are in the period and are different than the specific time,”).

In reference to claim 23: Fedor further teaches: wherein the self-reported test is collected from the patient after an anomaly is detected.
(at least [0161-0162] “(i) accepting, as input, an additional depression rating for a user, which additional rating is by a human and occurs at a specific time in an evaluation period,…(iii) accepting, as input, a second set of data regarding smartphone usage or SMS usage, which second set of data comprises data regarding smartphone usage or SMS usage of the user over time during the evaluation period; and (h) performing calculations to determine a depression rating for the user at one or more times that are in the evaluation period and are different than the specific time; performing calculations to determine a depression rating for the user at one or more times that are in the period and are different than the specific time,”).

In reference to claim 24: Fedor further teaches: (vii) updating the training data to include the test data, and repeating steps (ii) to (vi) until the patient is determined to have returned into depression (at least [064-067, 071] validation of data sets via repetitive testing: “For each specific ML method and each specific dimensionality-reduced dataset, use the specific ML method to predict, based on the specific dimensionality-reduced dataset, the depression ratings for the missing datapoints (Step 205a). Then split data into 90% training and 10% testing, and perform cross validation to select the best combination of ML method and reduced-dimensionality dataset (Step 205b).”)

In reference to claim 25: Fedor further teaches: wherein steps (ii) to (vii) are repeated continuously until the patient is determined to have returned into depression. (at least [064-067, 071] validation of data sets via repetitive testing: “For each specific ML method and each specific dimensionality-reduced dataset, use the specific ML method to predict, based on the specific dimensionality-reduced dataset, the depression ratings for the missing datapoints (Step 205a).
Then split data into 90% training and 10% testing, and perform cross validation to select the best combination of ML method and reduced-dimensionality dataset (Step 205b).”) In reference to claim 26: Fedor further teaches: wherein step (vi) comprises: analyzing the self-report test data to generate a resulting score for the self-report test, and comparing the resulting score to at least one threshold value to determine whether the patient is likely to experience onset of return of depression (at least [025] “scored by an expert clinician in a patient interview. For each patient, the clinical form of HDRS data is collected bi-weekly in a face-to-face meeting between a clinician and the patient. For each patient, the patient's depression level for the remaining dates is estimated by using machine learning that incorporates daily patient self-reports.” At [070] scoring and estimated depression ratings). In reference to claim 27: Fedor further teaches: wherein the anomaly detector utilizes a long short-term memory (LSTM) neural network, the anomaly detector comprising an encoder and a decoder (at least [054] “For instance, in some cases, the prototype runs a long short-term memory (LSTM) network on the dataset as well as an augmented version of it. “ see [074, 076] for discussion of encode/decode). In reference to claim 29: Fedor further teaches: wherein the training period is at least 14 days (at least [009] “First, the training dataset includes, for each training patient, periodic (e.g., bi-weekly) depression ratings by a clinician during the training period”; at [011] “During a training period, the automated system accepts, as input, depression ratings by a clinician (e.g., bi-weekly HDRS ratings by clinicians) for multiple patients.”). 
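The claim 26 limitation mapped above (generate a resulting score from the self-report test, then compare it to a threshold) is simple enough to sketch directly. The response scale and the cutoff value below are invented placeholders, not values from the application or from Fedor; the LSTM encoder/decoder of claim 27 is not attempted here.

```python
# Hypothetical sketch of the claim 26 limitation: score the patient's
# self-report inputs, then compare the resulting score to a threshold.
# The 0-3 response scale and the cutoff of 10 are invented placeholders.

RELAPSE_THRESHOLD = 10

def score_self_report(responses):
    """Sum the patient's survey responses into a single resulting score."""
    return sum(responses)

def likely_relapse(responses, threshold=RELAPSE_THRESHOLD):
    """True if the resulting score meets or exceeds the threshold."""
    return score_self_report(responses) >= threshold

print(likely_relapse([3, 2, 3, 3]))  # 11 >= 10 -> True
print(likely_relapse([1, 0, 2, 1]))  # 4 < 10  -> False
```

In the claimed system this comparison runs only after the anomaly detector has flagged the actigraphy data, so the survey acts as a confirmation step rather than a continuous screen.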
In reference to claim 32: Fedor teaches: A system for detecting or predicting return of depression in a patient comprising: a wearable device comprising at least one accelerometer configured to detect movement of the patient, the wearable device configured to generate actigraphy data corresponding to movement of the patient (at least [011, 028] accelerometer, at [figs 3a/3b and related text] “physiological sensor 300 is configured to be worn around a wrist 370 or around an arm 360 near a wrist. Physiological sensor 300 includes a sensor module 320, a wristband 310 and a USB port 307. Sensor module 320 houses a motion sensor 301, an EDA (electrodermal activity) sensor 302, a thermometer 303, a PPG (photoplethysmogram) sensor 304, an internal clock 305, a memory device 306, and an event marker button 330.” at [fig 1 and related text] training period 101, during which “During the training period, gather physiological data, SMS usage data, and smartphone usage data regarding the patients (Step 103).”); and a computing device operably connected to the wearable actigraphy device to receive actigraphy data from the wearable device (at least [fig 4 and related text], devices 401-403 connected to computers), the computing device comprising: a user interface for displaying output and receiving input from the patient, and a processor and a non-transitory computer readable storage medium including a set of instructions executable by the processor (at least [fig 4 and related text]), the set of instructions operable to: obtain, from the wearable device, training actigraphy data corresponding to movement of the patient over a training period, wherein the training period is during a time period when the patient has not experienced onset of return of depression (at least [figs 3a/3b and related text] “physiological sensor 300 is configured to be worn around a wrist 370 or around an arm 360 near a wrist.
Physiological sensor 300 includes a sensor module 320, a wristband 310 and a USB port 307. Sensor module 320 houses a motion sensor 301, an EDA (electrodermal activity) sensor 302, a thermometer 303, a PPG (photoplethysmogram) sensor 304, an internal clock 305, a memory device 306, and an event marker button 330.” at [fig 1 and related text] training period 101, during which “During the training period, gather physiological data, SMS usage data, and smartphone usage data regarding the patients (Step 103). Based on the patient self-reports, estimate depression ratings for missing datapoints during the training period (which datapoints correspond to times between the depression ratings by clinicians) (Step 104).”); train an anomaly detector using training data obtained from the wearable device, the training data comprising the training actigraphy data, wherein the anomaly detector is configured to identify deviations from the training data, (at least [070] training period allows for a detection of a depression rating, i.e. an anomaly; see [007] for discussion of the HDRS, see [049-051] for discussion of outliers, see [fig 2a-c and related text] for further variations of training to determine depression ratings, at [009] “Third, fourth and fifth, the training dataset may include, for each of the training patients, physiological sensor data, smartphone usage data and SMS data gathered during the training period.” See [053-055] for applicability of training to the sensor data, at [fig 1 and related text] step 103 includes gathering physiological data collected from the sensors during the training period, and also gather data of the same type after the training period, at step 108. At [figs 3a/3b and related text] for example, the sensor/wearable device that collects the physiological data is shown, and finally at [fig 4 and related text] “[I]n FIG. 4, server 444 trains an ensemble machine learning (ML) algorithm on a training dataset regarding the multiple patients.
The training dataset may comprise the clinicians' depression ratings, the estimated depression ratings, the physiological sensor data, the smartphone usage data, and the SMS usage data, all acquired during the training period.”); obtaining, from the wearable device, test actigraphy data correspond- Ing to movement of the patient during a test period, at least a portion of the test period being after the training period (at least [fig 1, 2a-c and related text] “monitoring period after training 107” during which the same data is collected from wearable device 300, see also [0068, 033-034] for discussion of sleep/actigraphy data; at [0064] “no data point from the first two weeks is selected as test data…”); extract a plurality of features from the test actigraphy data to generate test feature data, wherein the features correspond to metrics for at least one of activity, (at least [013-014, 028, 033-34, 0104] sleep data collected for future comparison, i.e. vectoring) analyze the test feature data using the anomaly detector to compare the test feature data to the training data to detect an anomaly in the test feature data (at least [064-65] test data is used to select the most appropriately trained model; at [070] “Use the trained ensemble model to estimate, based on this passive data, one or more depression ratings for the patient (e.g., a depression rating for each of multiple dates during the monitoring period) (Step 109). In some cases, the machine learning program in Step 106 is an ensemble machine learning program. 
In Step 107, the depression rating by the clinician may be at the start of the monitoring period or later in the monitoring period.”); analyze self-report test data to determine whether the patient is likely to experience onset of return of depression when an anomaly is detected in the test feature data, wherein the self-report test data is generated from a plurality of inputs received from the patient by the user interface in response to a self-report test administered after the anomaly is detected, the self report test comprising a plurality of self- report survey questions displayed on the user interface. (at least [070] “During the training period, accept, as input, self-reports by the patients (e.g., answers to surveys, multiple times daily) (Step 102). During the training period, gather physiological data, SMS usage data, and smartphone usage data regarding the patients (Step 103). Based on the patient self-reports, estimate depression ratings for missing datapoints during the training period (which datapoints correspond to times between the depression ratings by clinicians) (Step 104). Create an enlarged dataset of depression ratings for the training period, comprising the ratings by clinicians and the ratings estimated from the patient self-reports (Step 105). “ at [055] “In the prototype, the self-reported affect measures are not used for HDRS prediction after training (e.g., for predicting HDRS ratings for a new patient, using the prototype's trained machine learning program).” (at least [083-084] estimated level of return of depression: “ Based on these depression rating by clinicians and on these self-reports, server 444 may estimate depression ratings for each of the multiple patients for each of multiple intermediate times in the training period. These intermediate times may fall between the dates of the clinicians' depression ratings during the training period. 
For instance, in some cases: (a) a clinician inputs a bi-weekly depression rating for each patient (once every 14 days); and (b) server 444 estimates depression ratings for each patient for each other day in the training period.” See also [007, 009] for discussion of depression levels returning, and at [025] “scored by an expert clinician in a patient interview. For each patient, the clinical form of HDRS data is collected bi-weekly in a face-to-face meeting between a clinician and the patient. For each patient, the patient's depression level for the remaining dates is estimated by using machine learning that incorporates daily patient self-reports.” At [070] scoring and estimated depression ratings, and at [figs 1, 2, 0161, 098] a monitoring period question may be asked again).

While Fedor as cited teaches vectoring using a set of data about a patient, it does not specifically disclose monofractal patterns/multifractal patterns, or sample entropy. Costa, however, does teach:

wherein the features correspond to metrics for at least one of monofractal patterns, multifractal dynamics and sample entropy (at least [0062] “Complementary techniques that measure correlation properties of time series are fractal and multifractal analyses, including those based on detrended fluctuation analysis, box-counting or wavelet analysis. These methods can be applied to the raw data or micro-error data. The multiscale entropy (MSE) method discussed in the preferred embodiment has certain attractive features for capturing correlations across time scales and information content in that it explicitly measures the entropy, not only of the original signal, but also of a family of signals derived therefrom, which represent multiple time scales.” At [043] “The information and/or data collected can be used to produce a time series of data representing the task motion recorded. For example, the data recorded can represent the position of the subject's finger on the screen at predefined sampling intervals and time series representing the difference between the actual position and target position (e.g., the object, dot, or box on the path) can be determined. Next, the degree of complexity or irregularity can be quantified using an entropy measure, such as SampEn, for example, resulting in a Multiscale Entropy (MSE) plot of SampEn at various scale factors. In accordance with one embodiment of the invention, a Complexity Index (CI) can be determined as the area under the MSE curve for a predefined range of scale factors. The Complexity Index can be used as the neuromotor index (NI) or combined with other measures to form the neuromotor index.” See also [056] for an example calculation).

Fedor and Costa are analogous references, as both disclose means of modeling or predicting depression-related symptoms. One of ordinary skill in the art would have found obvious the inclusion of the multifractal/sample entropy features as taught by Costa in the feature extraction of Fedor, as Costa teaches: “The multiscale entropy (MSE) method discussed in the preferred embodiment has certain attractive features for capturing correlations across time scales and information content in that it explicitly measures the entropy, not only of the original signal, but also of a family of signals derived therefrom, which represent multiple time scales. This technique allows one to distinguish highly variable signals without correlations (e.g., white noise) from more physiologic types of 1/f noise seen in the output of complex adaptive systems.” (see [062]). One would have been motivated to include these means of extraction/correlation in order to specifically benefit from these improvements identified by Costa.
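The claim 32 mapping above describes a concrete pipeline: learn a baseline from actigraphy gathered during an anomaly-free training period, extract entropy-based features (SampEn, a multiscale entropy curve, a complexity index computed as the area under that curve), and flag test windows that deviate from the baseline. A minimal illustrative sketch of those computations follows; this is a hedged reconstruction, not the method of either reference (Fedor uses ensemble/LSTM models, not this simple z-score detector), and every function name here is hypothetical:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of length-m templates matching
    within tolerance r*std(x) (Chebyshev distance); A counts length-(m+1) matches."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - length)])
        total = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            total += int(np.sum(dist <= tol))
        return total

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain the series at each scale factor, then take SampEn of
    each coarse-grained series (the MSE construction Costa describes)."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        grain = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(grain, m, r))
    return out

def complexity_index(mse_curve):
    """CI = area under the MSE curve (trapezoidal rule, unit spacing)."""
    c = np.asarray(mse_curve, dtype=float)
    return float(np.sum((c[1:] + c[:-1]) / 2.0))

def fit_baseline(train_features):
    """Per-feature mean/std from the anomaly-free training period."""
    f = np.asarray(train_features, dtype=float)
    return f.mean(axis=0), f.std(axis=0) + 1e-9

def is_anomalous(test_features, mean, std, z=3.0):
    """Flag a test window whose features deviate beyond z baseline std-devs."""
    score = np.abs((np.asarray(test_features, dtype=float) - mean) / std)
    return bool(np.any(score > z))
```

On this sketch, a flagged window would then trigger the self-report survey step recited in the claim; a production system would substitute the trained ensemble or LSTM encoder-decoder models the references actually describe.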
In reference to claim 33: Fedor further teaches: wherein the plurality of self-report survey questions corresponds to symptoms of depression, and the plurality of inputs from the patient corresponds to a rating on a numerical scale of each corresponding symptom (at least [013-15, 065-066] HDRS is correlated to normal/depressed states).

In reference to claim 34: Fedor further teaches: wherein the user interface is a touch screen (at least [0079] “In some cases, the patients enter the self-reports via a graphical user interface displayed on display screens (such as the display screens of computers 423, 424, 425) or on display screens or touch screens of their smartphones (e.g., 431, 432, 433). These self-reports may be sent to server 444.”).

In reference to claim 35: Fedor further teaches: wherein the computing device is selected from a group consisting of a mobile computing device, a smart phone, and a computing tablet (at least [fig 4 and related text] display screens 423-425, phones 431-433, computers 421-423, etc.).

In reference to claim 36: Fedor further teaches: wherein the anomaly detector utilizes a long short-term memory (LSTM) neural network, the anomaly detector comprising an encoder and a decoder (at least [054] “For instance, in some cases, the prototype runs a long short-term memory (LSTM) network on the dataset as well as an augmented version of it.” See [074, 076] for discussion of encode/decode).

In reference to claim 37: Fedor further teaches: wherein the plurality of self-report survey questions corresponds to symptoms of depression, and the plurality of inputs from the patient corresponds to a rating on a numerical scale of each corresponding symptom (at least [013-15, 065-066] HDRS is correlated to normal/depressed states).

Claim(s) 30, 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fedor in view of Costa, further in view of Javitt et al. (US 20200275838 A1, hereinafter Javitt).
In reference to claim 30: Fedor/Costa teaches all the limitations above. Fedor as cited above discloses a likelihood of an onset of return of depression. While both references disclose treating a return or onset of depression, the references do not specifically contemplate medication changes. Javitt, however, does teach: adjusting a dosage of an antidepressant administered to the patient when the patient is determined as likely to experience onset of return of depression (at least [045] predicted potential relapse/onset; at [053] “Existing guidelines recommend four pharmacological strategies for the management of partial response or non-response of MDD: (i) increasing the dose of the antidepressant, (ii) switching to a different antidepressant, (iii) augmenting the treatment regimen with a non-antidepressant agent such as lithium, atypical antipsychotic drugs or thyroid hormones, or (iv) combining the initial antidepressant with a second antidepressant. (See, e.g., Reference 5”). Javitt is analogous to both Fedor and Costa, as all references disclose a means of monitoring and treating a patient with depression. One of ordinary skill would recognize that detecting or predicting a return of onset of depression as taught by each reference is an important first step to treating the onset, of which Javitt teaches medication may be an important part. Treating a detected or predicted medical event in advance of the onset is generally accepted as preventive medicine and considered to generally be more effective than retroactively or responsively treating the event; as such, it would have been obvious to make medication changes to prevent a patient from a relapse/onset of depression.

In reference to claim 31: Fedor/Costa teaches all the limitations above. Fedor as cited above discloses a likelihood of an onset of return of depression. While both references disclose treating a return or onset of depression, the references do not specifically contemplate medication changes. Javitt, however, does teach: increasing a dosage of an antidepressant administered to the patient when the patient is determined as likely to experience onset of return of depression (at least [045] predicted potential relapse/onset; at [053] “Existing guidelines recommend four pharmacological strategies for the management of partial response or non-response of MDD: (i) increasing the dose of the antidepressant, (ii) switching to a different antidepressant, (iii) augmenting the treatment regimen with a non-antidepressant agent such as lithium, atypical antipsychotic drugs or thyroid hormones, or (iv) combining the initial antidepressant with a second antidepressant. (See, e.g., Reference 5”). Javitt is analogous to both Fedor and Costa, as all references disclose a means of monitoring and treating a patient with depression. One of ordinary skill would recognize that detecting or predicting a return of onset of depression as taught by each reference is an important first step to treating the onset, of which Javitt teaches medication may be an important part. Treating a detected or predicted medical event in advance of the onset is generally accepted as preventive medicine and considered to generally be more effective than retroactively or responsively treating the event; as such, it would have been obvious to make medication changes to prevent a patient from a relapse/onset of depression.

Relevant Prior Art

The following prior art not relied upon is made of record: US 20140243608 to Hunt discloses evaluation and medication for psychiatric disorders. US 20200143922 A1 to Chekroud discloses predicting the outcome of a treatment for a patient suffering from depression.

Response to Arguments

Applicant's remarks as originally filed on 2 FEB 2026 are noted.
Examiner notes no additional remarks were filed with the 20 FEB 2026 claims.

Applicant begins on page 7 with a discussion of the rejection under 35 USC 101. Applicant makes reference to Ex Parte Desjardin, arguing the claimed invention provides an improvement to the functioning of a computer or to a technology or technical field. Examiner respectfully submits that the “improvements” provided by the claimed invention, however, are not technical. Detecting a relapse of depression is not itself a computer, a technology, or a technical field; instead, as noted above, these improvements to predicting a relapse, while important, are accomplished by using a computer as a tool. As to Applicant's remarks for claims 30 and 31, Examiner finds that adjusting a dose is not administering a dose and, therefore, is not a particular treatment.

Applicant turns to a discussion of the prior art rejection on page 9, with a restatement of exemplary claim 21, and selected portions of Fedor on page 10. Examiner respectfully disagrees; various portions of Fedor as cited teach wherein the self-report data is re-referred to after the initial training portion and/or combined with the clinician's assessment. Applicant is encouraged to more positively recite the order of the steps including the self-report data/analysis/anomaly rather than including it in a wherein clause.

Applicant discusses Costa as combined with Fedor on page 11 of the remarks; however, it does not appear that Applicant is referencing the cited portions of the Costa reference, and Applicant makes reference to limitations for which the Costa reference is not cited. As such, Examiner finds these remarks to be unpersuasive.

Applicant turns to claim 32 on pages 12-13, with similar remarks as those in reference to claim 21. At least for the reasons outlined with regard to claim 21, these remarks are found similarly unpersuasive.
Applicant's remarks regarding claims 30/31 are noted but are found to be somewhat confusing: Applicant makes reference to uncited portions of Javitt, and concludes that Costa does not teach various other limitations for which it is cited. Javitt is cited to teach changing a medication of a patient when a relapse is predicted. As such, Examiner finds this line of argument to be unpersuasive. Applicant's remaining remarks on page 8 are found unpersuasive at least in view of the discussion above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE KOLOSOWSKI-GAGER, whose telephone number is (571) 270-5920. The examiner can normally be reached Monday - Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mamon Obeid, can be reached at 571-270-1813.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHERINE KOLOSOWSKI-GAGER/
Primary Examiner, Art Unit 3687

Prosecution Timeline

Dec 20, 2022
Application Filed
Sep 28, 2025
Non-Final Rejection — §101, §103
Feb 02, 2026
Response Filed
Feb 02, 2026
Response after Non-Final Action
Feb 20, 2026
Response Filed
Mar 13, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499467
PREDICTING THE EFFECTIVENESS OF A MARKETING CAMPAIGN PRIOR TO DEPLOYMENT
2y 5m to grant Granted Dec 16, 2025
Patent 12462273
SYSTEM AND METHOD FOR USING DEVICE DISCOVERY TO PROVIDE ADVERTISING SERVICES
2y 5m to grant Granted Nov 04, 2025
Patent 12462938
MACHINE-LEARNING MODEL FOR GENERATING HEMOPHILIA PERTINENT PREDICTIONS USING SENSOR DATA
2y 5m to grant Granted Nov 04, 2025
Patent 12444507
BAYESIAN CAUSAL INFERENCE MODELS FOR HEALTHCARE TREATMENT USING REAL WORLD PATIENT DATA
2y 5m to grant Granted Oct 14, 2025
Patent 12437315
SYSTEMS AND METHODS FOR DYNAMICALLY DETERMINING EVENT CONTENT ITEMS
2y 5m to grant Granted Oct 07, 2025
Based on this examiner's 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
60%
With Interview (+33.6%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 358 resolved cases by this examiner. Grant probability derived from career allow rate.
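The projection figures above appear to be simple arithmetic on the examiner's career record. Assuming the +33.6% interview lift is applied as an additive percentage-point adjustment to the career allow rate (an assumption; the page does not state its formula), the numbers reconcile roughly as:

```python
granted, resolved = 95, 358                 # career record shown above
base_rate = granted / resolved              # career allow rate, ~0.265
interview_lift = 0.336                      # reported lift, read as percentage points
with_interview = base_rate + interview_lift # ~0.60, the "With Interview" figure
```

Rounding differences (26% vs. 26.5%) suggest the page truncates rather than rounds the base rate.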
