Prosecution Insights
Last updated: April 19, 2026
Application No. 17/493,687

CONFIDENCE EVALUATION TO MEASURE TRUST IN BEHAVIORAL HEALTH SURVEY RESULTS

Non-Final OA: §101, §112
Filed
Oct 04, 2021
Examiner
COBANOGLU, DILEK B
Art Unit
3687
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Ellipsis Health Inc.
OA Round
5 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 9m
Grant Probability with Interview: 61%

Examiner Intelligence

Career Allow Rate: 33% (163 granted / 492 resolved; -18.9% vs TC avg)
Interview Lift: +27.9% (resolved cases with interview)
Avg Prosecution: 4y 9m (57 cases currently pending)
Total Applications: 549 across all art units
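The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they relate, with one assumption flagged: the page reports the with-interview grant rate (61%) and the lift (+27.9 points), so the without-interview rate below is inferred by subtraction rather than taken from the page.

```python
# Reproduce the examiner stats from the raw counts shown on this page.
granted = 163
resolved = 492

career_allow_rate = granted / resolved      # ~0.331 -> the 33% headline
with_interview = 0.61                       # reported on the page
lift = 0.279                                # reported interview lift
without_interview = with_interview - lift   # inferred, ~0.331 (assumption)

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Without interview (inferred): {without_interview:.1%}")
```

The inferred without-interview rate lands almost exactly on the career allow rate, which is consistent with interviews driving most of the difference between the 33% and 61% figures.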

Statute-Specific Performance

§101: 35.3% allow rate (-4.7% vs TC avg)
§103: 27.2% allow rate (-12.8% vs TC avg)
§102: 21.1% allow rate (-18.9% vs TC avg)
§112: 13.6% allow rate (-26.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 492 resolved cases
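As a sanity check on the table above, each statute's allow rate plus its "vs TC avg" delta should recover the Tech Center average. A quick sketch (the reconstructed average is an inference from the table, not a published statistic):

```python
# Back out the Tech Center average implied by each row of the table.
stats = {  # statute: (examiner allow rate %, delta vs TC avg %)
    "101": (35.3, -4.7),
    "103": (27.2, -12.8),
    "102": (21.1, -18.9),
    "112": (13.6, -26.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # delta is negative, so this adds it back
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Every row implies the same Tech Center average of 40.0%, so the four deltas are internally consistent.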

Office Action

§101 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered. Claims 1-2, 5-10, 12-13 and 16-22 remain pending in this application.

Claim Objections

Claims 21 and 22 are objected to because of the following informalities: In particular, both claims are dependent on claim 1, and repeat the same limitations. Examiner considers that claim 22 should depend on independent claim 10. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2, 5-10, 12-13 and 16-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

In particular, claims 1 and 10 have been amended to recite “(g) in response to (f), intervening, with the computer system, in (a) to improve a quality of the response data by: automatically sending intervention data to the client device, wherein the intervention data causes a client device to present a message to the human subject to influence responses of the human subject to prompts, wherein the response data is added to the corpus of survey once the mental health survey is complete; and automatically terminating the survey without presenting prompts other than the presented prompts and omitting the response data from the corpus of survey data;” and “training a machine learning model for evaluating a mental health state using the corpus of survey data”, which appears to constitute new matter.
In particular, Applicant does not point to, nor was the Examiner able to find, any support for the “automatically sending intervention data to the client device”, “automatically terminating the survey without presenting prompts other than the presented prompts and omitting the response data from the corpus of survey data” and “training a machine learning model for evaluating a mental health state using the corpus of survey data” features within the specification as originally filed. The current specification recites “Such unreliable responses can lead to misdiagnoses of survey takers. However, consequences of unreliable responses can extend far beyond the correctness of a diagnosis of a given survey taker. Unreliable responses can render any statistical analysis or modeling of the corpus less accurate and less useful. Examples include analysis for population assessments, for monitoring, or for assessment of therapeutic treatments including medications. Examples also include AI systems that are trained to predict depression and that use the survey data as ground truth estimates for model training and evaluation. Some percentage of the survey data used for analysis, interpretation or machine learning based models will contain problems of the types just mentioned, resulting in suboptimal interpretations and suboptimal models.” in [0004]. As such, Applicant is respectfully requested to clarify the above issues and to specifically point out support for the newly added limitations in the originally filed specification and claims. Applicant is required to cancel the new matter in the reply to this Office action.

Claims 2, 5-9, 12-13 and 16-22 incorporate the deficiencies of independent claims 1 and 10, through dependency, and are also rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 5-10, 12-13 and 16-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-2, 5-10, 12-13 and 16-22 are drawn to a method which is within the four statutory categories (i.e. process).

Step 2A, Prong 1: Claims 1-2, 5-10, 12-13 and 16-20 are provided below with markings separating abstract elements from the additional limitations, wherein the bolded represents the additional limitations beyond the abstract idea, and the remaining limitations are directed to the abstract idea as discussed below.

Claim 1. “A method for generating a corpus of survey data for training machine learning models from responses received from a human subject in a mental health survey for evaluating a mental health state, the method comprising: (a) administering the mental health survey to the human subject to cause the human subject to generate response data in response to one or more prompts of the mental health survey and measuring a latency of the human subject before responding to the one or more prompts by selecting the one or more prompts from a plurality of prompts using a computer system and transmitting the one or more prompts to a client device so as to cause the client device to present the one or more prompts to the human subject; (b) obtaining the response data and response metadata with the computer system, wherein the response data comprises a plurality of conditioning events and a plurality of conditioned events and the response metadata comprises a latency of the human subject before responding to the one or more prompts; (c) determining, with the computer system, a first probability that a first conditioned event is present in the
response data based in part on a presence of a first conditioning event in the response data using a machine learning model, wherein the first probability is based in part on the response metadata including the latency of the human subject before responding to the one or more prompts, wherein the first probability is based on analysis of the corpus of survey data by finding all surveys with the conditioning event and determining a probability of the conditioned event from all surveys with the conditioning event; (d) repeating steps (b) and (c) for two or more other conditioned events and other conditioning events to generate a plurality of additional probabilities for a plurality of additional event pairs with the computer system using the machine learning model, wherein the additional probabilities are based in part on the response metadata including the latency of the human subject before responding to the one or more prompts, and wherein the additional probabilities are based on analysis of the corpus of survey data by finding a second plurality of surveys with other conditioning events and determining a probability of the other conditioned events from the second plurality of surveys with the other conditioning events; (e) combining, with the computer system, one or more probabilities from the first probability and the plurality of additional probabilities to generate a confidence vector data, wherein the confidence vector data represents a measure of confidence in the reliability of the human subject that generated the response data in response to the mental health survey; (f) determining, with a computer system, that the measure of confidence is below a predetermined threshold; (g) in response to (f), intervening, with the computer system, in (a) to improve a quality of the response data by: automatically sending intervention data to the client device, wherein the intervention data causes a client device to present a message to the human subject to influence responses of the human subject to prompts, wherein the response data is added to the corpus of survey once the mental health survey is complete; and automatically terminating the survey without presenting prompts other than the presented prompts and omitting the response data from the corpus of survey data; and (h) training a machine learning model for evaluating a mental health state using the corpus of survey data.

Claim 2. The method of claim 1, wherein step is carried out using a machine learning model that is trained using a corpus of survey data.

Claim 5. The method of claim 1, wherein the confidence vector is based on a comparison of a distribution of latencies of the human subject to an expected latency distribution.

Claim 6. The method of claim 1, wherein the confidence vector is based on a comparison of a test duration to an expected test duration.

Claim 7. The method of claim 1, wherein the confidence vector is further based on a difference between the latency of the human subject before responding to the prompts and an expected latency.

Claim 8. The method of claim 7, wherein the expected latency is based in part on a previous latency obtained when the human subject provided a previous response to a query in the survey.

Claim 9. The method of claim 7, wherein the expected latency is based in part on behavioral metadata.

Claim 10. A method for generating a corpus of survey data for training machine learning models from responses received from a human subject in a mental health survey for evaluating a mental health state, the method comprising: (a) obtaining (i) a response to a query in a survey and (ii) metadata about the response, wherein the metadata comprises a latency for the response, wherein the survey is delivered to the human subject by sending the query as a prompt to a client device so as to cause the client device to present the prompt to the human subject, wherein the prompt is selected from a plurality of prompts by a computer system; and (b) determining, based at least in part on a difference between the metadata and an expected latency, a reliability of the response using a machine learning model, wherein the expected latency is based on analysis of the corpus of survey data; (c) determining that the reliability is below a predetermined threshold; (d) in response to (c), intervening in the survey by at least one of: automatically sending intervention data to the client device administering the survey, wherein the intervention data causes the client device to present a message to the human subject to influence responses of the human subject to prompts, wherein the response data is added to the corpus of survey once the mental health survey is complete; and automatically terminating the survey without presenting prompts other than the presented prompts and omitting the response data from the corpus of survey data; and (h) training a machine learning model for evaluating a mental health state using the corpus of survey data.

Claim 12. The method of claim 10, wherein: the response to the query in the survey comprises a plurality of responses to a plurality of queries in the survey; the metadata comprises one or more latencies for one or more of the plurality of responses; the expected latency comprises a plurality of expected latencies; and determining the reliability of the response comprises determining the reliability of the plurality of responses of the survey.

Claim 13. The method of claim 10, wherein the expected latency is based in part on a previous latency obtained when the human subject provided a previous response to the query in the survey.

Claim 16. The method of claim 10, wherein the reliability of the response is based on a difference between a latency of the human subject before responding to prompts associated with the survey and an expected latency.

Claim 17. The method of claim 16, wherein the expected latency is based in part on a previous latency obtained when the human subject provided a previous response.

Claim 18. The method of claim 10, wherein the reliability of the response is based on a comparison of a distribution of a latency of the human subject to an expected latency distribution.

Claim 19. The method of claim 10, wherein the reliability is based on a comparison of a test duration to an expected test duration.

Claim 20. The method of claim 10, wherein the expected latency is based in part on behavioral metadata.

Claim 21. (New) The method of claim 1, wherein: the predetermined threshold comprises an intervention threshold and a termination threshold, upon determining that the measure of confidence is below the intervention threshold and at or above the termination threshold, the computer system sends the intervention data to the client device, and upon determining that the measure of confidence is below the termination threshold, the computer system terminates the survey without presenting the prompts other than the presented prompts and omits the response data from the corpus of survey data.

Claim 22. (New) The method of claim 1, wherein: the predetermined threshold comprises an intervention threshold and a termination threshold, upon determining that the measure of confidence is below the intervention threshold and at or above the termination threshold, the computer system sends the intervention data to the client device, and upon determining that the measure of confidence is below the termination threshold, the computer system terminates the survey without presenting the prompts other than the presented prompts and omits the response data from the corpus of survey data.

Claims 1 and 10 are specifically directed to the abstract idea (see limitations not bolded above) of a mental process. The limitations of “administering a survey…, obtaining a response data…, determining a first probability…, repeating the steps…, combining two or more probabilities…, determining that the measure of confidence is below a threshold…”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components (a computer system).
That is, other than reciting “with the computer system,” nothing in the claim element precludes the steps from practically being performed in the mind, or by a user manually (using pen and paper): administering the survey, obtaining the response data, determining probabilities/measures, and performing the generating steps. After considering all claim elements, both individually and as an ordered combination, it has been determined that the claims do not amount to significantly more than the abstract idea itself.

Also, the limitations of “(c) determining, with the computer system, a first probability that a first conditioned event is present in the response data based in part on a presence of a first conditioning event in the response data using a machine learning model, wherein the first probability is based in part on the response metadata including the latency of the human subject before responding to the one or more prompts, wherein the first probability is based on analysis of the corpus of survey data by finding all surveys with the conditioning event and determining a probability of the conditioned event from all surveys with the conditioning event; (d) repeating steps (b) and (c) for two or more other conditioned events and other conditioning events to generate a plurality of additional probabilities for a plurality of additional event pairs with the computer system using the machine learning model, wherein the additional probabilities are based in part on the response metadata including the latency of the human subject before responding to the one or more prompts, and wherein the additional probabilities are based on analysis of the corpus of survey data by finding a second plurality of surveys with other conditioning events and determining a probability of the other conditioned events from the second plurality of surveys with the other conditioning events;” correspond to mathematical calculations; therefore the limitation falls within the “mathematical concepts” grouping of abstract ideas. The newly added limitation of “training a machine learning model for evaluating a mental health state using the corpus of survey data” corresponds to mathematical relationships, which falls within the “mathematical concepts” grouping of abstract ideas.

Claims 2, 5-9, 12-13 and 16-22 are ultimately dependent from claims 1/10 and include all the limitations of claims 1/10. Therefore, claims 2, 5-9, 12-13 and 16-22 recite the same abstract idea. Claims 2, 5-9, 12-13 and 16-22 describe a further limitation regarding the basis for measuring the degree of confidence in a mental health survey for a human. These limitations all further describe the abstract idea recited in claims 1/10, without adding significantly more.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements that are shown in bolded style above. These additional elements are directed to hardware and software elements; these limitations are not enough to qualify as a “practical application” when recited in the claims along with the abstract idea, since these elements are merely invoked as a tool to apply the instructions of the abstract idea in a particular technological environment, and mere instructions to apply/implement/automate an abstract idea in a particular technological environment, or merely limiting the use of an abstract idea to a particular field or technological environment, do not provide a practical application for an abstract idea (MPEP 2106.05(f) & (h)). The computer system in the claim steps is recited at a high level of generality (i.e., as a generic computer system performing generic computer functions of determining and generating) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer system to perform both the determining and generating steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Response to Arguments

Applicant's arguments filed 11/24/2025 have been fully considered but they are not persuasive. Applicant's arguments will be addressed below in the order in which they appear.

Applicant argues that the subject matter of claims 1 and 10 integrates the judicial exception into a practical application, similar to Claim 2 of Example 46 of the USPTO's Guidance. Applicant argues that, similar to claim 2 of Example 46, the subject matter of the pending claims uses the confidence measures to automatically send intervention data to the client device or automatically terminate the survey on the client device.
In response, Examiner submits that the limitations of “determining, with a computer system, that the measure of confidence is below a predetermined threshold” and “intervening in the survey by at least one of: automatically sending intervention data to the client device administering the survey, wherein the intervention data causes the client device to present a message to the human subject to influence responses of the human subject to prompts, wherein the response data is added to the corpus of survey once the mental health survey is complete; and automatically terminating the survey without presenting prompts other than the presented prompts and omitting the response data from the corpus of survey data” correspond to mere instructions to apply/implement/automate an abstract idea in a particular technological environment and merely limit the use of an abstract idea to a particular field or technological environment, which does not provide a practical application for an abstract idea.

Claim 2 of Example 46 is directed to a meaningful limitation in that it employs the information provided by the judicial exception (the mental analysis of whether the animal is exhibiting an aberrant behavioral pattern indicative of grass tetany) to operate the feed dispenser (automatically identifying aberrant behavioral patterns and operating farm equipment based on such identification avoids the need for the farmer to evaluate the behavior of each animal in the herd on a continual basis, and then manually take appropriate action for each animal exhibiting aberrant behaviors). The Guidance recites that “in combination with the feed dispenser enables the control of appropriate farm equipment based on the automatic detection of grass tetany, which goes beyond merely automating the abstract idea”. Hence, claim 2 of Example 46 is directed to controlling appropriate farm equipment, which goes beyond merely automating the abstract idea.
The current claims, however, are directed to mere instructions to apply/implement/automate an abstract idea (when the confidence level is below a threshold, intervening in the mental health survey by prompting, i.e., sending a message to the user), and are not directed to another meaningful judicial limitation.

Applicant argues that newly added claims 21 and 22 recite limitations in which automatic conditions improve the corpus of the survey data in real time without requiring human intervention, and that this can save computing resources, save human time and improve the response data.

In response, Examiner submits that the limitations of “the predetermined threshold comprises an intervention threshold and a termination threshold, upon determining that the measure of confidence is below the intervention threshold and at or above the termination threshold, the computer system sends the intervention data to the client device, and upon determining that the measure of confidence is below the termination threshold, the computer system terminates the survey without presenting the prompts other than the presented prompts and omits the response data from the corpus of survey data” are directed to mere instructions to apply/implement/automate an abstract idea in a particular technological environment and merely limit the use of an abstract idea to a particular field or technological environment. Also, there is no indication in the claims or in the current specification of an improvement to the functioning of the computer (saving computing resources). The claims and the current specification are directed to terminating the survey based on the confidence level, which is a mere instruction to apply/automate the abstract idea. Therefore, the argument is not persuasive and the claims are rejected under 35 U.S.C. §101 as being directed to non-statutory subject matter.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DILEK B COBANOGLU whose telephone number is (571) 272-8295. The examiner can normally be reached 8:30-5:00 ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Obeid Mamon, can be reached at (571) 270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DILEK B COBANOGLU/
Primary Examiner, Art Unit 3687
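For orientation, the reliability mechanism the rejected claims recite can be sketched in a few lines: estimate P(conditioned event | conditioning event) from the corpus of prior surveys (claim 1(c)-(d)), combine the probabilities into a confidence measure (claim 1(e)), and apply the two-threshold intervene/terminate logic of claims 21-22. This is a minimal hypothetical illustration only: every function name, the averaging step, the thresholds, and the toy corpus are assumptions, and the claimed latency metadata and machine learning model are omitted.

```python
# Hypothetical sketch of the claimed confidence computation; names,
# thresholds, and the combination rule are illustrative assumptions.

def conditional_probability(corpus, conditioning, conditioned):
    """P(conditioned event | conditioning event), estimated over all
    surveys in the corpus that contain the conditioning event."""
    with_conditioning = [s for s in corpus if conditioning in s]
    if not with_conditioning:
        return None
    return sum(conditioned in s for s in with_conditioning) / len(with_conditioning)

def confidence_vector(corpus, response_events, event_pairs):
    """One probability per event pair present in the current response,
    mirroring the repeated steps (c)-(d) of claim 1."""
    probs = []
    for conditioning, conditioned in event_pairs:
        if conditioning in response_events and conditioned in response_events:
            p = conditional_probability(corpus, conditioning, conditioned)
            if p is not None:
                probs.append(p)
    return probs

def intervene(confidence, intervention_threshold=0.5, termination_threshold=0.2):
    """Two-threshold decision of claims 21-22 (threshold values assumed)."""
    if confidence < termination_threshold:
        return "terminate"   # omit the response data from the corpus
    if confidence < intervention_threshold:
        return "intervene"   # send an intervention message to the client
    return "continue"

# Toy corpus: each survey is a set of observed events.
corpus = [{"A", "B"}, {"A", "B"}, {"A"}, {"B"}]
vec = confidence_vector(corpus, {"A", "B"}, [("A", "B")])
confidence = sum(vec) / len(vec)  # simple average; the claim leaves the
                                  # combination step open-ended
print(confidence, intervene(confidence))
```

In the toy corpus, three surveys contain event A and two of those also contain event B, so P(B | A) = 2/3 and the response clears both thresholds. Note this sketch is also a fair illustration of the examiner's §101 position: nothing in it requires more than generic computation.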

Prosecution Timeline

Oct 04, 2021
Application Filed
Sep 16, 2023
Non-Final Rejection — §101, §112
Jan 22, 2024
Response Filed
May 17, 2024
Final Rejection — §101, §112
Aug 23, 2024
Request for Continued Examination
Aug 26, 2024
Response after Non-Final Action
Nov 27, 2024
Non-Final Rejection — §101, §112
Apr 03, 2025
Response Filed
Jul 23, 2025
Final Rejection — §101, §112
Oct 23, 2025
Response after Non-Final Action
Nov 24, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574434
METHOD OF HUB COMMUNICATION, PROCESSING, DISPLAY, AND CLOUD ANALYTICS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12500948
METHOD OF HUB COMMUNICATION, PROCESSING, DISPLAY, AND CLOUD ANALYTICS
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12482562
SYSTEMS AND METHODS FOR AND DISPLAYING PATIENT DATA
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12380972
DATA COMMAND CENTER VISUAL DISPLAY SYSTEM
Granted Aug 05, 2025 (2y 5m to grant)
Patent 12334223
LEARNING APPARATUS, MENTAL STATE SEQUENCE PREDICTION APPARATUS, LEARNING METHOD, MENTAL STATE SEQUENCE PREDICTION METHOD AND PROGRAM
Granted Jun 17, 2025 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 33%
With Interview: 61% (+27.9%)
Median Time to Grant: 4y 9m
PTA Risk: High

Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.
