Prosecution Insights
Last updated: April 19, 2026
Application No. 18/571,207

SYSTEM AND METHOD FOR DIAGNOSING MENTAL DISORDER AND PREDICTING TREATMENT RESPONSE ON BASIS OF PSYCHIATRIC EXAMINATION DATA USING EYE TRACKING

Status: Non-Final OA (§101, §102, §103)
Filed: Dec 15, 2023
Examiner: ROBERTS, ANNA L
Art Unit: 3791
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Happymind Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 55% (Moderate)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 55% (81 granted / 147 resolved; -14.9% vs TC avg)
Interview Lift: +43.0% (strong; comparing resolved cases with vs. without an interview)
Avg Prosecution: 3y 7m (typical timeline); 47 applications currently pending
Total Applications: 194 across all art units (career history)
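The headline figures above are simple derived ratios. A minimal sketch of how they reconcile, under two assumptions not stated by the dashboard: the allow rate is granted over resolved (rounded to the nearest percent), and the "interview lift" is an absolute gain in percentage points over the career rate.

```python
# Reconcile the dashboard's headline examiner figures from the raw counts.
# Assumptions (ours, not the dashboard's): allow rate = granted / resolved,
# and "interview lift" is an absolute percentage-point gain.

granted, resolved = 81, 147

allow_rate_pct = 100 * granted / resolved
print(f"Career allow rate: {allow_rate_pct:.0f}%")  # 55%

base_rate_pct = 55.0        # career allow rate (the without-interview baseline)
interview_lift_pts = 43.0   # reported "+43.0% interview lift"
with_interview_pct = base_rate_pct + interview_lift_pts
print(f"With interview: {with_interview_pct:.0f}%")  # 98%
```

Under that reading, the 98% "With Interview" figure is exactly the 55% career rate plus the +43-point lift, so the three numbers are internally consistent.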

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Tech Center average is an estimate • Based on career data from 147 resolved cases
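A quick consistency check on the per-statute deltas: reading "vs TC avg" as rate minus Tech Center average (an assumption about the dashboard's convention), the implied TC average can be backed out from every row.

```python
# Back out the implied Tech Center average from each statute's reported delta.
# Assumed convention: delta = examiner rate - TC avg, so TC avg = rate - delta.

reported = {  # statute: (examiner rate %, delta vs TC avg, in points)
    "§101": (15.8, -24.2),
    "§103": (40.1, +0.1),
    "§102": (15.1, -24.9),
    "§112": (22.6, -17.4),
}

implied = {s: rate - delta for s, (rate, delta) in reported.items()}
for statute, tc_avg in implied.items():
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
```

All four rows back out to the same 40.0% estimate, consistent with the single Tech Center average line on the original chart.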

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claims 5 and 8 are objected to because of the following informalities: In claim 5, line 4, "the number of saccades" should be --[[the]] a number of saccades--. In claim 8, lines 4-5, “in which latency time data on a time” should be --in which latency time data [[on]] for a time--. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: information generation unit, input unit, and result-deriving unit in claim 10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Accordingly, each of these limitations is interpreted according to Fig. 1 and paragraphs 32-34 of the disclosure, which describe the elements as components of a processor.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Utilizing the two-step process adopted by the Supreme Court (Alice Corp. v. CLS Bank Int'l, US Supreme Court, 110 USPQ2d 1976 (2014)) and the 2019 subject matter eligibility guidance (Federal Register Vol. 84, Jan. 2019), determination of subject matter eligibility under 35 U.S.C. 101 proceeds as follows: Step 1 requires that the claim belong to one of the four statutory categories (process, machine, manufacture, or composition of matter). If Step 1 is satisfied, then in the first part of Step 2A (Prong One), any judicially recognized exceptions in the claim are identified. If any limitation in the claim is identified as a judicially recognized exception, then in the second part of Step 2A (Prong Two), a determination is made whether the identified judicial exception is integrated into a practical application. If the identified judicial exception is not integrated into a practical application, then in Step 2B, the claim is further evaluated to determine whether the additional elements, individually and in combination, provide an "inventive concept" that would amount to significantly more than the judicial exception. If the elements and combination of elements do not amount to significantly more than the judicially recognized exception itself, then the claim is ineligible under 35 U.S.C. 101.

Claims 1-10 are rejected under 35 U.S.C. 101.
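The two-step framework just described can be summarized as a decision procedure. The sketch below is purely illustrative: each branch stands in for a legal judgment, not a computable test, and the predicate names are invented for this illustration.

```python
def is_eligible(claim: dict) -> bool:
    """Schematic Alice/Mayo flow, following the steps described above.

    Each dict key stands in for a legal conclusion reached by the
    examiner; none of these are computable tests.
    """
    if not claim["statutory_category"]:           # Step 1: process/machine/manufacture/composition
        return False
    if not claim["recites_judicial_exception"]:   # Step 2A, Prong One
        return True
    if claim["practical_application"]:            # Step 2A, Prong Two: integration
        return True
    return claim["inventive_concept"]             # Step 2B: significantly more

# The path this Office Action takes for claims 1-10: statutory method/system,
# abstract idea recited, no practical application, no inventive concept.
office_action_view = {
    "statutory_category": True,
    "recites_judicial_exception": True,
    "practical_application": False,
    "inventive_concept": False,
}
print(is_eligible(office_action_view))  # False -> ineligible under § 101
```

Note the short-circuit structure: a claim that never recites a judicial exception, or that integrates one into a practical application, is eligible without ever reaching Step 2B.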
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in this case an abstract idea, without significantly more. The claim recites "deriving result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses". This judicial exception is not integrated into a practical application, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 1 satisfies Step 1: the claim is directed to one of the four statutory classes, a method.

Following Step 2A, Prong One, any judicial exceptions are identified in the claims. In claim 1, the limitations "deriving result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses" are abstract ideas, as they are directed to a mental process and mathematical calculation.

With an abstract idea identified, the next phase is to proceed to Step 2A, Prong Two, in which the additional elements, taken as a whole, are evaluated to determine whether the identified abstract idea is integrated into a practical application. In Step 2A, Prong Two, the claim does not recite any additional elements or evidence that amounts to significantly more than the judicial exception.
Besides the abstract idea, the claim recites the additional elements “generating user attention information using user's eye tracking on a monitor screen on which a test is being performed; receiving an input of psychiatric examination data including the generated user attention information into a learning model for diagnosing mental disorders and predicting treatment responses”. However, these components may be seen as the use of well-understood, routine, or conventional elements to perform a non-mental process in order to gather data for the mental process step, much like the example given in MPEP 2106.04(d)(2)(c), such that these limitations are extra-solution activity and thus do not integrate the judicial exception into a practical application.

The generating and receiving steps lead to the final limitation of “deriving result information”, such that the end result of use of the system is only the generic derived result, which may be any generic output, or no output at all. As this derivation is not defined as requiring any further action, such as a particular form of prophylaxis or treatment or an improvement to a computer or other technology, the claim limitations constitute mere generation of data, in this case the measurement of data relating to user attention information, such that the claim does not integrate the judicial exception into any practical application. Under the broadest reasonable interpretation, the claim elements are recited at such a high level of generality (as written, the process may be performed by a person in an undefined manner using any generic learning model) that there are no meaningful limitations on the abstract idea.

Consequently, with the identified abstract idea not being integrated into a practical application, the next step is Step 2B: evaluating whether the additional elements provide an "inventive concept" that would amount to significantly more than the abstract idea.
In Step 2B, claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation of “generating user attention information using user's eye tracking on a monitor screen on which a test is being performed; receiving an input of psychiatric examination data including the generated user attention information into a learning model for diagnosing mental disorders and predicting treatment responses” constitutes extra-solution activity to the judicial exception, which does not amount to an inventive concept when the activity is well-understood, routine, or conventional, and is thus not indicative of integration into a practical application. The claim limitation constitutes adding a generic monitoring screen and eye-tracking system, which Eizenman (US 20140148728 A1) describes as well-understood, routine, or conventional in its description of various prior art references which include eye tracking systems and related computer-based monitoring systems (Paragraphs 0003-0012) as well as “computing devices that include, but are not limited to, desk-top computers, portable computers, mobile computing devices such as tablets or cell phones with either internal eye-tracking devices (i.e., eye-tracking devices that are supported by the operating system of the computing devices) or eye-tracking devices that are external to the computing device (eg., data from the eye-tracker is transferred through one of the communication ports of the computing device)” (Paragraph 0043).

Bower (US 20190216392 A1) similarly discloses that such elements are well-understood, routine, or conventional in the art in paragraphs 0045, 0142, and 0371, which describe common user response measurement means such as eye-tracking software/devices and computing systems including a main computer and desktop display. Abel Fernandez (US 20210174959 A1) additionally discloses that eye tracking as a diagnostic tool has been implemented in the art.
As discussed above with respect to integration of the abstract idea into a practical application, the present elements amount to no more than mere instructions to apply the exception. In summary, claim 1 recites an abstract idea without integrating it into a practical application, and does not provide additional elements that would amount to significantly more. As such, taken as a whole, the claim is ineligible under 35 U.S.C. 101.

Claims 2-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in this case an abstract idea, without significantly more. As each of these claims depends from claim 1, which was rejected under 35 U.S.C. 101 in paragraph 9 of this action, these claims must be evaluated on whether they sufficiently add to the practical application of claim 1, or comprise significantly more than the limitations of claim 1. Besides the abstract idea of claim 1: claim 2 recites further limitations of the abstract idea which are themselves abstract, in this case where a person alone or with a generic computer may be capable of using a data set for diagnosing mental disorders and predicting treatment responses to construct a model; claims 3-9 recite further limitations of the additional elements which amount to the use of well-understood, routine, or conventional elements to perform a non-mental process in order to gather data for the mental process step, much like the example given in MPEP 2106.04(d)(2)(c), where the well-understood, routine, or conventional elements are the same as those of claim 1, and the claims merely recite further common measurements which may be performed with these elements and which are generated pre-solution to be used in the abstract idea.
The claim element of claim 1 of a method of diagnosing mental disorders and predicting treatment responses performed by a psychiatric examination system is recited with a high level of generality (as written, the steps may be carried out by a person alone or with a generic computer in any undefined manner). This limitation provides no practical application, nor does it provide meaningful limitations to the abstract idea.

Claim 10 is rejected under 35 U.S.C. 101 for similar reasons to claim 1. It is additionally noted, regarding analysis of claim 10 under Step 2A: Regarding “a result-deriving unit”, the limitation amounts to nothing more than an instruction to apply the abstract idea using a generic computer, which does not render an abstract idea eligible. The steps performed by the result-deriving unit are, as claimed, capable of being performed in the human mind similar to the examples given in MPEP 2106.04(a)(2)(III)(A)-(C), wherein it is described that “a claim to ‘collecting information, analyzing it, and displaying certain results of the collection and analysis’ where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind” recites a mental process and that claims which merely use a computer as a tool to perform a mental process are not eligible when “there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper” such as “mental processes of parsing and comparing data” when the steps are recited at a high level of generality and a computer is used merely as a tool to perform the processes.
Furthermore, regarding analysis of claim 10 under Step 2B: The limitations of “an information generation unit”, “a monitor screen”, and “an input unit” constitute extra-solution activity to the judicial exception, which does not amount to an inventive concept when the activity is well-understood, routine, or conventional, and are thus not indicative of integration into a practical application. The claim limitation constitutes adding a generic processor, which Eizenman (US 20140148728 A1) describes as well-understood, routine, or conventional in its description of various prior art references which include eye tracking systems and related computer-based monitoring systems (Paragraphs 0003-0012) as well as “computing devices that include, but are not limited to, desk-top computers, portable computers, mobile computing devices such as tablets or cell phones with either internal eye-tracking devices (i.e., eye-tracking devices that are supported by the operating system of the computing devices) or eye-tracking devices that are external to the computing device (eg., data from the eye-tracker is transferred through one of the communication ports of the computing device)” (Paragraph 0043).

Bower (US 20190216392 A1) similarly discloses that such elements are well-understood, routine, or conventional in the art in paragraphs 0045, 0142, and 0371, which describe common user response measurement means such as eye-tracking software/devices and computing systems including a main computer and desktop display. Abel Fernandez (US 20210174959 A1) additionally discloses that eye tracking as a diagnostic tool has been implemented in the art.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5 and 10 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Eizenman (US 20140148728 A1).

Regarding claim 1, Eizenman teaches a method of diagnosing mental disorders and predicting treatment responses performed by a psychiatric examination system (Paragraph 0013-0018, 0036-- a method of identifying individuals with neuropsychiatric disorders or to predict and determine the efficacy of treatment of the disorder or detecting individuals who suffer a trauma to the brain by acquiring information about visual scanning behaviour and fluctuations of visual scanning behaviour of individuals), the method comprising: generating user attention information using user's eye tracking on a monitor screen on which a test is being performed (Paragraph 0014-0018-- presenting to the individual a sequence of visual stimuli, wherein each visual stimulus is comprised of multiple images with specific characteristics, measuring the point-of-gaze of said subject on the visual stimuli; Paragraph 0037-0044-- Visual stimuli are presented on a computer monitor… Participants view a sequence of slides and the eye movement data is collected by a computer…steps of a) Presenting images; b) monitoring and recording eye movements during the presentation); receiving an input of psychiatric examination data
including the generated user attention information into a learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0034-- The present invention uses parameters derived from the subject's VSB when viewing images to identify individual with neuropsychiatric disorders or to predict the efficacy of a treatment of a disorder; paragraph 0128-0129--differences between the relative fixation times on social images and neutral images are used for the identification of apathy (using a naive Baysian Classifier)…; claim 9-- determination of biases comprises the comparison of statistical measures of individual visual scanning parameters with those of controls using confidence intervals, likelihood ratio detectors, linear classifiers, non-linear and neural network classifiers or a combination thereof.); and deriving result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0013-0014-- predict and determine the efficacy of treatment of the disorder; claim 9-- determination of biases comprises the comparison of statistical measures of individual visual scanning parameters with those of controls using confidence intervals, likelihood ratio detectors, linear classifiers, non-linear and neural network classifiers or a combination thereof.).

Regarding claim 2, Eizenman teaches the method of claim 1.
Eizenman additionally teaches wherein the learning model is constructed by training using a data set for diagnosing mental disorders and predicting treatment responses (paragraph 0014-0018, 0036, 0042, 0072--making a determination of biases in visual scanning behaviour of the individual, by comparing the statistical measures of the individual to the statistical measures of controls; claim 9-- determination of biases comprises the comparison of statistical measures of individual visual scanning parameters with those of controls using confidence intervals, likelihood ratio detectors, linear classifiers, non-linear and neural network classifiers or a combination thereof…In accordance with the present invention for each visual scanning parameter or a set of visual scanning parameters statistical tests that compare the statistical description(s) of the VSB of the individual being tested with the statistical description of the VSB of control groups to determine if the individual suffers from the specific disorder that the assessment task is designed to identify. The control group can be a group of individuals that do not suffer from the disorder that the assessment task is designed to identify or/and a control group of individuals that suffer from the disorder that the assessment task is designed to identify).

Regarding claim 3, Eizenman teaches the method of claim 1. Eizenman additionally teaches wherein the generating of the user attention information includes determining, from gaze coordinate values using eye tracking, a degree of attention on whether a user is looking at the monitor screen on which the test is being performed or a target area set within the monitor screen on the basis of a pre-set number of frames per second (Paragraph 0044-0047-- Fixations can be identified, for example, by clusters of gaze points that are within a specific distance (e.g. 1 degree) from each other for a time period that is greater than a minimum (eg., 200 milliseconds).
Each fixation can be characterized by a set of parameters (FIG. 3(a)) such as: mean position on the display, duration and the order in the sequence of fixations from the time that a visual stimulus was presented. Each fixation is linked to a specific image on the display so that the fixation behaviour can be analysed with respect to the defining characteristics of images presented to the subject (see FIG. 3(b)) and with respect to the defining characteristics of specific regions (areas of interest) within an image.; paragraph 0049-0065-- Total number of fixations on each image or AOI within an image during each slide presentation… Relative number of fixations: Total number of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide… Relative duration of fixations: Total duration of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide…).

Regarding claim 4, Eizenman teaches the method of claim 1.
Eizenman additionally teaches wherein the generating of the user attention information includes setting an area of interest in a target area within the monitor screen on which the test is being performed (Paragraph 0037-0047-- Visual stimuli are presented on a computer monitor (for example, a 19 inch monitor) and each visual stimulus (slide) includes several distinct images…AOI within the image (e.g., color, corners)), and determining whether the user is focusing on the set area of interest on the basis of pre-set criteria information (Paragraph 0046-- If the average fixation position falls within the boundaries of an image or an area of interest (AOI) within an image, the characteristics of the image (e.g., valence, complexity) and the AOI within the image (e.g., color, corners) are recorded as part of the description of the fixation; paragraph 0049-0065-- Total number of fixations on each image or AOI within an image during each slide presentation… Relative number of fixations: Total number of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide… Relative duration of fixations: Total duration of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide…).

Regarding claim 5, Eizenman teaches the method of claim 1.
Eizenman additionally teaches wherein the generating of the user attention information includes determining eye movement state information related to user's eye movement on a target area within the monitor screen on which the test is being performed using the number of saccades or eye movement fixation (paragraph 0049-0065-- Total number of fixations on each image or AOI within an image during each slide presentation… Relative number of fixations: Total number of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide… Relative duration of fixations: Total duration of fixations on each image or AOI within an image divided by the total number of fixations on all images or AOIs on the slide…).

Regarding claim 10, Eizenman teaches a psychiatric examination system comprising (Paragraph 0013-0018, 0036-- a method of identifying individuals with neuropsychiatric disorders or to predict and determine the efficacy of treatment of the disorder or detecting individuals who suffer a trauma to the brain by acquiring information about visual scanning behaviour and fluctuations of visual scanning behaviour of individuals), the system comprising: an information generation unit configured to generate user attention information (Paragraph 0036--Eye tracking module configured to monitor the gaze position of the subject during viewing of visual stimuli) using user's eye tracking on a monitor screen (Fig.
1—presentation monitor; Paragraph 0019-0020) on which a test is being performed (Paragraph 0014-0018-- presenting to the individual a sequence of visual stimuli, wherein each visual stimulus is comprised of multiple images with specific characteristics, measuring the point-of-gaze of said subject on the visual stimuli; Paragraph 0037-0044-- Visual stimuli are presented on a computer monitor… Participants view a sequence of slides and the eye movement data is collected by a computer…steps of a) Presenting images; b) monitoring and recording eye movements during the presentation); an input unit configured to receive an input of psychiatric examination data including the generated user attention information (Paragraph 0036—computing module linked to the eye-tracking module to receive the gaze position data from the eye tracking module is utilized, to analyze the data and to derive a set of visual scanning parameters…) into a learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0034-- The present invention uses parameters derived from the subject's VSB when viewing images to identify individual with neuropsychiatric disorders or to predict the efficacy of a treatment of a disorder; paragraph 0128-0129--differences between the relative fixation times on social images and neutral images are used for the identification of apathy (using a naive Baysian Classifier)…; claim 9-- determination of biases comprises the comparison of statistical measures of individual visual scanning parameters with those of controls using confidence intervals, likelihood ratio detectors, linear classifiers, non-linear and neural network classifiers or a combination thereof.); and a result-deriving unit (Paragraph 0036--A computing module linked to the eye-tracking module to receive the gaze position data from the eye tracking module is utilized, to analyze the data and to derive a set of visual scanning parameters, and to compare the set of parameters for 
an individual with those of control subjects) configured to derive result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0013-0014-- predict and determine the efficacy of treatment of the disorder; claim 9-- determination of biases comprises the comparison of statistical measures of individual visual scanning parameters with those of controls using confidence intervals, likelihood ratio detectors, linear classifiers, non-linear and neural network classifiers or a combination thereof.).

Claim(s) 1-2 and 10 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Bower (US 20190216392 A1).

Regarding claim 1, Bower teaches a method of diagnosing mental disorders and predicting treatment responses performed by a psychiatric examination system (Paragraph 0003-0011-- apparatus, systems and methods cognitive platforms configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide a user performance metric.
The example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's physiological condition or cognitive bias; Paragraph 0043-- The example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of conditions, such as but not limited to social anxiety, depression, bipolar disorder, major depressive disorder…), the method comprising: generating user attention information using user's eye tracking on a monitor screen on which a test is being performed (Paragraph 0052, 0156-- Examples of physiological measurements include… eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation; paragraph 0208-0209-- The user may respond to tasks by interacting with the computer device… data indicative of the individual's response can include physiological sensors/measures to incorporate inputs from the user's physical state, such as… eye movements, pupil dilation); receiving an input of psychiatric examination data including the generated user attention information into a learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0046, 0138, 0188-0189-- An example system, method, and apparatus according to the principles herein can be configured to execute an example classifier to generate a quantifier of the cognitive skills in an individual. 
The example classifier can be built using a machine learning tool, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, and/or artificial neural networks… The trained classifier can be applied to measures of the responses of the individual to the tasks and/or interference (either or both with computer-implemented time-varying element) to classify the individual as to a population label (e.g., cognitive disorder, executive function disorder, disease or other cognitive condition)); and deriving result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0186-0189-- An example system, method, and apparatus according to the principles herein can be configured to execute an example classifier to generate a quantifier of the cognitive skills in an individual. 
The example classifier can be built using a machine learning tool, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, and/or artificial neural networks… The trained classifier can be applied to measures of the responses of the individual to the tasks and/or interference (either or both with computer-implemented time-varying element) to classify the individual as to a population label (e.g., cognitive disorder, executive function disorder, disease or other cognitive condition)… a classifier using the performance measures of a labeled population of individuals, based on each individual's computed performance metrics, and other known outcome data on the individual, such as but not limited to outcome in the following categories: (i) an adverse event each individual experience in response to administration of a particular pharmaceutical agent, drug, or biologic; (ii) the amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic, administered to the individuals that resulted in a measurable or characterizable outcome for the individual (whether positive or negative)… The example classifier can be trained based on the computed values of performance metrics of the known individuals, to be able to classify other yet-to-be classified individuals as to potential outcome in any of the possible categories; paragraph 0138, 0162--the App 214 can include processor-executable instructions to provide one or more of: (i) a predictive model output indicative of the cognitive capabilities of the individual, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, and (iv) a change in the individual's cognitive capabilities, a 
recommended treatment regimen, or recommending or determining a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise). Regarding claim 2, Bower teaches the method of claim 1. Bower additionally teaches wherein the learning model is constructed by training using a data set for diagnosing mental disorders and predicting treatment responses (Paragraph 0189-- classification techniques that may be used to train a classifier using the performance measures of a labeled population of individuals, based on each individual's computed performance metrics, and other known outcome data on the individual, such as but not limited to outcome in the following categories: (i) an adverse event each individual experience in response to administration of a particular pharmaceutical agent, drug, or biologic; (ii) the amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic, administered to the individuals that resulted in a measurable or characterizable outcome for the individual (whether positive or negative); (iii) any change in the individual's cognitive capabilities based on one or more interactions with the single-tasking and multi-tasking tasks rendered using the computing devices herein; (iv) a recommended treatment regimen, or recommending or determining a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise that resulted in a measurable or characterizable outcome for the individual (whether positive or negative); (v) the performance score of the individual at one or more of a cognitive test or a behavioral test, and (vi) the status or degree of progression of a cognitive condition, a disease or an executive function disorder of the individual). 
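The classification approach mapped above (Eizenman's naive Bayesian classifier over relative fixation times on social versus neutral images; Bower's trained classifier over performance measures) can be illustrated with a minimal Gaussian naive Bayes sketch. All feature values, labels, and thresholds below are hypothetical, invented for illustration; they are not taken from either reference.

```python
# Minimal Gaussian naive Bayes over visual-scanning parameters.
# Hypothetical data; not the actual features or values of any cited reference.
import math

def gaussian_pdf(x, mean, var):
    """Probability density of x under a normal distribution N(mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(samples):
    """Per-class feature means/variances; samples: {label: [[f1, f2], ...]}."""
    params = {}
    for label, rows in samples.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-6  # variance floor
                 for col, m in zip(zip(*rows), means)]
        params[label] = (means, varis, n)
    return params

def classify(params, features):
    """Maximum a posteriori label for a feature vector."""
    total = sum(n for _, _, n in params.values())
    best, best_score = None, float("-inf")
    for label, (means, varis, n) in params.items():
        score = math.log(n / total)  # class prior
        for x, m, v in zip(features, means, varis):
            score += math.log(gaussian_pdf(x, m, v))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical relative-fixation-time features: [social images, neutral images]
training = {
    "control": [[0.62, 0.38], [0.58, 0.42], [0.65, 0.35]],
    "apathy":  [[0.45, 0.55], [0.41, 0.59], [0.48, 0.52]],
}
model = fit(training)
print(classify(model, [0.60, 0.40]))  # → control
```

The same structure generalizes to the other classifiers Bower lists (logistic regression, random forests, neural networks): fit parameters on labeled gaze data, then score new feature vectors against each class.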
Regarding claim 10, Bower teaches a psychiatric examination system (Paragraph 0003-0011-- apparatus, systems and methods cognitive platforms configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide a user performance metric. The example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's physiological condition or cognitive bias; Paragraph 0043-- The example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of conditions, such as but not limited to social anxiety, depression, bipolar disorder, major depressive disorder…), the system comprising: an information generation unit configured to generate user attention information using user's eye tracking on a monitor screen on which a test is being performed (Paragraph 0052, 0156-- Examples of physiological measurements include… eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation; paragraph 0208-0209-- The user may respond to tasks by interacting with the computer device… data indicative of the individual's response can include physiological sensors/measures to incorporate inputs from the user's physical state, such as… eye movements, pupil dilation); an input unit configured to receive an input of psychiatric examination data including the generated user attention information into a learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0046, 0138, 0188-0189-- An example system, method, and apparatus according to the principles herein can be configured to execute an example classifier to generate a quantifier of the cognitive skills
in an individual. The example classifier can be built using a machine learning tool, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, and/or artificial neural networks… The trained classifier can be applied to measures of the responses of the individual to the tasks and/or interference (either or both with computer-implemented time-varying element) to classify the individual as to a population label (e.g., cognitive disorder, executive function disorder, disease or other cognitive condition)); and a result-deriving unit configured to derive result information related to mental disorder diagnosis and treatment response from the psychiatric examination data including the user attention information by using the learning model for diagnosing mental disorders and predicting treatment responses (Paragraph 0186-0189-- An example system, method, and apparatus according to the principles herein can be configured to execute an example classifier to generate a quantifier of the cognitive skills in an individual. 
The example classifier can be built using a machine learning tool, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, and/or artificial neural networks… The trained classifier can be applied to measures of the responses of the individual to the tasks and/or interference (either or both with computer-implemented time-varying element) to classify the individual as to a population label (e.g., cognitive disorder, executive function disorder, disease or other cognitive condition)… a classifier using the performance measures of a labeled population of individuals, based on each individual's computed performance metrics, and other known outcome data on the individual, such as but not limited to outcome in the following categories: (i) an adverse event each individual experience in response to administration of a particular pharmaceutical agent, drug, or biologic; (ii) the amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic, administered to the individuals that resulted in a measurable or characterizable outcome for the individual (whether positive or negative)… The example classifier can be trained based on the computed values of performance metrics of the known individuals, to be able to classify other yet-to-be classified individuals as to potential outcome in any of the possible categories; paragraph 0138, 0162--the App 214 can include processor-executable instructions to provide one or more of: (i) a predictive model output indicative of the cognitive capabilities of the individual, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, and (iv) a change in the individual's cognitive capabilities, a 
recommended treatment regimen, or recommending or determining a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eizenman in view of Suzuki (US 20190311484 A1). Regarding claim 6, Eizenman teaches the method of claim 1. However, Eizenman does not explicitly disclose wherein the generating of the user attention information includes calculating speed information on user's eye movement by using gaze coordinates of a user's gaze on the monitor screen on which the test is being performed or a target area set within the monitor screen and time data, and measuring a variation of the calculated speed information on the user's eye movement.
Suzuki, in the same field of endeavor of monitoring a mental state of a user based on eye movement tracking (Paragraph 0001-0003), discloses wherein the generating of the user attention information includes calculating speed information on user's eye movement by using gaze coordinates of a user's gaze on the monitor screen on which the test is being performed or a target area set within the monitor screen and time data (Paragraph 0041-0044-- The moving state input unit 21 receives and inputs information indicating eye movement speeds in a time series which is transmitted from the camera 10. The moving state input unit 21 outputs the input information indicating the eye movement speeds to the saccade period extracting unit 22), and measuring a variation of the calculated speed information on the user's eye movement (Paragraph 0041-0044, 0047-- extracts a saccade period in which an eye performs a micro-saccade based on timer-series variation of the eye movement speeds indicated by the information input from the moving state input unit 21). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Eizenman to additionally include the speed determination of Suzuki in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user. Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bower in view of Suzuki (US 20190311484 A1). Regarding claim 6, Bower teaches the method of claim 1. However, Bower does not explicitly disclose wherein the generating of the user attention information includes calculating speed information on user's eye movement by using gaze coordinates of a user's gaze on the monitor screen on which the test is being performed or a target area set within the monitor screen and time data, and measuring a variation of the calculated speed information on the user's eye movement. 
Suzuki, in the same field of endeavor of monitoring a mental state of a user based on eye movement tracking (Paragraph 0001-0003), discloses wherein the generating of the user attention information includes calculating speed information on user's eye movement by using gaze coordinates of a user's gaze on the monitor screen on which the test is being performed or a target area set within the monitor screen and time data (Paragraph 0041-0044-- The moving state input unit 21 receives and inputs information indicating eye movement speeds in a time series which is transmitted from the camera 10. The moving state input unit 21 outputs the input information indicating the eye movement speeds to the saccade period extracting unit 22), and measuring a variation of the calculated speed information on the user's eye movement (Paragraph 0041-0044, 0047-- extracts a saccade period in which an eye performs a micro-saccade based on timer-series variation of the eye movement speeds indicated by the information input from the moving state input unit 21). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Bower to additionally include the speed determination of Suzuki in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user.

Claim(s) 7-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eizenman in view of Katnani (US 20200029880 A1). Regarding claim 7, Eizenman teaches the method of claim 1. However, Eizenman does not explicitly disclose wherein the generating of the user attention information includes setting the remaining areas other than an area of interest set in a target area within the monitor screen on which the test is being performed as non-interest areas, and determining response inhibition information related to a user's gaze at the set non-interest areas using visual indicators.
Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes setting the remaining areas other than an area of interest set in a target area within the monitor screen on which the test is being performed as non-interest areas, and determining response inhibition information related to a user's gaze at the set non-interest areas using visual indicators (Paragraph 0010, 0033, 0039, 0059-- In one embodiment, the inhibitory reflex test comprises an anti-saccade task. In this case, displaying the inhibitory reflex test may comprise displaying a motionless target in a center of a field of vision of the user, and subsequently displaying a first visual stimulus in a periphery of the field of vision of the user. The user may be instructed to fixate on the motionless target, and to make a saccade in a direction away from the first visual stimulus when the first visual stimulus is displayed in the periphery of the field of vision of the user… The level of impairment of the user may be determined by determining either a reaction time or an error of the user in response to the anti-saccade task). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Eizenman to additionally include the response inhibition of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that response inhibition changes may demonstrate an individual's capacity for sustained attention and response control (Paragraph 0039-0041 of Katnani). Regarding claim 8, Eizenman teaches the method of claim 1.
However, Eizenman does not explicitly disclose wherein the generating of the user attention information includes measuring latency time data for a time until user's eye movement occurs in order to gaze at a new stimulus when the new stimulus appears on the monitor screen on which the test is being performed, in a case in which latency time data on a time until the user's eye movement occurs is greater than or equal to pre-set criteria. Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes measuring latency time data for a time until user's eye movement occurs in order to gaze at a new stimulus when the new stimulus appears on the monitor screen on which the test is being performed, in a case in which latency time data on a time until the user's eye movement occurs is greater than or equal to pre-set criteria (Paragraph 0058-0079, 0230, 0258-0279-- ii. an average saccadic latency, saccadic latency defined as an amount of time for the subject to initiate a saccade to the zone… a degree of compromise in executive processes, with increased saccadic latency). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Eizenman to additionally include the latency measurements of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that response inhibition changes may demonstrate an individual’s executive processing ability (Paragraph 0058-0079, 0230, 0258-0279 of Katnani). Regarding claim 9, Eizenman teaches the method of claim 1. 
However, Eizenman does not explicitly disclose wherein the generating of the user attention information includes extracting data on a change in pupil size of a user according to: an area of interest which is set in a target area within the monitor screen on which the test is being performed; and time. Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes extracting data on a change in pupil size of a user according to: an area of interest which is set in a target area within the monitor screen on which the test is being performed; and time (Paragraph 0025-0027, 0034-0040, 0107-0110, 0131-0134, 0148-0151, 0200-0206-- a. track the pupil diameter of the subject reading the text; and b. if the pupil diameter of the subject does not show a reduction as advancing in reading the text, then report in the test report that that a compromise in executive processes is detected… a means [17] for measuring a pupil diameter of the subject, wherein the processor is further configured for calculating fixation durations on targets of person while performing the visual test, if the fixation duration of the subject [645] while fixating on targets is lower than for the control group, then report in the test report that that a compromise in attentional and executive processes is detected). 
It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Eizenman to additionally include the pupil diameter monitoring of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that pupil diameter changes may demonstrate an individual's attentional and executive processing ability (Paragraph 0025-0027, 0034-0040, 0107-0110, 0131-0134, 0148-0151, 0200-0206 of Katnani).

Claim(s) 7-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bower in view of Katnani (US 20200029880 A1). Regarding claim 7, Bower teaches the method of claim 1. However, Bower does not explicitly disclose wherein the generating of the user attention information includes setting the remaining areas other than an area of interest set in a target area within the monitor screen on which the test is being performed as non-interest areas, and determining response inhibition information related to a user's gaze at the set non-interest areas using visual indicators. Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes setting the remaining areas other than an area of interest set in a target area within the monitor screen on which the test is being performed as non-interest areas, and determining response inhibition information related to a user's gaze at the set non-interest areas using visual indicators (Paragraph 0010, 0033, 0039, 0059-- In one embodiment, the inhibitory reflex test comprises an anti-saccade task.
In this case, displaying the inhibitory reflex test may comprise displaying a motionless target in a center of a field of vision of the user, and subsequently displaying a first visual stimulus in a periphery of the field of vision of the user. The user may be instructed to fixate on the motionless target, and to make a saccade in a direction away from the first visual stimulus when the first visual stimulus is displayed in the periphery of the field of vision of the user… The level of impairment of the user may be determined by determining either a reaction time or an error of the user in response to the anti-saccade task). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Bower to additionally include the response inhibition of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that response inhibition changes may demonstrate an individual’s capacity for sustained attention and response control (Paragraph 0039-0041 of Katnani). Regarding claim 8, Bower teaches the method of claim 1. However, Bower does not explicitly disclose wherein the generating of the user attention information includes measuring latency time data for a time until user's eye movement occurs in order to gaze at a new stimulus when the new stimulus appears on the monitor screen on which the test is being performed, in a case in which latency time data on a time until the user's eye movement occurs is greater than or equal to pre-set criteria. 
Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes measuring latency time data for a time until user's eye movement occurs in order to gaze at a new stimulus when the new stimulus appears on the monitor screen on which the test is being performed, in a case in which latency time data on a time until the user's eye movement occurs is greater than or equal to pre-set criteria (Paragraph 0058-0079, 0230, 0258-0279-- ii. an average saccadic latency, saccadic latency defined as an amount of time for the subject to initiate a saccade to the zone… a degree of compromise in executive processes, with increased saccadic latency). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Bower to additionally include the latency measurements of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that response inhibition changes may demonstrate an individual’s executive processing ability (Paragraph 0058-0079, 0230, 0258-0279 of Katnani). Regarding claim 9, Bower teaches the method of claim 1. However, Bower does not explicitly disclose wherein the generating of the user attention information includes extracting data on a change in pupil size of a user according to: an area of interest which is set in a target area within the monitor screen on which the test is being performed; and time. 
Katnani, in the same field of endeavor of a system and method for determining a mental condition of a user based on eye measurements (Abstract, paragraph 0006, 0010), discloses wherein the generating of the user attention information includes extracting data on a change in pupil size of a user according to: an area of interest which is set in a target area within the monitor screen on which the test is being performed; and time (Paragraph 0025-0027, 0034-0040, 0107-0110, 0131-0134, 0148-0151, 0200-0206-- a. track the pupil diameter of the subject reading the text; and b. if the pupil diameter of the subject does not show a reduction as advancing in reading the text, then report in the test report that that a compromise in executive processes is detected… a means [17] for measuring a pupil diameter of the subject, wherein the processor is further configured for calculating fixation durations on targets of person while performing the visual test, if the fixation duration of the subject [645] while fixating on targets is lower than for the control group, then report in the test report that that a compromise in attentional and executive processes is detected). It would have been obvious to one having ordinary skill in the art at the time of filing to modify the method of Bower to additionally include the pupil diameter monitoring of Katnani in order to predictably improve the method by monitoring additional metrics of eye movement which may correspond to changes in mental functioning of the user, where Katnani discloses that pupil diameter changes may demonstrate an individual's attentional and executive processing ability (Paragraph 0025-0027, 0034-0040, 0107-0110, 0131-0134, 0148-0151, 0200-0206 of Katnani).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNA ROBERTS whose telephone number is (571)272-7912. The examiner can normally be reached M-F 8:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis can be reached at (571) 272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANNA ROBERTS/Examiner, Art Unit 3791
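As context for the §103 rejections above: the claim 6 and claim 8 limitations (deriving eye-movement speed from gaze coordinates plus time data, and measuring latency until eye movement toward a new stimulus) reduce to simple signal processing over a gaze trace. A minimal sketch follows; the 100 Hz trace and the 30 deg/s onset threshold are assumptions for illustration, not values from the claims or the cited references.

```python
# Gaze speed from (timestamp, x, y) samples, plus saccadic latency relative to
# a stimulus onset. Trace and velocity threshold are hypothetical.
import math

def speeds(samples):
    """samples: [(t_seconds, x_deg, y_deg), ...] -> [(t, deg/s), ...]."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)   # angular distance moved
        out.append((t1, dist / (t1 - t0)))    # finite-difference speed
    return out

def saccade_latency(samples, stimulus_onset, threshold_deg_s=30.0):
    """Seconds from stimulus onset to the first above-threshold speed sample,
    or None if no saccade is detected after the stimulus."""
    for t, v in speeds(samples):
        if t >= stimulus_onset and v >= threshold_deg_s:
            return t - stimulus_onset
    return None

# Hypothetical 100 Hz trace: steady fixation at (0, 0), then a rapid gaze
# shift beginning at t = 0.38 s; the new stimulus appears at t = 0.20 s.
trace = [(i / 100, 0.0, 0.0) for i in range(0, 38)] + \
        [(0.38, 2.0, 0.0), (0.39, 4.0, 0.0), (0.40, 5.0, 0.0)]
latency = saccade_latency(trace, stimulus_onset=0.20)
print(f"saccadic latency: {latency:.2f} s")  # → saccadic latency: 0.18 s
```

Claim 8's "pre-set criteria" would then be a comparison of the returned latency against a threshold; the variation of the speed signal over time is what Suzuki's saccade-period extraction operates on.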

Prosecution Timeline

Dec 15, 2023
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594013
ANALYTE SENSORS AND SENSING METHODS FEATURING DUAL DETECTION OF GLUCOSE AND KETONES
2y 5m to grant Granted Apr 07, 2026
Patent 12594014
ANALYTE SENSORS AND SENSING METHODS FEATURING DUAL DETECTION OF GLUCOSE AND KETONES
2y 5m to grant Granted Apr 07, 2026
Patent 12589227
GUIDE WIRES
2y 5m to grant Granted Mar 31, 2026
Patent 12588844
ANALYTE SENSORS AND SENSING METHODS FEATURING DUAL DETECTION OF GLUCOSE AND KETONES
2y 5m to grant Granted Mar 31, 2026
Patent 12569646
SYSTEMS AND METHODS FOR PERFORMING TISSUE BIOPSY
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
55%
Grant Probability
98%
With Interview (+43.0%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
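The headline figures above relate by simple arithmetic, sketched below. This is only a reconstruction of the displayed numbers under the assumption that the interview lift adds directly to the base rate; the tool's actual model is not shown here.

```python
# Reconstructing the displayed projection figures (assumed additive model).
granted, resolved = 81, 147                 # examiner's career record
career_allow_rate = granted / resolved      # ≈ 0.551, shown as 55%
interview_lift = 0.43                       # +43.0 percentage points
with_interview = career_allow_rate + interview_lift
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
# → 55% base, 98% with interview
```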
