Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,998

PSYCHOLOGICAL EXAM SYSTEM BASED ON ARTIFICIAL INTELLIGENCE AND OPERATION METHOD THEREOF

Final Rejection: §101, §103
Filed
Oct 15, 2024
Examiner
YIP, JACK
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Omniconnect Corp.
OA Round
2 (Final)
Grant Probability
33% (At Risk)
Projected OA Rounds
3-4
Projected Time to Grant
4y 1m
Grant Probability with Interview
70%

Examiner Intelligence

Career Allow Rate
33% (229 granted / 702 resolved; -37.4% vs TC avg)
Interview Lift
+37.6% (resolved cases with interview)
Avg Prosecution
4y 1m (51 currently pending)
Total Applications
753 (across all art units)
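The panel's headline numbers are straightforward ratios. As a sanity check, a minimal sketch of the arithmetic, using the values shown above; note that the 70.0% Tech Center average is an assumption inferred from the stated -37.4% delta, not a figure the panel reports directly:

```python
# Values from the examiner panel above
granted = 229
resolved = 702
tc_avg = 70.0  # assumption: inferred as allow rate + 37.4 points

allow_rate = granted / resolved * 100  # career allow rate as a percentage
delta_vs_tc = allow_rate - tc_avg      # gap vs. the Tech Center average

print(f"{allow_rate:.1f}%")    # 32.6% (displayed rounded as 33%)
print(f"{delta_vs_tc:+.1f}%")  # -37.4%
```

The 33% shown in the panel is the rounded form of 229/702 ≈ 32.6%.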

Statute-Specific Performance

§101
22.8% (-17.2% vs TC avg)
§103
42.4% (+2.4% vs TC avg)
§102
15.0% (-25.0% vs TC avg)
§112
12.4% (-27.6% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 702 resolved cases
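Each per-statute delta is the examiner's allowance rate minus the Tech Center average. A quick sketch of that computation; the 40.0% baseline is an assumption, inferred because it is consistent with all four deltas shown:

```python
# Assumed Tech Center average allowance rate, inferred from the deltas above
tc_avg = 40.0
rates = {"§101": 22.8, "§103": 42.4, "§102": 15.0, "§112": 12.4}

# Delta vs. the Tech Center average for each statute
deltas = {statute: round(rate - tc_avg, 1) for statute, rate in rates.items()}
print(deltas)  # {'§101': -17.2, '§103': 2.4, '§102': -25.0, '§112': -27.6}
```

That every delta resolves to the same baseline suggests the chart compared all four statutes against one overall Tech Center figure rather than per-statute averages.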

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the amendment filed 6/13/2025: claims 12, 14-15, 17, 19-20, and 22-30 are pending; claims 1-11, 13, 16, 18, and 21 have been cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 12, 14-15, 17, 19-20, and 22-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention a statutory category of invention? Claims 12, 17, and 20 are directed to a method/system/computer program for providing a psychological exam (Step 1: Yes).

Step 2A, Prong 1: Does the claim recite an abstract idea?
The claims recite the following limitations: … sequentially providing psychological exam content [[for]] having different stimulus styles corresponding to an emotional task, a cognitive style task, and an anti-saccade task, to respectively measure a plurality of personality factors, the plurality of personality factors including a first personality factor measured sensitively by the emotional task, a second personality factor measured sensitively by the cognitive style task, and a third personality factor measured sensitively by the anti-saccade task; acquiring eye tracking data for each of the provided psychological exam content through a camera sensor in real-time; extracting eye movement features for each of the psychological exam content having the different stimulus styles, respectively, based on the acquired eye tracking data; and outputting characteristic data for the plurality of personality factors according to each of the extracted eye movement features based on learning data accumulated by machine learning, and providing psychological exam result data combined with the outputted characteristic data for the plurality of personality factors, wherein the learning data is accumulated based on training data labeled with the characteristic data for the plurality of personality factors acquired through a previously conducted psychological test via a questionnaire and the extracted eye movement features.

This type of mental process can be practically performed by a human psychologist; for instance, a psychologist would readily perform each of the claimed steps. The claimed method is akin to the mental processes of observing the patient, evaluating the psychological exam and the eye movements, and providing a result to the patient. The mere nominal recitation of computer hardware performing these steps does not take the claim limitations outside of the mental processes grouping. Thus, the claim recites a mental process (Step 2A, Prong 1: Yes).
Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

Per the 2019 Revised Patent Subject Matter Eligibility Guidance, if a claim as a whole integrates the recited judicial exception into a practical application of that exception, the claim is not "directed to" a judicial exception. Conversely, a claim that does not integrate a recited judicial exception into a practical application is directed to the exception. Evaluating whether a claim integrates an abstract idea into a practical application is performed by (a) identifying whether there are any additional elements recited in the claim beyond the abstract idea, and (b) evaluating those additional elements individually and in combination to determine whether they integrate the abstract idea into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. Exemplary considerations indicating that an additional element (or combination of elements) has or has not been integrated into a practical application are set forth in the 2019 PEG.

With respect to the instant claims, claims 12, 17, and 20 recite the additional element of a camera sensor. Claim 17 further recites at least one memory and at least one processor. Claim 20 further recites a non-transitory recording medium. The limitation of the camera configured to record the image of the user consists of mere data acquisition, which is extra-solution activity that does not provide a practical application of an abstract method. It is particularly noted that the use of at least one memory and at least one processor "as a tool" to perform an abstract method, and steps that only amount to extra-solution activity, are indicated in the 2019 PEG as examples of additional elements that have not been integrated into a practical application.
Even in combination, the recited additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits, such as an improvement to a computing system, on practicing the abstract idea (Step 2A, Prong 2: No).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Claims 12, 17, and 20 recite the additional element of a camera; claim 17 recites at least one memory and at least one processor; and claim 20 recites a non-transitory recording medium, as set forth above for Step 2A, Prong 2. Regarding these limitations, Applicant's specification states "each mounted on a camera sensor or capable of being connected to an external camera, since the psychological exam is carried out by tracking a user's eye gaze" (Applicant's published application, para. [0037]), and "The processor 730 may control a series of processes in which the psychological exam system operates according to the foregoing embodiment of the present disclosure. For example, elements of the psychological exam system may be controlled to perform an operation of the psychological exam system according to one embodiment. There may be a plurality of processors 730, and the processor 730 may perform the operation of the psychological exam system through executing a program stored in the memory 720" (Applicant's published application, para. [0092]). There is no indication in the specification that Applicant has achieved an advancement or improvement in image recognition and psychological examination.

Dependent claims 14-15, 19, and 22-30 inherit the deficiencies of their respective parent claims through their dependencies and do not recite additional limitations sufficient to direct the claims to more than the claimed abstract idea; they are thus rejected for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 12, 14, 17, 19, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Khaderi et al. (US 2017/0293356 A1) in view of Wall et al. (US 2021/0133509 A1).

Re claims 12, 17, and 22: Khaderi et al. (US 2017/0293356 A1) teaches 12.
A method of operating a psychological exam system based on artificial intelligence (Khaderi, Abstract; [0186], “machine learning”; [0291], “image processing and analysis, relying on machine learning or deep learning applications”), the method comprising: sequentially providing psychological exam content having different stimulus styles corresponding to an emotional task, a cognitive style task, and an anti-saccade task, to respectively measure a plurality of personality factors, the plurality of personality factors including a first personality factor measured sensitively by the emotional task, a second personality factor measured sensitively by the cognitive style task, and a third personality factor measured sensitively by the anti-saccade task (Khaderi, [0411], “image 1004 is selected to extract face attributes for analyzing emotions of the subject”; [0054], “determining a degree of decrease in PTSD symptoms of the user”; fig. 16, “Focused Attention Score, Divided Attention Score … Reaction Score”; [0463], “Multi-tracking (M) may represent the ability of the user to sense multiple objects at the same time. Divided attention tasks may require user to act upon multiple things happening at once”; fig. 13, “Anti-Saccade”; [0371], “anti-saccade test”); acquiring eye tracking data for each of the provided psychological exam content through a camera sensor in real-time (Khaderi, [0016], “at least one of a camera configured to acquire eye movement data”; [0299], “a sensor data stream 202 comprising sensor data collected from users in real time is provided to a real time layer 204”); extracting eye movement features for each of the psychological exam content having the different stimulus styles, respectively, based on the acquired eye tracking data (Khaderi, [0371], “The anti-saccade test may be performed using the standardized set of visual stimuli. 
The results of the anti-saccade test may comprise, for example, mean reaction times as described above for the pro-saccade test, with typical mean reaction times falling into the range of roughly 190 to 270 ms. Other results may include initial direction of eye motion, final eye resting position, time to final resting position, initial fovea distance (i.e., how far the fovea moves in the direction of the flashed target), final fovea resting position, and final fovea distance (i.e., how far the fovea moves in the direction of the desired focus point)”; [0374]; [0381]); and outputting characteristic data for the plurality of personality factors according to each of the extracted eye movement features based on learning data accumulated by machine learning (Khaderi, [0224]; [0365]; [0417], “Customized machine learning algorithms may be created to predict key parameters ranging from blink rate, fatigue, emotions, gaze direction, attention, phorias, convergence, divergence, fixation, gaze direction, pupil size, and others”; [0472]). 17. 
A psychological exam system (Khaderi, Abstract; [0186], “machine learning”; [0291], “image processing and analysis, relying on machine learning or deep learning applications”), the system comprising: at least one memory that stores a program for a psychological exam; and at least one processor for running the program (Khaderi, [0014]) and configured to: sequentially provide psychological exam content having different stimulus styles corresponding to an emotional task, a cognitive style task, and an anti-saccade task, to respectively measure a plurality of personality factors, the plurality of personality factors including a first personality factor measured sensitively by the emotional task, a second personality factor measured sensitively by the cognitive style task, and a third personality factor measured sensitively by the anti-saccade task (Khaderi, [0411], “image 1004 is selected to extract face attributes for analyzing emotions of the subject”; [0054], “determining a degree of decrease in PTSD symptoms of the user”; fig. 16, “Focused Attention Score, Divided Attention Score … Reaction Score”; [0463], “Multi-tracking (M) may represent the ability of the user to sense multiple objects at the same time. Divided attention tasks may require user to act upon multiple things happening at once”; fig. 13, “Anti-Saccade”; [0371], “anti-saccade test”); acquire eye tracking data for each of the provided psychological exam content through a camera sensor in real-time (Khaderi, [0016], “at least one of a camera configured to acquire eye movement data”; [0299], “a sensor data stream 202 comprising sensor data collected from users in real time is provided to a real time layer 204”); extract eye movement features for each of the psychological exam content having the different stimulus styles, respectively, based on the acquired eye tracking data (Khaderi, [0371], “The anti-saccade test may be performed using the standardized set of visual stimuli. 
The results of the anti-saccade test may comprise, for example, mean reaction times as described above for the pro-saccade test, with typical mean reaction times falling into the range of roughly 190 to 270 ms. Other results may include initial direction of eye motion, final eye resting position, time to final resting position, initial fovea distance (i.e., how far the fovea moves in the direction of the flashed target), final fovea resting position, and final fovea distance (i.e., how far the fovea moves in the direction of the desired focus point)”; [0374]; [0381]); and output characteristic data for the plurality of personality factors according to each of the extracted eye movement features based on learning data accumulated by machine learning (Khaderi, [0224]; [0365]; [0417], “Customized machine learning algorithms may be created to predict key parameters ranging from blink rate, fatigue, emotions, gaze direction, attention, phorias, convergence, divergence, fixation, gaze direction, pupil size, and others”; [0472]). 22. 
A computer program product (Khaderi, Abstract; [0186], “machine learning”; [0291], “image processing and analysis, relying on machine learning or deep learning applications”) comprising: a non-transitory recording medium on which a program for executing a method of operating a psychological exam system is stored, wherein the method (Khaderi, [0014]) comprises: sequentially providing psychological exam content having different stimulus styles corresponding to an emotional task, a cognitive style task, and an anti-saccade task, to respectively measure a plurality of personality factors, the plurality of personality factors including a first personality factor measured sensitively by the emotional task, a second personality factor measured sensitively by the cognitive style task, and a third personality factor measured sensitively by the anti-saccade task (Khaderi, [0411], “image 1004 is selected to extract face attributes for analyzing emotions of the subject”; [0054], “determining a degree of decrease in PTSD symptoms of the user”; fig. 16, “Focused Attention Score, Divided Attention Score … Reaction Score”; [0463], “Multi-tracking (M) may represent the ability of the user to sense multiple objects at the same time. Divided attention tasks may require user to act upon multiple things happening at once”; fig. 13, “Anti-Saccade”; [0371], “anti-saccade test”); acquiring eye tracking data for each of the provided psychological exam content through a camera sensor in real-time (Khaderi, [0016], “at least one of a camera configured to acquire eye movement data”; [0299], “a sensor data stream 202 comprising sensor data collected from users in real time is provided to a real time layer 204”); extracting eye movement features for each of the psychological exam content having the different stimulus styles, respectively, based on the acquired eye tracking data (Khaderi, [0371], “The anti-saccade test may be performed using the standardized set of visual stimuli. 
The results of the anti-saccade test may comprise, for example, mean reaction times as described above for the pro-saccade test, with typical mean reaction times falling into the range of roughly 190 to 270 ms. Other results may include initial direction of eye motion, final eye resting position, time to final resting position, initial fovea distance (i.e., how far the fovea moves in the direction of the flashed target), final fovea resting position, and final fovea distance (i.e., how far the fovea moves in the direction of the desired focus point)”; [0374]; [0381]); and outputting characteristic data for the plurality of personality factors according to each of the extracted eye movement features based on learning data accumulated by machine learning (Khaderi, [0224]; [0365]; [0417], “Customized machine learning algorithms may be created to predict key parameters ranging from blink rate, fatigue, emotions, gaze direction, attention, phorias, convergence, divergence, fixation, gaze direction, pupil size, and others”; [0472]). Khaderi does not explicitly disclose providing psychological exam result data combined with the outputted characteristic data for the plurality of personality factors, wherein the learning data is accumulated based on training data labeled with the characteristic data for the plurality of personality factors acquired through a previously conducted psychological test via a questionnaire and the extracted eye movement features. Wall et al. (US 2021/0133509 A1) teaches systems, devices, methods and media for model optimization and data analysis using machine learning. Input data can be processed and analyzed to identify relevant discriminating features, which can be modeled using a plurality of machine learning models (Wall, Abstract). 
Wall teaches outputting characteristic data for the plurality of personality factors according to each of the extracted eye movement features based on learning data accumulated by machine learning (Wall, [0436], “Examples of active data collection comprise devices, devices or methods for tracking eye movements … Video recording of subject's face during activity (for example, quality/quantity of eye fixations vs saccades, heat map of eye focus on the screen, focus/attention span, variability of facial expression”; [0022], “Such data analysis can include artificial intelligence, including for example machine learning, and/or statistical models to assess user data and user profiles to further personalize, improve or assess efficacy of the therapeutic interventions”; [0089], “classifier trained using machine learning to categorize said face as exhibiting said emotion”), and providing psychological exam result data combined with the outputted characteristic data for the plurality of personality factors (Wall, [0023], “a device or methods for digitally collecting information and processing and evaluating the provided data to improve the medical, psychological, or physiological state of an individual”; [0434], “comprising collecting digital information and processing and evaluating the provided data to improve the medical, psychological, or physiological state of an individual”; [0439]), wherein the learning data is accumulated based on training data labeled with the characteristic data for the plurality of personality factors acquired through a previously conducted psychological test via a questionnaire and the extracted eye movement features (Wall, [0320], “An expression of an individual interacted with by the digital therapy recipient is assessed using a machine learning classifier and when a face is classified as expressing an emotion”; [0328], “The emotion detection system includes artificial intelligence or machine learning model(s) trained to identify the emotional 
or social cues”; [0449], “Feedback can include performance scores on various activities or games, indicating whether an emotional response is correct, explanation of incorrect answers”; [0377], “video analysis for the determination of feature values may be performed by a machine. For example, the video analysis may comprise detecting objects (e.g., subject, subject's spatial position, face, eyes, mouth, hands, limbs, fingers, toes, feet, etc.)”; [0448], “real-time machine learning-based classification of commonly used emotions”; [0285], “A most predictive next question can be identified after each prior question is answered”; [0358], “prediction module can load a previously saved assessment model”).

Therefore, in view of Wall, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Khaderi by providing the questionnaire and previously conducted psychological test as taught by Wall, since it was known in the art to provide a questionnaire to assess a person's emotional state and combine it with facial expression (i.e., gaze) (Wall, [0487], “Data can comprise information collected through diagnostic tests, diagnostic questions, or questionnaires (2605). In some instances, data from diagnostic tests (2605) can comprise data collected from a secondary observer (e.g. a parent, guardian, or individual that is not the subject being analyzed). Data can include active data sources (2610), for example data collected from devices configured for tracking eye movement, or measuring or analyzing speech patterns”).

Re claim 14: Khaderi teaches 14.
The method of claim 12, wherein the sequentially providing psychological exam content comprises: providing image-based psychological exam content for emotional stimulus for the first personality factor measured relatively sensitively according to the emotional task (Khaderi, [0284], “may be defined by the position and extent of various visual stimuli and/or may be later derived after data collection by image processing analysis identifying contiguous, relevant and/or salient areas … may be used to identify regions of interest (e.g. an area where a user tends to fixate is defined by gaze position data)”); providing image and text-based psychological exam content for information processing to determine a preference of object and verbal styles for the second personality factor measured relatively sensitively according to the cognitive style task (Wall, [0320], “An expression of an individual interacted with by the digital therapy recipient is assessed using a machine learning classifier and when a face is classified as expressing an emotion”; [0328], “The emotion detection system includes artificial intelligence or machine learning model(s) trained to identify the emotional or social cues”; [0449], “Feedback can include performance scores on various activities or games, indicating whether an emotional response is correct, explanation of incorrect answers”; [0377], “video analysis for the determination of feature values may be performed by a machine. 
For example, the video analysis may comprise detecting objects (e.g., subject, subject's spatial position, face, eyes, mouth, hands, limbs, fingers, toes, feet, etc.)”; [0448], “real-time machine learning-based classification of commonly used emotions”; [0285], “A most predictive next question can be identified after each prior question is answered”; [0358], “prediction module can load a previously saved assessment model”); and providing target image-based psychological exam content to induce an eye movement for the third personality factor measured relatively sensitively according to the anti-saccade task (Khaderi, [0371], “an anti-saccade eye tracking test may be performed. The anti-saccade test measures the amount of time required for an individual to shift his or her gaze from a stationary object away from a flashed target, towards a desired focus point”; [0358], “anti-saccadic movements (toward un-intended target), the amount of anti-saccadic error (time and direction from intended to unintended target)”).

Re claim 19: Khaderi teaches 19. The system of claim 17, wherein the at least one processor is further configured to: provide image-based psychological exam content for emotional stimulus for the first personality factor measured relatively sensitively according to the emotional task (Khaderi, [0284], “may be defined by the position and extent of various visual stimuli and/or may be later derived after data collection by image processing analysis identifying contiguous, relevant and/or salient areas … may be used to identify regions of interest (e.g.
an area where a user tends to fixate is defined by gaze position data)”); provide image and text-based psychological exam content for information processing to determine a preference of object and verbal styles for the second personality factor measured relatively sensitively according to the cognitive style task (Wall, [0320], “An expression of an individual interacted with by the digital therapy recipient is assessed using a machine learning classifier and when a face is classified as expressing an emotion”; [0328], “The emotion detection system includes artificial intelligence or machine learning model(s) trained to identify the emotional or social cues”; [0449], “Feedback can include performance scores on various activities or games, indicating whether an emotional response is correct, explanation of incorrect answers”; [0377], “video analysis for the determination of feature values may be performed by a machine. For example, the video analysis may comprise detecting objects (e.g., subject, subject's spatial position, face, eyes, mouth, hands, limbs, fingers, toes, feet, etc.)”; [0448], “real-time machine learning-based classification of commonly used emotions”; [0285], “A most predictive next question can be identified after each prior question is answered”; [0358], “prediction module can load a previously saved assessment model”); and provide target image-based psychological exam content to induce an eye movement for the third personality factor measured relatively sensitively according to the anti-saccade task (Khaderi, [0371], “an anti-saccade eye tracking test may be performed. The anti-saccade test measures the amount of time required for an individual to shift his or her gaze from a stationary object away from a flashed target, towards a desired focus point”; [0358], “anti-saccadic movements (toward un-intended target), the amount of anti-saccadic error (time and direction from intended to unintended target)”).

Re claim 23: 23.
The method of claim 12, wherein the acquiring the eye tracking data for each of the provided psychological exam content through the camera sensor in real-time includes sensing an eye movement (Khaderi, [0016], “at least one of a camera configured to acquire eye movement data”; [0299], “a sensor data stream 202 comprising sensor data collected from users in real time is provided to a real time layer 204”), a gaze direction, and a gaze duration while each of the provided psychological exam content is displayed (Khaderi, [0290], “The data may be combined further with estimated gaze direction from eye tracking and this may be further converted into retinal coordinates”; [0237], “Fixation Duration”).

Re claim 24: 24. The system of claim 17, wherein the eye tracking data is acquired by sensing an eye movement, a gaze direction, and a gaze duration while each of the provided psychological exam content is displayed (Khaderi, [0016], “at least one of a camera configured to acquire eye movement data”; [0299], “a sensor data stream 202 comprising sensor data collected from users in real time is provided to a real time layer 204”; [0290], “The data may be combined further with estimated gaze direction from eye tracking and this may be further converted into retinal coordinates”; [0237], “Fixation Duration”).

Claims 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Khaderi and Wall as applied to claims 12 and 17 above, and further in view of Hill (US 2012/0002848 A1) and Burmistrov et al. (US 2022/0101873 A1).

Re claims 15 and 20: Khaderi does not explicitly disclose 15.
The method of claim 12, wherein the first personality factor measured relatively sensitively according to the emotional task comprises neuroticism, extraversion, and agreeableness, wherein the second personality factor measured relatively sensitively according to the cognitive style task comprises extraversion, openness, agreeableness, and conscientiousness, and wherein the third personality factor measured relatively sensitively according to the anti-saccade task comprises honesty. Hill teaches 15. The method of claim 13, wherein the personality factor measured relatively sensitively according to the emotional task comprises neuroticism, extraversion, and agreeableness (Hill, fig. 2a; [0067]), wherein the personality factor measured relatively sensitively according to the cognitive style task comprises extraversion, openness, agreeableness, and conscientiousness (Hill, fig. 2a; [0067]). 20. The system of claim 18, wherein the personality factor measured relatively sensitively according to the emotional task comprises neuroticism, extraversion, and agreeableness, wherein the personality factor measured relatively sensitively according to the cognitive style task comprises extraversion, openness, agreeableness, and conscientiousness (Hill, fig. 2a; [0067]). Burmistrov teaches wherein the third personality factor measured relatively sensitively according to the anti-saccade task comprises honesty (Burmistrov, [0066], “speaker physiology analysis module 240 may process other sensor data, such as visual data from forward-facing cameras, that is associated with speaker 160 in order to identify non-verbal cues. For example, speaker physiology analysis module 240 could perform various computer vision (CV) algorithms to classify human physiological data as indicating truthfulness, deceit, indecisiveness, and so forth. 
In such instances, speaker physiology analysis module 240 may identify a particular set of physiological data (e.g., a blink time of over one second, eye gaze direction and direction changes, rapid blink rate, eye saccades, etc.) and classify the physiological data as indicating deceit”).

Therefore, in view of Hill and Burmistrov, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Khaderi by assessing the Big Five factor model as taught by Hill and the honesty level as taught by Burmistrov, in order to provide a user with feedback regarding the likelihood that the respondent is factually true (Burmistrov, [0017]), and because there are numerous advantages to the Big Five Model. It predicts human behavior in diverse settings and situations. It is genetically heritable and stable across the life span. The traits are also universal across cultures, and each of the traits is statistically independent from the others. Moreover, in linking personality traits to lasting habits and reactions to specific stimuli, the Big Five Model provides a reliable, intimate reading of buyer attitudes and decision-making styles, receptivity to advertising styles and content, and aids in prediction of offer usage rates and degrees of loyalty based on the emotional context or aspects of a marketing effort (Hill, [0018]).

Claims 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Khaderi and Wall as applied to claims 12 or 17 above, and further in view of Kapoula (US 2023/0225611 A1).

Re claims 25-26: Khaderi teaches 25. The method of claim 12, wherein the eye movement features in the emotional task include a fixation rate (FR) (Khaderi, [0113], “fixation rate”), a fixation duration (FD) (Khaderi, [0031]), a saccade fixation rate (SFR) (Khaderi, [0031], “saccade rate”). 26.
The system of claim 17, wherein the eye movement features in the emotional task include a fixation rate (FR) (Khaderi, [0113], “fixation rate”), a fixation duration (FD) (Khaderi, [0031]), a saccade fixation rate (SFR) (Khaderi, [0031], “saccade rate”). Khaderi does not explicitly disclose a mean saccade amplitude (MSA), a mean saccade peak velocity (MSPV), a right large saccade (RLS), and a left large saccade (LLS). Kapoula teaches a method of processing data representative of a person’s binocular motricity. Kapoula teaches the system/method, wherein the eye movement features in the emotional task include a fixation rate (FR), a fixation duration (FD), a saccade fixation rate (SFR), a mean saccade amplitude (MSA) (Kapoula, [0019] – [0020]), a mean saccade peak velocity (MSPV) (Kapoula, [01293]), a right large saccade (RLS), and a left large saccade (LLS) (Kapoula, [0081]). Therefore, in view of Kapoula, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and system described in Khaderi, by providing the saccade parameters as taught by Kapoula, since the parameters are characteristic of a possible pathology of a person, in particular learning disorders such as dyslexia or vertigo (Kapoula, [0012]).

Claims 27 – 28 are rejected under 35 U.S.C. 103 as being unpatentable over Khaderi and Wall as applied to claims 12 or 17 above, and further in view of Maltz (US 2018/0260024 A1).

Re claims 27 – 28: Khaderi teaches 27. The method of claim 12, wherein the eye movement features related to an area of interest (AOI). 28. The system of claim 17, wherein the eye movement features related to an area of interest (AOI) (Khaderi, [0249], “saccades within a particular region”). Khaderi does not explicitly disclose wherein the eye movement features … include a dwell time and a revisit. Maltz teaches a user-worn eye-tracking device, method, and system (Maltz, Abstract).
Maltz teaches the system/method, wherein the eye movement features related to an area of interest (AOI) designated in the cognitive style task include a dwell time and a revisit (Maltz, [0127]). Therefore, in view of Maltz, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and system described in Khaderi, by providing the AOI parameters as taught by Maltz, since determining whether the eye is revisiting particular target regions, whether the user's eye has a tendency to visit the target region just prior to the main incidence of target fixation, and the dwell time on the target gives important information that the eye is fixing on a particular target (Maltz, [0127]).

Claims 29 – 30 are rejected under 35 U.S.C. 103 as being unpatentable over Khaderi and Wall as applied to claims 12 or 17 above, and further in view of Krukowski et al. (US 2021/0330185 A1).

Re claims 29 – 30: Khaderi teaches the method/system, wherein the eye movement features in the anti-saccade task include a response time and error rate used to measure a degree of a context-inappropriate reaction inhibition (Khaderi, [0374], “Saccades require time to plan and execute, and a delay, or latency”). Khaderi does not explicitly disclose DLPFC. Krukowski teaches a diagnostic device for use in performing refractive errors assessment and neurodegenerative disorders screening (Krukowski, Abstract). Krukowski teaches 29. The method of claim 12, wherein the eye movement features in the anti-saccade task include a response time and error rate used to measure a degree of a context-inappropriate reaction inhibition function of a dorsolateral prefrontal cortex (DLPFC). 30.
The system of claim 17, wherein the eye movement features in the anti-saccade task include a response time and error rate used to measure a degree of a context-inappropriate reaction inhibition function of a dorsolateral prefrontal cortex (DLPFC) (Krukowski, [0218]). Therefore, in view of Krukowski, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and system described in Khaderi, by providing DLPFC as taught by Krukowski, since it was known in the art that the functions of the dorsolateral prefrontal cortex (DLPFC) are related to language processing ability.

Response to Arguments

Applicant’s arguments with respect to claim(s) 12, 14-15, 17, 19-20 and 22-30 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Regarding the § 101 rejections: Applicant argues that Example 47 is patent eligible because it recited additional elements, such as detecting anomalies in network traffic using the trained artificial intelligence (AI), detecting a source address in real time, and blocking future traffic, which, when considered in combination, integrated the judicial exception into a practical application by improving the functioning of a computer or a technical field. The examiner has not found any similarity between the claims here and the claims of Example 47 of the July Update that detect malicious network packets and network intrusion. Applicant has not demonstrated any similarity between the claims here and Example 47. Unlike Example 47, Applicant’s claims here claim an improvement in the psychological exam itself and apply specific mathematical operations to accomplish that practical application.
Thus, the claims are not merely using generic machine learning for an otherwise mental process but instead are directed to a technical improvement in the machine learning process itself.

Applicant argues: Similar to Example 48, the claimed method and system recite an additional element involving the specific use of an AI or machine learning model to process complex, dynamic, and continuously acquired sensory data generated by humans (eye tracking data in amended claim 12, and mixed speech signals in Example 48) to output characteristic data for the plurality of personality factors according to the extracted eye movement features. The limitations of the eye tracking device (i.e., camera) configured to track the eye movement consist of steps of mere data acquisition, which is extra-solution activity that does not provide a practical application of an abstract method. Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B. Here, the camera sensor was considered to be extra-solution activity in Step 2A, Prong 2, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, and conventional activity in the field. The Symantec, TLI Communications, OIP Techs., and buySAFE court decisions cited in MPEP 2106.05(d)(II) indicate that transmitting data, including sending messages, is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here in the recitation of the notification unit generating notifications and the biological and current position acquiring units.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP whose telephone number is (571)270-5048. The examiner can normally be reached Monday thru Friday; 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACK YIP/Primary Examiner, Art Unit 3715
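The eye-movement features recited in claims 25-26 (fixation rate, fixation duration, mean saccade amplitude, mean saccade peak velocity) can be made concrete with a short sketch. This is purely illustrative and not taken from the application or any cited reference: the `GazeSample` structure, the 30 deg/s velocity threshold, and the treatment of each super-threshold inter-sample interval as one saccade are all assumptions made for the sketch.

```python
# Illustrative sketch only: one conventional way to derive the claimed
# eye-movement features from a gaze trace. Field names, the 30 deg/s
# I-VT-style velocity threshold, and the one-interval-per-saccade
# simplification are assumptions, not from the application.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # time, seconds
    x: float  # horizontal gaze position, degrees of visual angle
    y: float  # vertical gaze position, degrees

def classify(samples, vel_threshold=30.0):
    """Label each inter-sample interval as 'saccade' or 'fixation' by velocity."""
    events = []
    for a, b in zip(samples, samples[1:]):
        dt = b.t - a.t
        amp = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5  # degrees
        vel = amp / dt if dt > 0 else 0.0                   # deg/s
        events.append(("saccade" if vel > vel_threshold else "fixation", dt, amp, vel))
    return events

def features(samples):
    """Compute FR, FD, MSA, and MSPV as named in claims 25-26."""
    events = classify(samples)
    total_time = samples[-1].t - samples[0].t
    fix = [e for e in events if e[0] == "fixation"]
    sac = [e for e in events if e[0] == "saccade"]
    return {
        "FR": len(fix) / total_time,                        # fixation rate, per second
        "FD": sum(e[1] for e in fix) / max(len(fix), 1),    # mean fixation duration, s
        "MSA": sum(e[2] for e in sac) / max(len(sac), 1),   # mean saccade amplitude, deg
        "MSPV": sum(e[3] for e in sac) / max(len(sac), 1),  # mean saccade peak velocity, deg/s
    }
```

A production eye tracker would merge consecutive saccade intervals into single saccade events and smooth the velocity signal; the point here is only that each claimed feature is a simple statistic over a segmented gaze trace.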

Prosecution Timeline

Oct 15, 2024
Application Filed
May 05, 2025
Examiner Interview (Telephonic)
May 07, 2025
Non-Final Rejection — §101, §103
Aug 12, 2025
Response Filed
Nov 15, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588859
SYSTEM AND METHOD FOR INTERACTING WITH HUMAN BRAIN ACTIVITIES USING EEG-FNIRS NEUROFEEDBACK
2y 5m to grant · Granted Mar 31, 2026
Patent 12592160
System and Method for Virtual Learning Environment
2y 5m to grant · Granted Mar 31, 2026
Patent 12558290
BLOOD PRESSURE LOWERING TRAINING DEVICE
2y 5m to grant · Granted Feb 24, 2026
Patent 12525140
SYSTEMS AND METHODS FOR PROGRAM TRANSMISSION
2y 5m to grant · Granted Jan 13, 2026
Patent 12512012
SYSTEM FOR EVALUATING RADAR VECTORING APTITUDE
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
70%
With Interview (+37.6%)
4y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
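As a rough illustration of how the projection figures relate to the raw counts, here is a minimal sketch; it is my own reconstruction, not the tool's actual model. Only the 229/702 count comes from this page, and the with/without-interview rates below are hypothetical inputs chosen to reproduce the displayed 70% and +37.6% lift.

```python
# Minimal sketch, not the tool's actual model. Only granted/resolved
# (229/702) comes from the page; the interview-split rates are
# hypothetical inputs chosen to match the displayed figures.
granted, resolved = 229, 702
career_allow_rate = granted / resolved  # ~0.326, shown on the page as 33%

# Hypothetical allowance rates split by whether an interview was held.
rate_with_interview = 0.700     # shown as 70% "With Interview"
rate_without_interview = 0.324
interview_lift = rate_with_interview - rate_without_interview  # ~0.376, i.e. +37.6%

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"interview lift:    {interview_lift:+.1%}")
```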
