Prosecution Insights
Last updated: April 19, 2026
Application No. 18/972,781

NEURO-OPHTHALMIC RISK ASSESSMENT

Non-Final OA (§101, §112)
Filed
Dec 06, 2024
Examiner
MORICE DE VARGAS, SARA JESSICA
Art Unit
3681
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Locaze LLC
OA Round
3 (Non-Final)
8%
Grant Probability
At Risk
3-4
OA Rounds
3y 11m
To Grant
32%
With Interview

Examiner Intelligence

Grants only 8% of cases
8%
Career Allow Rate
2 granted / 26 resolved
-44.3% vs TC avg
Strong +24% interview lift
+24.2%
Interview Lift
resolved cases with interview
Typical timeline
3y 11m
Avg Prosecution
33 currently pending
Career history
59
Total Applications
across all art units
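The headline examiner metrics above are simple ratios over resolved cases. A minimal sketch of that arithmetic follows; the 2 granted / 26 resolved counts come from this page, but the with/without-interview rates passed to `interview_lift` are invented for illustration, since the page reports only the aggregate +24.2% lift.

```python
# Sketch of the arithmetic behind the dashboard's examiner metrics.
# Counts (2 granted / 26 resolved) are from the page; the example
# with/without-interview rates below are invented for illustration.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance rate when an interview was held."""
    return rate_with - rate_without

career = allow_rate(2, 26)
print(f"Career allow rate: {career:.1f}%")  # 7.7%, displayed rounded as 8%
```

The 8% shown on the card is this 7.7% figure rounded to a whole percentage.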

Statute-Specific Performance

§101
35.7%
-4.3% vs TC avg
§103
34.4%
-5.6% vs TC avg
§102
8.8%
-31.2% vs TC avg
§112
20.7%
-19.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 26 resolved cases
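Each "vs TC avg" delta is just the examiner's per-statute overcome rate minus the Tech Center average. In the sketch below the 40.0% Tech Center average is back-derived from the displayed rate/delta pairs (e.g. 35.7% - (-4.3%) = 40.0%), not an independently sourced figure; as the caption notes, it is an estimate.

```python
# Reproducing the "vs TC avg" deltas shown above. The 40.0% Tech
# Center average per statute is back-derived from the displayed
# numbers and is an estimate, per the chart caption.

overcome_rate = {"101": 35.7, "103": 34.4, "102": 8.8, "112": 20.7}
tc_average = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

for statute, rate in overcome_rate.items():
    delta = rate - tc_average[statute]
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```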

Office Action

§101 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed 12/08/2025 has been entered.

Status of Claims

Claims 1-5, 7-14, and 16-20 are currently pending and have been examined. Claims 1-2, 4, 7, 10-12, 14, 16, and 19-20 have been amended. Claims 6 and 15 have been canceled. Claims 1-5, 7-14, and 16-20 have been rejected.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 11, and 20 further disclose, "determining, by the data processing circuitry using a combination of the parametric values and one or more Machine Learning (ML) models, one or more risk scores corresponding to the facial image of the user, wherein the one or more risk scores are determined based on baseline data comprising images of the user without any oculomotor impairment." The specification does not disclose how the machine learning (ML) model determines the risk score by comparing the user feature metrics with baseline feature metrics; it merely states that it does so. There is no disclosure of any boundary, range, threshold, or similar requirement used to categorize the comparison with the baseline into risk scores. At most, the specification indicates that there are only two possible risk scores, "risk" or "no risk"; however, there is still no disclosure of any range (or equivalent) used to determine "risk" or "no risk."

A limitation may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV. It is not enough that one skilled in the art could write a program to achieve the claimed function, because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01 (citing Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683 (Fed. Cir. 2015)). The instant application does not disclose the algorithm that the ML model uses to determine the risk scores (such as a boundary, range, or threshold, or any similar requirement in relation to the risk scores).

Claims 1, 11, and 20 also disclose, "generating, by the data processing circuitry based on the one or more risk scores, an output indicative of a measure of risk of neuro-ophthalmic impairment for the user." Claims 2 and 12 further disclose, "identify, based on the one or more parametric values using the one or more ML models, one or more impairment levels for one or more impairments." Aside from repeating the claim language, the specification does not disclose how the machine learning model determines the impairment level. There is no disclosure of any boundary, range, threshold, or similar requirement the ML model utilizes to determine the one or more impairment levels based on the parametric values. Paragraphs 40-41 at most disclose "impairment" or "no impairment"; however, there is still no disclosure of a range (or equivalent) used to determine "impaired" or "not impaired" for the "impairment level." While paragraph 76 discloses, "The non-impaired range may include a range of threshold value for each risk score of the risk score(s) corresponding to a neuro-ophthalmic test. The data comparator 308 may further determine whether risk score(s) are within the corresponding non-impaired range," it does not disclose an impairment level.

Claims 2 and 12 disclose, "identifying… based on parametric values using the one or more ML models, one or more impairment levels for one or more impairments…" There is no disclosure of any boundary, range, threshold, or similar requirement the ML model utilizes to determine the one or more impairment levels based on the parametric values. Instead, paragraph 76 discloses support for determining either "non-impaired" or "impaired" based on a range of threshold value for each risk score (not for the parametric values). As presented above, a limitation may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). The instant application does not disclose the algorithm that the ML model uses to determine the impairment level (such as a boundary, range, or threshold, or any similar requirement in relation to the impairment levels).

Claims 3-5, 7-10, 13-14, and 16-19 are rejected as dependent on a rejected base claim. Therefore, Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C.
101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to an abstract idea without significantly more.

Claims 1-5, 7-14, and 16-20 are directed to a system, method, or product, which are among the statutory categories of invention. (Step 1: YES).

Independent Claim 1 discloses a method for performing one or more neuro-ophthalmic risk assessments of a user, the method comprising: receiving, by data processing circuitry, at least one facial image of the user, wherein the at least one facial image is captured using a mobile device; determining, by the data processing circuitry based on the at least one facial image and using one or more machine-learning models, parametric values associated with neuro-ophthalmic tests indicative of accommodative sufficiency, vestibular ocular reflex, saccadic eye movement, smooth pursuit eye movement, and optokinetic nystagmus; determining, by the data processing circuitry using a combination of the parametric values and one or more Machine Learning (ML) models, one or more risk scores corresponding to the facial image of the user, wherein the one or more risk scores are determined based on baseline data comprising images of the user without any oculomotor impairment; generating, by the data processing circuitry based on the one or more risk scores, an output indicative of a measure of risk of neuro-ophthalmic impairment for the user; and providing the output for display on the mobile device.

Independent Claim 11 discloses a system to perform one or more neuro-ophthalmic risk assessments of a user, the system comprising: a database; and data processing circuitry coupled to the database, wherein the data processing circuitry is configured to: receive at least one facial image of the user, wherein the at least one facial image is captured using a mobile device; determine, based on the at least one facial image and using one or more machine-learning models, parametric values associated with neuro-ophthalmic tests indicative of accommodative sufficiency, vestibular ocular reflex, saccadic eye movement, smooth pursuit eye movement, and optokinetic nystagmus; determine, using a combination of the parametric values and one or more Machine Learning (ML) models, one or more risk scores corresponding to the facial image of the user, wherein the one or more risk scores are determined based on baseline data comprising images of the user without any oculomotor impairment; generate, based on the one or more risk scores, an output indicative of a measure of risk of neuro-ophthalmic impairment for the user; and provide the output for display on the mobile device.

Independent Claim 20 discloses a computer program product for one or more neuro-ophthalmic risk assessments of a user, the computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable medium and that, when executed by data processing circuitry, perform operations comprising: receiving at least one facial image of the user, wherein the at least one facial image is captured using a mobile device; determining, based on the at least one facial image and using one or more machine-learning models, parametric values associated with neuro-ophthalmic tests indicative of accommodative sufficiency, vestibular ocular reflex, saccadic eye movement, smooth pursuit eye movement, and optokinetic nystagmus; determining, using a combination of the parametric values and one or more Machine Learning (ML) models, one or more risk scores corresponding to the facial image of the user, wherein the one or more risk scores are determined based on baseline data comprising images of the user without any oculomotor impairment; generating, based on the one or more risk scores, an output indicative of a measure of risk of neuro-ophthalmic impairment for the user; and providing the output for display on the mobile device.

The examiner is interpreting the above bolded limitations as additional elements, as further discussed below. The remaining limitations are merely directed to instructions or rules to organize patients into different levels of risk. The series of steps recited above describe managing personal behavior or relationships or interactions between people and thus are grouped as certain methods of organizing human activity, which is an abstract idea. (Step 2A-Prong 1: YES. The claims are abstract). This judicial exception is not integrated into a practical application.
Limitations that are not indicative of integration into a practical application include: (1) adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

Independent Claim 1 discloses the following additional elements:
- Data processing circuitry
- A mobile device
- One or more Machine Learning (ML) models

Independent Claim 11 discloses the following additional elements:
- A database
- Data processing circuitry coupled to the database
- A mobile device
- One or more Machine Learning (ML) models

Independent Claim 20 discloses the following additional elements:
- A computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable medium
- Data processing circuitry
- A mobile device
- One or more Machine Learning (ML) models

In particular, the mobile device and one or more machine learning (ML) models (of claims 1, 11, and 20), the data processing circuitry (of claims 1 and 20), the database and data processing circuitry coupled to the database (of claim 11), and the computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable medium (of claim 20) are recited at a high level of generality such that they amount to no more than mere instructions to implement an abstract idea by adding the words "apply it" (or an equivalent) with the judicial exception.
Applicant's specification states at paragraph 39: "Examples of the user device 102 may include, but are not limited to, portable handheld electronic devices such as a mobile phone, a tablet, a laptop, a smart watch, etc., or fixed electronic devices such as a desktop computer, computing devices, etc. Aspects of the present disclosure are intended to include or otherwise cover any type of user devices, available now or later developed through advancement in technology, as the user device 102, without deviating from the scope of the present disclosure." Thus, the specification discloses various generic devices performing as expected. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claims 1, 11, and 20 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the mobile device and one or more machine learning (ML) models (of claims 1, 11, and 20), the data processing circuitry (of claims 1 and 20), the database and data processing circuitry coupled to the database (of claim 11), and the computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable medium (of claim 20) amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). MPEP 2106.05(I)(A) indicates that merely saying "apply it" (or equivalent) to the abstract idea cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, independent claims 1, 11, and 20 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more).

Dependent claims 2-5, 7-10, 12-14, and 16-19 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claims to a practical application or provide an inventive concept such that the claims are subject matter eligible, even when considered individually or as an ordered combination. Dependent claims 8-9 disclose a database, which is not disclosed in Independent Claim 1 from which they depend, but is evaluated above under Independent Claim 11 as amounting to no more than mere instructions to implement an abstract idea by adding the words "apply it" (or an equivalent) with the judicial exception. This evaluation holds for the database disclosed in Claims 8 and 9 and any dependent claims under Claim 11. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of the database (of claims 8 and 9) amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). Accordingly, this additional element does not provide significantly more. Therefore, dependent claims 2-5, 7-10, 12-14, and 16-19 are also directed to an abstract idea. Thus, Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Subject Matter Free of Prior Art

Carson (US PG Pub 2018/0296089 A1) discloses the following: Para 10 discloses the sensor data comprises images acquired by a camera on the portable computing device. Para 21 discloses the application provides an evaluation of the vestibulo-ocular reflex or oculomotor function of the subject, which can utilize additional equipment such as a headset, goggles, computers, or monitors (but it is not required). However, it does not disclose all five neuro-ophthalmic tests that are conducted via data processing circuitry based on at least one facial image that is captured using a mobile device.

Edmonds (US PG Pub 2023/0238143 A1) discloses the following: Paras 173-175 disclose these example neurocognitive test assessments, including oculomotor function tests [parametric values]… can be tested and collected using smart device 120 and associated application 124 to provide new and different inputs to the diagnosis and prognosis application 134 in the BRAINBox server 130 for use in generating a diagnostic score and prognostic risk scores… diagnostic score and prognostic risk scores for post-acute TBI symptom categories as measures of likelihood of patient outcomes. Fig. 2 and the associated paragraph disclose that, to determine a prognostic score, the secondary classification determinations are made in block 265.
In block 269, a secondary classification model is selected from classifiers 273, wherein examples of said classifiers include a random forest, logistic regression, logit boost, or extreme gradient boosting, as shown in block 273. Paras 66-68 and Fig. 2 disclose one or more stratification algorithms… used to place patients determined to be in the TBI category into at least one of the following three risk strata: i) TBI positive low risk for one or more post-acute symptoms [wherein Fig. 2 discloses the different "indicated symptoms" [impairment levels] as well as the symptom risk (measure of risk)]. Paras 169-170 disclose these systems are assessed by testing balance (vestibular component) or the way the eyes track a visual cue or stimulus, and test the smoothness of the oculomotor coordination and the ability to effectively accommodate while focusing on an object at changing distances. However, it does not disclose all five neuro-ophthalmic tests that are conducted via data processing circuitry based on at least one facial image that is captured using a mobile device.

Zakariaie (US PG Pub 2022/0313083 A1) discloses the following limitations: Para 31 discloses, "During data collection, video frames and quantitative eye data (X, Y gaze position plus pupil diameter) were measured continuously and stored for additional post-hoc analysis. In addition, we developed a package of analysis software written in Python and Matlab to extract a host of different features from the data and controlled environmental manipulations. Our software estimated the time course of the following features: Eye Movement, Gaze location X, Gaze location Y… Gaze Deviation (Polar Angle), Gaze Deviation (Eccentricity)… Scan Path (gaze trajectory over time)." Para 103 discloses, "in yet another embodiment, as shown in FIG. 4, a smart phone 26 could be used to record the necessary videos as many smart phones today have a forward-looking camera 16 and a rear looking camera 14… The mobile device 26 (smart phone or tablet) has a display screen 28 that can display to the user the various tasks. The computing device 20 can be one in the same as the mobile device 26 or the mobile device 26 can still send its video information for processing to an external computing device 20 as shown in FIG. 1."

Port (US PG Pub 2016/0213301 A1) discloses the following limitations: Para 8 discloses the user screen and operator screen provide either an indication of likely concussed or likely not concussed [non-impaired risk assessment report] based on the difference between the values of at least one measured variable [non-impaired range if difference is small]. Para 61 discloses computing device 146 is capable of accepting user input commands and user input data, and is capable of outputting data to screens 130 and/or 144, or other computing devices, by any combination of wired, wireless, and/or network connections. Paras 9-12 disclose some embodiments further include the step of providing a visualization unit for a user not suspected of suffering an mTBI which can track and record the user's eye movement data by a camera and a first computing device, wherein the user's eye movement data provides the user's unimpaired baseline score for the at least one variable… software-implemented logic to determine if the difference between the at least one measured variable of the user's eye movement between the user's unimpaired baseline score and the user's mTBI score is great enough to indicate a likelihood of an mTBI.

Bal (US PG Pub 2013/0179191 A1) discloses the following limitations: Paras 45-46 and Fig. 7 disclose the user has selected the Lab Data icon (FIG. 6, step 108), and has arrived at the Lab Data icon (step 108). The user may select the Lab Data icon (step 108), Get Lab Test Results icon (step 116), or else the Back icon (step 184). When the Lab Data icon (step 108) is pressed, the Lab Data Screen is displayed (step 114), which contains the name of the patient, DOB, date of the test, and location. The Display Lab Data Screen (step 114) also displays the Current Procedural Terminology (CPT) Codes, the actual test data with flags (normal, high, low), and the normal range. If the user selects the Get Lab Test Results icon (step 116), the Get Lab Test Results screen (step 118) is displayed… The Display Lab Icons screen (step 120) displays icons for current labs such as Lab A and Lab B, with an option to enter any Lab name.

Gonzalez Garcia (US PG Pub 2022/0013228 A1) discloses the following limitation: Para 53 discloses the virtual assistant 203 provides guidance to the user or a practitioner to carry out a particular eye test as in step 207. For example, the virtual assistant 203 provides instructions or guidance for the user to self-administer a visual acuity test, a contrast sensitivity test, a color sensitivity test, or a perimetry test. In one or more further configurations, the virtual assistant 203 provides the test guidance in the form of a tutorial or instructional video.

Fink (US Patent 10,842,373 B2) discloses a smartphone-based handheld ophthalmic examination device, but it does not disclose any of the five neuro-ophthalmic tests claimed.

PupilScreen, as taught by Mariakakis et al. (Mariakakis, A., Baudin, J., Whitmire, E., Mehta, V., Banks, M. A., Law, A., McGrath, L., & Patel, S. N. (2017). PupilScreen: Using smartphones to assess traumatic brain injury. Ubicomp Lab, University of Washington. https://ubicomplab.cs.washington.edu/publications/pupilscreen/), discloses using a smartphone to assess traumatic brain injury in order to address the need of identifying concussions immediately by using a technology that most people have within arm's reach: a smartphone. However, it does not disclose any of the five neuro-ophthalmic tests that are conducted via data processing circuitry based on at least one facial image that is captured using a mobile device as claimed; it measures a pupillary light reflex.

Mihali (US PG Pub 2020/0371587) discloses the following: Para 95 discloses after mild traumatic head injury (TBI) or concussion, common visual disorders that may ensue include convergence insufficiency (CI), accommodative insufficiency (AI), and mild saccadic dysfunction (SD). Since a mild concussion is frequently associated with abnormalities of saccades, pursuit eye movements, convergence, accommodation, and the vestibular-ocular reflex, testing or evaluating the vision system or eyes of an individual suspected of being cognitively impaired may be used to detect abnormalities in some of these aspects. For example, such tools may be highly beneficial, in some embodiments or applications, for a quick evaluation, assessment, or screening (e.g., in a clinical environment, in the field, and/or through other direct/remote configurations), especially when it may differentiate between mild and no concussion. Para 243 discloses the detection of a cognitive impairment in a patient may be based on running or executing a series of tests or assessments (for example, assessments 3905 to 3925) on refractor 3801 and associating with each individual assessment a quantitative value or "score" based on the degree of departure from a known baseline which would correspond to the values expected from a "normal" or cognitively unimpaired individual. Para 265 discloses, "method 4200, when in the context of a subjective vision test, begins once a refractor or phoropter 3001 is installed at a given location and a user or patient is given access to it. Then, at step 4205, phoropter 3001 establishes a network link to online portal 4105. In some embodiments, the user may be presented at 4010 with a choice of available service providers 4110 and/or lens maker 4120." However, it does not disclose all five neuro-ophthalmic tests that are conducted via data processing circuitry based on at least one facial image that is captured using a mobile device; Mihali utilizes a refractor or a phoropter.

In conclusion, while the above listed references disclose various embodiments related to different neuro-ophthalmic tests and potential use of a mobile device, there is no obvious combination that discloses: determining parametric values associated with neuro-ophthalmic tests (accommodative sufficiency, vestibular ocular reflex, saccadic eye movement, smooth pursuit eye movement, and optokinetic nystagmus) based on at least one facial image that is captured using a mobile device as claimed; determining risk scores corresponding to the facial image of the user based on baseline data comprising images of the user without any oculomotor impairment and in combination with the parametric values associated with those neuro-ophthalmic tests; and a measure of risk of neuro-ophthalmic impairment for the user based on the one or more risk scores.

Response to Arguments

Applicant's arguments filed 12/08/2025 with respect to 35 U.S.C. § 112(a) have been fully considered, but are not persuasive.
In regards to the risk scores of claims 1, 11, and 20, as previously presented, the specification does not disclose any sort of boundary, range, or threshold or any similar requirements the ML utilizes to determine the one or more risk scores based on the parametric values. The Applicant points to paragraphs 49 which discloses, “The first ML model 108-1 may be configured to determine a user feature metrics corresponding to the neuro-ophthalmic test(s) based on the parametric value(s), and the second ML model 108-2 may be configured to compare the user feature metrics with the baseline feature metrics to determine the risk score(s).” However, this paragraph does not disclose how the ML model determines the risk score by comparing the user feature metrics with a baseline feature metrics, it merely states that it does so. There is no disclosure of any boundary, range, threshold, or similar requirement used to categorize the comparison with the baseline into risk scores. At most, the specification indicates that there is only two possible risk scores which is “risk” or “no risk,” however, there still is no disclosure of any range (or equivalent) that is used to determine “risk” or “no risk.” A limitation may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV. 
It is not enough that one skilled in the art could write a program to achieve the claimed function, because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01 (citing Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683 (Fed. Cir. 2015)). The instant application does not disclose the algorithm that the ML model uses to determine the risk scores (such as a boundary, range, threshold, or similar requirement in relation to the risk scores). Thus, the 112(a) rejection of the risk scores is maintained.

With regard to the impairment level (of claims 2 and 12), as previously presented, the specification does not disclose any boundary, range, threshold, or similar requirement the ML model utilizes to determine the one or more impairment levels based on the parametric values. The Applicant points to paragraphs 75-77; while paragraph 76 discloses, “The non-impaired range may include a range of threshold value for each risk score of the risk score(s) corresponding to a neuro-ophthalmic test. The data comparator 308 may further determine whether risk score(s) are within the corresponding non-impaired range,” it does not disclose an impairment level. Claims 2 and 12 recite, “identifying… based on parametric values using the one or more ML models, one or more impairment levels for one or more impairments…” There is no disclosure of any boundary, range, threshold, or similar requirement the ML model utilizes to determine the one or more impairment levels based on the parametric values. Instead, paragraph 76 discloses support for determining either “non-impaired” or “impaired” based on a range of threshold value for each risk score (not for the parametric values).
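To illustrate the kind of algorithmic disclosure the examiner considers missing, a boundary-based comparison between user and baseline feature metrics might be sketched as follows. This is purely a hypothetical example for discussion: the test names track the claims, but the tolerance values, metric representation, and risk mapping are illustrative assumptions, not content from the specification.

```python
# Hypothetical sketch of a threshold-based risk-score determination of
# the type the examiner finds absent from the specification. All
# tolerance values and the deviation-based rule are assumptions.

NEURO_OPHTHALMIC_TESTS = [
    "accommodative_sufficiency",
    "vestibular_ocular_reflex",
    "saccadic_eye_movement",
    "smooth_pursuit",
    "optokinetic_nystagmus",
]

# Assumed per-test tolerance: how far a user feature metric may deviate
# from the user's own non-impaired baseline before being flagged.
TOLERANCES = {test: 0.15 for test in NEURO_OPHTHALMIC_TESTS}

def risk_scores(user_metrics: dict, baseline_metrics: dict) -> dict:
    """Map each test to 'risk' or 'no risk' by comparing the user
    feature metric against the baseline within a tolerance band."""
    scores = {}
    for test in NEURO_OPHTHALMIC_TESTS:
        deviation = abs(user_metrics[test] - baseline_metrics[test])
        # The specification states only that a comparison occurs; a
        # concrete disclosure would define this boundary explicitly.
        scores[test] = "risk" if deviation > TOLERANCES[test] else "no risk"
    return scores
```

Under the examiner's reasoning, it is precisely this boundary (the tolerance band above) whose absence from the specification supports the § 112(a) rejection.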
As presented above, a limitation may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). The instant application does not disclose the algorithm that the ML model uses to determine the impairment level (such as a boundary, range, threshold, or similar requirement in relation to the impairment levels). Thus, the 112(a) rejection of the impairment levels is maintained for claims 2 and 12.

Applicant’s arguments filed 12/08/2025 with respect to 35 U.S.C. § 112(b) have been fully considered and are persuasive. The previous 35 U.S.C. § 112(b) rejection in regard to the “neuro-ophthalmic risk” has been withdrawn.

Applicant’s arguments filed 12/08/2025 with respect to 35 U.S.C. § 101 have been fully considered, but are not persuasive. Applicant argues that the claims recite specific technological solutions that allow for detection of risks of neuro-ophthalmic impairment for a user using trained machine learning models and “baseline data comprising images of the user without any oculomotor impairment” and are therefore not directed to certain methods of organizing human activity. The Examiner respectfully disagrees. MPEP 2106.04(a)(2)(II) states that a claimed invention is directed to certain methods of organizing human activity if the identified claim elements contain limitations that encompass fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
The Examiner submits that the identified claim elements represent a series of rules or instructions that a person or persons, with or without the aid of a computer, would follow to organize patients into different levels of risk. Applicant has not pointed to anything in the claims that falls outside of this characterization. Because the claim elements fall under a series of rules or instructions that a person or persons would follow to organize patients into different levels of risk, the claimed invention is directed to an abstract idea.

Further, MPEP 2106.04(d)(1) and MPEP 2106.05(a) indicate that a practical application may be present where the claimed invention provides a technical solution to a technical problem. See, e.g., DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259 (Fed. Cir. 2014) (finding that claiming a website that retained the “look and feel” of a host webpage provided a technological solution to the problem of retention of website visitors by utilizing a website descriptor that emulated the “look and feel” of the host webpage, where the problem arose out of the internet and was thus a technical problem). Here, the Applicant’s argued problem is not a technological problem caused by the technological environment to which the claims are confined (the data processing circuitry). A need for more accurate and reliable detection of risks of neuro-ophthalmic impairment (on mobile devices) for a user is not a problem caused by the data processing circuitry that is involved in the process. At best, Applicant’s identified problem is a business problem. Because no technological problem is present, the claims do not provide a practical application. Therefore, this argument is not persuasive.

Further, the Examiner notes that the Applicant does not claim that the mobile device detects the risk of neuro-ophthalmic impairment. The claim language merely states that the facial image is captured using a mobile device.
Broadly, the facial image could be taken using a camera of a mobile device and transmitted to data processing circuitry located in a hospital. The Examiner wishes to draw the Applicant’s attention to this in view of the discussion during the interview on 11/12/2025. As noted in the Interview Summary, the Examiner suggested in the interview clarifying the claims to make clear that this invention is for immediate evaluation of an athlete on the side of the field in a mobile application on a portable device, and not for an athlete who has traveled to a hospital for analysis with the hospital machines, as would be done traditionally. As currently claimed, this is not clear. The limitation at the end of claim 1, displaying the output on the mobile device, does not clarify this, as the results from the hospital could be accessed through an email on a mobile phone (for example). It is not clear that the Applicant intends this to be a mobile application for immediate evaluation of an athlete on the side of the field.

Applicant’s arguments filed 12/08/2025 with respect to 35 U.S.C. § 103 have been fully considered and are persuasive regarding the newly added limitations. Therefore, the previous 35 U.S.C. § 103 rejection has been withdrawn.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARA J MORICE DE VARGAS, whose telephone number is (703) 756-4608. The examiner can normally be reached M-F, 8:30 am-5:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter H. Choi, can be reached at (469) 295-9171.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARA JESSICA MORICE DE VARGAS/
Examiner, Art Unit 3681

/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681

Prosecution Timeline

Dec 06, 2024: Application Filed
Feb 13, 2025: Non-Final Rejection (§101, §112)
Jun 20, 2025: Response Filed
Jul 25, 2025: Final Rejection (§101, §112)
Nov 12, 2025: Applicant Interview (Telephonic)
Nov 12, 2025: Examiner Interview Summary
Dec 08, 2025: Request for Continued Examination
Dec 17, 2025: Response after Non-Final Action
Jan 05, 2026: Non-Final Rejection (§101, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12512226
EFFECTIVE IMAGING EXAMINATION HANDOFFS BETWEEN USERS WITHIN A RADIOLOGY OPERATIONS COMMAND CENTER (ROCC) STRUCTURE
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12367979
METHOD AND APPARATUS FOR DETERMINING DEMENTIA RISK FACTORS USING DEEP LEARNING
Granted Jul 22, 2025 (2y 5m to grant)
Based on the examiner's 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 8%
With Interview: 32% (+24.2%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
