Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,836

Neurological and Motor Function Screening Apparatus

Non-Final OA §101§102§103§112
Filed
Mar 04, 2024
Examiner
LANE, DANIEL E
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Reflexion Interactive Technologies Inc.
OA Round
3 (Non-Final)
4%
Grant Probability
At Risk
3-4
OA Rounds
3y 5m
To Grant
13%
With Interview

Examiner Intelligence

Grants only 4% of cases
4%
Career Allow Rate
12 granted / 290 resolved
-65.9% vs TC avg
Moderate +9% lift
Without
With
+8.7%
Interview Lift
resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
42 currently pending
Career history
332
Total Applications
across all art units

Statute-Specific Performance

§101
29.0%
-11.0% vs TC avg
§103
19.2%
-20.8% vs TC avg
§102
17.8%
-22.2% vs TC avg
§112
29.7%
-10.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 290 resolved cases

Office Action

§101 §102 §103 §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 December 2025 has been entered. This is a response to Applicant’s amendment filed on 22 December 2025, wherein: Claims 1, 10, 14, 21, 22, 25, and 28 are amended. Claims 4, 6-9, 12, 15-17, 24, 26, and 29 are canceled. Claims 2, 3, 5, 11, 13, 18-20, 23, and 27 are previously presented. Claim 30 is new. Claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 are pending. Priority Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994) The disclosure of the prior-filed applications, US 62/393,337, US 15/621,068, or US 17/160,095, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application. In particular, the disclosure of the prior-filed applications fail to provide sufficient written description for “the control unit is configured to administer a neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in claim 1, "including at least one additional sensor that records tactile, visual, or audio input generated by the user " in claim 10, "wherein the at least one sensor is a camera configured to record the user's eye and/or body movements" in claim 11, "including at least one external sensor environmental, physiological, and/or health data, wherein the environmental, physiological, and/or health data comprise one or more of body temperature, pulse, blood pressure, blood oxygen, ambient temperature, light, atmospheric pressure, lactic acid levels, and heart rate" in claim 13, “an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test;… a control unit for measuring and recording input response to the at least one stimulus; a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in claim 14, “wherein the input is used to determine whether the user has suffered a concussion” in claims 20 and 28, “wherein the interface comprises a virtual or augmented reality interface” in claims 21 and 25, “wherein the input sensor is configured to record the user's reaction time and/or positional data in response to the stimuli” in claim 23, “wherein the at least one sensor is… integral to the headset” in claim 27, and “an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test;… a control unit for measuring and recording input responsive to the at least one stimulus; and a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by[:] generating a sequence of stimuli on the display; measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user, (b) user memory; (c) peripheral awareness and (d) depth perception, wherein measuring peripheral awareness includes determining a location accuracy of a user's interaction with the stimuli displayed at least approximately 60 degrees to the left or right of a user's forward line of sight, and wherein measuring user memory includes determining a sequence accuracy of the responsive input in comparison to an order of the sequence” in claim 30 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. In particular, the specification of the prior-filed application, at best, merely recites similar language as the claims without providing any substantive description for the claimed limitations identified above for the same reasons that the instant specification also fails as identified in the rejections of the claims under 35 USC 112(a) below for the same claim limitations. Thus, claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 do not gain benefit of priority to US 62/393,337, US 15/621,068, or US 17/160,095. Therefore, claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 have an effective filing date of 20 August 2024. Claim Objections Claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 are objected to because of the following informalities: Claims 1, 14, and 30 are inconsistently formatted. The term “and” ends the second limitation in claims 1 and 30 and the third limitation in claims 14 and 30 instead of only the second-to-last limitation. Limitations also inconsistently end with either a semi-colon or comma. Uniformity is recommended. Claim 30 recites a semi-colon where a colon should be at the end of the limitation “wherein the control unit is configured to administer the neurocognitive test by”. Claims 2, 3, 5, 10, 11, 13, 18-23, 25, 27, and 28 inherit the deficiencies of their respective parent claims, and are thus objected to under the same rationale. Appropriate correction is required. Claim Rejections - 35 USC § 112 The text of those sections of Title 35, U.S. Code 112(b) not included in this action can be found in a prior Office action. Claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Regarding claims 1, 14, and 30, it is unclear which limitations are sub-limitations of previous limitations and which are their own limitations because all of the limitations are indented the same. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, all of the limitations following “wherein the control unit is configured to administer the neurocognitive test by” are construed as sub-limitations of this limitation. Dependent claims 2, 3, 5, 10, 11, 13, 18-23, 25, 27, and 28 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale. Regarding claims 1 and 14, it is unclear how an input sensor is “configured to accept input in response to the stimuli” in the system claimed to include a virtual or augmented reality interface. The disclosure is silent regarding an input sensor as well as regarding accepting input in response to the stimuli in this embodiment. In particular, para. 27 provides the only mention of an embodiment including a virtual or augmented reality interface with “[i]t is also contemplated that the display screen could be virtual, or augmented reality, such as by providing the user with a pair of virtual reality glasses or any suitable virtual reality headset and appropriate software/hardware to display stimuli and accept input in an analogous manner-in virtual space-to the use of the method and apparatus described and shown herein as occurring at least partially in physical space.” Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. Dependent claims 2, 3, 5, 10, 11, 13, 18-23, 25, 27, and 28 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale. Further regarding claims 1 and 14, it is unclear how the claimed measured reaction time is “complex”. Each claim recites “the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor”. The specification recites nearly identical language in para. 47. However, this is the general definition of a simple reaction time1. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, the claimed “complex reaction time” is construed as the same as “simple reaction time”, same the simple reaction time claimed in claim 30. It is further noted there is no meaningful difference between “a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” and “a simple reaction time to the sequence through responsive input from the user” in claim 30 further recited in para. 46 of the specification as “measuring the time it takes for a user to recognize a stimulus and interact with it.” Dependent claims 2, 3, 5, 10, 11, 13, 18-23, 25, 27, and 28 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale. Claim 19 recites the limitation "a computer internal to the headset" in lines 2-3 of the claim. There is insufficient antecedent basis for this limitation in the claim. In particular, there is insufficient antecedent basis for “the headset”, and thus insufficient antecedent basis for anything to be “internal to the headset”, let alone a computer. Claim 27 recites the limitation "the at least one sensor" in line 1 of the claim. There is insufficient antecedent basis for this limitation in the claim. It is noted that claim 10 was amended to recite “at least one additional sensor” which removes antecedent basis for “the at least one sensor”. Further regarding claim 27, is in unclear what constitutes at least one sensor being “integral to a headset”. The disclosure is silent regarding this feature. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. Regarding claim 30, the claim recites “measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user, (b) user memory; (c) peripheral awareness and (d) depth perception”. It is unclear what the variants are that are claimed to be measured due to the inconsistent grammatical marks and absence of an oxford comma. For instance, are the variants: 1. (a), (b), (c), and (d); 2. (a), (b), and [(c) and (d)]; 3. [(a) and (b)], (c), and (d); or 4. [(a) and (b)] and [(c) and (d)]? Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, the variants are construed as illustrated in (1) – (a), (b), (c), and (d) - such that the limitation recites “measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user, (b) user memory, (c) peripheral awareness, and (d) depth perception”. The text of those sections of Title 35, U.S. Code 112(a) not included in this action can be found in a prior Office action. Claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Regarding claims 1, 10, 11, 13, 14, and 30, the disclosure fails to provide sufficient written description for “the control unit is configured to administer a neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in claim 1, "including at least one additional sensor that records tactile, visual, or audio input generated by the user " in claim 10, "wherein the at least one sensor is a camera configured to record the user's eye and/or body movements" in claim 11, "including at least one external sensor environmental, physiological, and/or health data, wherein the environmental, physiological, and/or health data comprise one or more of body temperature, pulse, blood pressure, blood oxygen, ambient temperature, light, atmospheric pressure, lactic acid levels, and heart rate" in claim 13, “an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test;… a control unit for measuring and recording input response to the at least one stimulus; a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in claim 14, and “an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test;… a control unit for measuring and recording input responsive to the at least one stimulus; and a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by[:] generating a sequence of stimuli on the display; measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user, (b) user memory; (c) peripheral awareness and (d) depth perception, wherein measuring peripheral awareness includes determining a location accuracy of a user's interaction with the stimuli displayed at least approximately 60 degrees to the left or right of a user's forward line of sight, and wherein measuring user memory includes determining a sequence accuracy of the responsive input in comparison to an order of the sequence” in claim 30 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. The claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The specification, at best, merely recites similar language as the claim without providing the steps, calculations, or algorithms necessary for a computer to perform the claimed functionality. In particular, the disclosure merely mentions a non-descript "neurocognitive test" that is a "sequences of stimuli displayed on the user interface screen 6, where the reactions of the user to such sequences of stimuli (represented by a sinusoidal line in Fig. 1 A) are correlated with at least one of the neurocognitive data types being chosen from data types including, but not limited to, psychomotor response, complex reaction time, memory, balance, peripheral awareness, and/or any other desired neurocognitive data type." See para. 45 of the specification. Para. 46 merely recites that the neurocognitive test can "measure at least one of a peripheral awareness and/or depth perception of the user responsive to the presence of the user interface screen 6 in the user's peripheral vision." With particular respect to claims 10 and 13, the disclosure does not provide any meaningful description for recording "tactile, visual, or audio input generated by the user " or "collecting at least one environmental, physiological, and/or health data, wherein the environmental, physiological, and/or health data comprise one or more of body temperature, pulse, blood pressure, blood oxygen, ambient temperature, light, atmospheric pressure, lactic acid levels, and heart rate" beyond merely reciting that these forms of data are collected in results-based language. Furthermore, para. 48 of the specification merely recites that the neurocognitive data is analyzed "using algorithms and creates results responsive to that neurocognitive data". But, there is no disclosure of what these algorithms actually are. Dependent claims 2, 3, 5, 10, 11, 13, 18-23, 25, 27, and 28 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale. Regarding claims 20 and 28, the disclosure fails to provide sufficient written description “wherein the input is used to determine whether the user has suffered a concussion” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. The claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). In particular, the disclosure is silent regarding the steps, calculations, and algorithms necessary to perform the claimed functionality. The disclosure merely mentions a non-descript “neurocognitive test” that is a “sequences of stimuli displayed on the user interface screen 6, where the reactions of the user to such sequences of stimuli (represented by a sinusoidal line in Fig. 1A) are correlated with at least one of the neurocognitive data types being chosen from data types including, but not limited to, psychomotor response, complex reaction time, memory, balance, peripheral awareness, and/or any other desired neurocognitive data type.” See para. 45 of the specification. Para. 46 merely recites that the neurocognitive test can “measure at least one of a peripheral awareness and/or depth perception of the user responsive to the presence of the user interface screen 6 in the user’s peripheral vision.” However, this is a different embodiment from the claimed embodiment with a virtual or augmented reality interface. Furthermore, para. 48 of the specification merely recites that the neurocognitive data is analyzed “using algorithms and creates results responsive to that neurocognitive data”. But, there is no further disclosure of what these algorithms actually are. Regarding claims 21, 25, and 27, the disclosure fails to provide sufficient written description for “wherein the interface comprises a virtual or augmented reality interface” in claims 21 and 25, and “wherein the at least one sensor is… integral to the headset” in claim 27 which are claimed to further limit “an interface having a display configured to selectively display stimuli; an input sensor configured to accept input in response to the stimuli, wherein the input sensor is configured to accept tactile input;… the control unit is configured to administer a neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in independent claim 1 and “an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test; an input sensor configured to accept input in response to the stimulus, wherein the input sensor is configured to accept tactile input;… wherein the control unit is configured to administer the neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor” in independent claim 14 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. In particular, the disclosure is silent regarding how any sensor is incorporated into a system comprising a virtual or augmented reality interface that performs any of these claimed functions. (Italicized for emphasis). With particular respect to claim 27, the disclosure is particularly silent regarding “wherein the at least one sensor is… integral to the headset”. The closest disclosure is found in Fig. 5 which illustrates eyeglasses that have sensors 56 and 58 on them (not integral though), but the specification makes clear that this does not include the embodiment including a virtual or augmented reality interface and that sensors 56 and 58 are not configured to receive tactile input. See para. 41 of the specification which identifies 56 as a video camera that records what the use is looking at and 58 as an infrared camera that records pupil movements of the user. Furthermore, para. 41 of the specification recites that the apparatus includes at least one external sensor 52 as shown in Fig. 5. However, item 52 in Fig. 5 only designates a temple tip on eyeglasses, not any perceivable sensor. Additionally, the disclosure merely mentions a non-descript "neurocognitive test" that is a "sequences of stimuli displayed on the user interface screen 6, where the reactions of the user to such sequences of stimuli (represented by a sinusoidal line in Fig. 1 A) are correlated with at least one of the neurocognitive data types being chosen from data types including, but not limited to, psychomotor response, complex reaction time, memory, balance, peripheral awareness, and/or any other desired neurocognitive data type." See para. 45 of the specification. Para. 46 merely recites that the neurocognitive test can "measure at least one of a peripheral awareness and/or depth perception of the user responsive to the presence of the user interface screen 6 in the user's peripheral vision." However, this is a different embodiment from the claimed embodiment with a virtual or augmented reality interface. Furthermore, para. 48 of the specification merely recites that the neurocognitive data is analyzed "using algorithms and creates results responsive to that neurocognitive data". But, there is no further disclosure of what these algorithms actually are. Regarding claim 23, the disclosure fails to provide sufficient written description “wherein the input sensor is configured to record the user's reaction time and/or positional data in response to the stimuli” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. The claims lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). In particular, the disclosure is silent regarding the steps, calculations, or algorithms necessary to perform the claimed functionality. The disclosure merely mentions a non-descript “neurocognitive test” in results-based language that is a “sequences of stimuli displayed on the user interface screen 6, where the reactions of the user to such sequences of stimuli (represented by a sinusoidal line in Fig. 1A) are correlated with at least one of the neurocognitive data types being chosen from data types including, but not limited to, psychomotor response, complex reaction time, memory, balance, peripheral awareness, and/or any other desired neurocognitive data type.” See para. 45 of the specification. Para. 46 merely recites that “[s]imple reaction time can be measured using the user interface screen 6 by measuring the time it takes for a user to recognize a stimulus and interact with it” while para. 47 merely recites that “[c]omplex reaction time can be measured using the user interface screen 6 by the time it takes for a user to recognize a stimulus, decide whether to interact with the stimulus, and then interact with the stimulus” without any meaningful description of the steps, calculations, or algorithms necessary to perform these functions. Claim Rejections - 35 USC § 101 The text of those sections of Title 35, U.S. Code 101 not included in this action can be found in a prior Office action. Claims 1-3, 5, 10, 11, 13, 14, 18-23, 25, 27, 28, and 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself. Step 1 The instant claims are directed to products which fall under at least one of the four statutory categories (STEP 1: YES). Step 2A, Prong 2 Independent claim 1 recites: A system for collecting neurological data about a user, the system comprising: an interface having a display configured to selectively display stimuli; an input sensor configured to accept input in response to the stimuli, wherein the input sensor is configured to accept tactile input; and a control unit electronically connecting the system to a computer wherein the control unit is configured to administer a neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor. Independent claim 14 recites: A system for collecting neurological data about a user, the system comprising: an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test; an input sensor configured to accept input in response to the at least one stimulus, wherein the input sensor is configured to accept tactile input; control unit for measuring and recording input responsive to the at least one stimulus; and a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by: generating a sequence of stimuli on the display; and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor. Independent claim 30: A system for collecting neurological data about a user, the system comprising: an interface having a display configured to selectively display at least one stimulus wherein the at least one stimulus is part of a neurocognitive test; an input sensor configured to accept input in response to the at least one stimulus, wherein the input sensor is configured to accept tactile input or audible input; and a control unit for measuring and recording input responsive to the at least one stimulus; and a computer for receiving the input from the control unit and creating results responsive to the input, wherein the control unit is configured to administer the neurocognitive test by; generating a sequence of stimuli on the display; measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user, (b) user memory; (c) peripheral awareness and (d) depth perception, wherein measuring peripheral awareness includes determining a location accuracy of a user's interaction with the stimuli displayed at least approximately 60 degrees to the left or right of a user's forward line of sight, and wherein measuring user memory includes determining a sequence accuracy of the responsive input in comparison to an order of the sequence. All of the foregoing underlined elements identified above amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or interactions between people (including social activities, teaching, and following rules or instructions) by merely implementing a neurocognitive test between a tester and a testee. These elements are also interpreted as a series of steps that could reasonably be performed by mental processes with the aid of pen and paper because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observation, evaluation, judgment, opinion) but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process. Even if humans would use a physical aid to help them complete the recited steps, the use of such physical aid does not negate the mental nature of these limitations. Dependent claims 10, 11, 13, 20, 22, 23, and 28 amount to merely further defining the judicial exception with additional elements addressed in Prong 2 and Step 2B. Dependent claims 2, 3, 5, 18, 19, 21, 25, and 27 recite additional elements that are addressed in Prong 2 and Step 2B. Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES). Step 2A, Prong 2 This judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements. The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: a system comprising an interface having a display, an input sensor, and a control unit electronically connecting the system to a computer (claim 1); a system comprising an interface having a display, an input sensor, a control unit, and a computer (claims 14 and 30); the display comprises an organic light-emitting diode (OLED) display (claims 5 and 18) or an electronically controlled stimulus display (claim 2) including a plurality of light-emitting diodes (LEDs) (claim 3); at least one additional sensor (claim 10); the at least one sensor is a camera (claim 11); at least one sensor (claim 13); the computer is a desktop computer, a laptop computer, a tablet computer, a smartphone, a computer internal to the headset, or a hand-held computer device (claim 19); the interface comprises a virtual or augmented reality interface (claims 21 and 25); the interface includes a speaker (claim 22); and the at least one sensor is on or integral to a headset (claim 27). This is evidenced by the manner in which these elements are disclosed in the drawings and the instant specification. For example, the only element disclosed with any specificity is the embodiment of the portable user interface screen illustrated generically in Fig. 1A-4C and described in detail in para. 27-36 of the specification. However, the portable user interface screen is merely providing insignificant pre-solution data test presentation in the context of the claimed process. See, for example, at least para. 27 which recites that the "electronically controlled stimulus display 18 may use liquid crystal display (LCD), light emitting diode (LED), organic light emitting diode (OLEO), plasma display panel (PDP), or any other desired technology to create a two-dimensional video display that covers a substantial portion of the surface area of the front of the interface panel 2." The remaining elements are not illustrated in the drawings (excluding the sensors which are illustrated in Fig. 2 as a generic rectangle) and are merely described generically in the specification. See, for example, at least para. 26-28, 35, 37, 41, 43, 44, and 57 of the specification. This also evidences that the judicial exception is not implemented with, or used in, a particular machine or manufacture. Therefore, the claims do not recite any limitations that improve the functionality of the computer system. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. The claims are silent regarding any specific rules with specific characteristics that improve the functionality of the computer system. For instance, the interface and sensors, as claimed and organized, merely add insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution stimuli presentation and extra-solution data gathering in conjunction with a law of nature or abstract idea). It is noted that claim 30’s reciting that the stimuli is “displayed at least approximately 60 degrees to the left or right of a user's forward line of sight” is merely the identification of the conventional displaying of the stimuli in the peripheral vision range. See at least para. 30 and 46 of the specification. None of the hardware offer a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the drawings and specification as identified above. It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the additional elements does not affect this analysis. See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions including Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014). Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. For instance, while the claims identify that the claimed invention is for analyzing neurocognitive data, it is silent regarding any specific treatment or prophylaxis for any specific disease or medical condition. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (STEP 2A, PRONG 2: YES). Step 2B The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed systems and the process they perform do not require the use of a particular machine, nor do they result in the transformation of an article. Although the claims recite elements, identified above, for performing at least some of the recited functions, these elements are recited at a high level of generality in a conventional arrangement for performing their basic computer functions (i.e., receiving, processing, outputting data). This is evidenced by the manner in which these elements are disclosed in the instant specification. For example, the only element disclosed with any specificity is the embodiment of the portable user interface screen illustrated generically in Fig. 1A-4C and described in detail in para. 27-36 of the specification. However, the portable user interface screen is merely providing insignificant pre-solution data test presentation in the context of the claimed process. See, for example, at least para. 27 which recites that the "electronically controlled stimulus display 18 may use liquid crystal display (LCD), light emitting diode (LED), organic light emitting diode (OLEO), plasma display panel (PDP), or any other desired technology to create a two-dimensional video display that covers a substantial portion of the surface area of the front of the interface panel 2." The remaining elements are not illustrated in the drawings (excluding the sensors which are illustrated in Fig. 2 as a generic rectangle) and are merely described generically in the specification. See, for example, at least para. 26-28, 35, 37, 41, 43, 44, and 57 of the specification. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. For instance, the interface and sensors, as claimed and organized, merely add insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution stimuli presentation and extra-solution data gathering in conjunction with a law of nature or abstract idea). It is noted that claim 30’s reciting that the stimuli is “displayed at least approximately 60 degrees to the left or right of a user's forward line of sight” is merely the identification of the conventional displaying of the stimuli in the peripheral vision range. See at least para. 30 and 46 of the specification. This further identifies that none of the hardware offer a meaningful limitation beyond, at best, generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Viewed as a whole, these additional claim elements do not provide meaningful limitation to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea of itself (STEP 2B: NO). Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 2, 10, 13, 14, 19-23, 25, 28, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over French et al. (US 9,078,598, hereinafter referred to as French) and French et al. (US 7,359,121, hereinafter referred to as TRAZER). Regarding claim 1, French teaches a system for collecting neurological data about a user (French, Title, Cognitive Function Evaluation and Rehabilitation Methods and Systems; Col. 6, lines 51-52, “Use of the system 10 to evaluate cognitive or neurological function”), the system comprising: an interface having a display configured to selectively display stimuli (French, Col. 4, lines 34-36, “The TRAZER system may include a monitor or display, of any of various types, for providing information to a user of the system."); an input sensor configured to accept input in response to the stimuli (French, Col. 4, lines 37-38, "and gather data by tracking body movement in any of a variety of ways"); and a control unit electronically connecting the system to a computer (French, Col. 4, lines 27-36, “The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject), and a processor or computer operatively coupled to the tracking system for updating a user virtual locations in a virtual space, a physical locations of the user.”), wherein the control unit is configured to administer a neurocognitive test (French, Col. 6, lines 54-58, "The system 10 has been employed to evaluate/assess the athlete's global athletic performance capabilities which may be compromised in the concussed athlete, or with those who have otherwise suffered cognitive or neurological deficits.") by: generating a sequence of stimuli on the display (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 4, lines 36-37, "The system may prompt movement in any of a variety of ways, provide feedback in a display"); and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 8, lines 66-67, " observing reaction time (collecting data on reaction time)”). French does not explicitly teach wherein the input sensor is configured to accept tactile input. However, TRAZER teaches wherein the input sensor is configured to accept tactile input (TRAZER, Col. 9, lines 11-13, “The computer 22 may be coupled to a data inputting device 24. Such a device may be a… touch-sensitive video screen, or the like.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the TRAZER system in the system of French because the TRAZER system is incorporated by reference in its entirety in French (French, Col. 4, lines 24-27, “A system for prompting user movement, tracking response, is the TRAZER system. An example of such a system is described in U.S. Pat. No. 7,359,121, which is incorporated herein by reference in its entirety.”). Regarding claim 14, French teaches a system for collecting neurological data about a user (French, Title, Cognitive Function Evaluation and Rehabilitation Methods and Systems; Col. 6, lines 51-52, “Use of the system 10 to evaluate cognitive or neurological function”), the system comprising: an interface having a display configured to selectively display at least one stimulus (French, Col. 4, lines 34-36, “The TRAZER system may include a monitor or display, of any of various types, for providing information to a user of the system.") wherein the at least one stimulus is part of a neurocognitive test (French, Col. 6, lines 54-58, "The system 10 has been employed to evaluate/assess the athlete's global athletic performance capabilities which may be compromised in the concussed athlete, or with those who have otherwise suffered cognitive or neurological deficits."), an input sensor configured to accept input in response to the stimuli, wherein the input sensor is configured to accept tactile input (French, Col. 4, lines 37-38, "and gather data by tracking body movement in any of a variety of ways"); and a control unit for measuring and recording input responsive to the at least one stimulus (French, Col. 4, lines 27-36, “The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject), and a processor or computer operatively coupled to the tracking system for updating a user virtual locations in a virtual space, a physical locations of the user.”); and a computer for receiving the input from the control unit and creating results responsive to the input (French, Col. 4, lines 27-36, “The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject), and a processor or computer operatively coupled to the tracking system for updating a user virtual locations in a virtual space, a physical locations of the user.”), wherein the control unit is configured to administer the neurocognitive test (French, Col. 6, lines 54-58, "The system 10 has been employed to evaluate/assess the athlete's global athletic performance capabilities which may be compromised in the concussed athlete, or with those who have otherwise suffered cognitive or neurological deficits.") by: generating a sequence of stimuli on the display (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 4, lines 36-37, "The system may prompt movement in any of a variety of ways, provide feedback in a display"); and measuring a complex reaction time, wherein the complex reaction time comprises a duration required for the user to recognize a specific stimulus, decide whether to interact with the specific stimulus, and physically activate the input sensor (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 8, lines 66-67, " observing reaction time (collecting data on reaction time)”). French does not explicitly teach wherein the input sensor is configured to accept tactile input. However, TRAZER teaches wherein the input sensor is configured to accept tactile input (TRAZER, Col. 9, lines 11-13, “The computer 22 may be coupled to a data inputting device 24. Such a device may be a… touch-sensitive video screen, or the like.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the TRAZER system in the system of French because the TRAZER system is incorporated by reference in its entirety in French (French, Col. 4, lines 24-27, “A system for prompting user movement, tracking response, is the TRAZER system. An example of such a system is described in U.S. Pat. No. 7,359,121, which is incorporated herein by reference in its entirety.”). Regarding claim 30, French teaches a system for collecting neurological data about a user (French, Title, Cognitive Function Evaluation and Rehabilitation Methods and Systems; Col. 6, lines 51-52, “Use of the system 10 to evaluate cognitive or neurological function”), the system comprising: an interface having a display configured to selectively display at least one stimulus (French, Col. 4, lines 34-36, “The TRAZER system may include a monitor or display, of any of various types, for providing information to a user of the system.") wherein the at least one stimulus is part of a neurocognitive test (French, Col. 6, lines 54-58, "The system 10 has been employed to evaluate/assess the athlete's global athletic performance capabilities which may be compromised in the concussed athlete, or with those who have otherwise suffered cognitive or neurological deficits."), an input sensor configured to accept input in response to the stimuli, wherein the input sensor is configured to accept tactile input or audible input (French, Col. 4, lines 37-38, "and gather data by tracking body movement in any of a variety of ways"); and a control unit for measuring and recording input responsive to the at least one stimulus (French, Col. 4, lines 27-36, “The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject), and a processor or computer operatively coupled to the tracking system for updating a user virtual locations in a virtual space, a physical locations of the user.”); and a computer for receiving the input from the control unit and creating results responsive to the input (French, Col. 4, lines 27-36, “The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject), and a processor or computer operatively coupled to the tracking system for updating a user virtual locations in a virtual space, a physical locations of the user.”), wherein the control unit is configured to administer the neurocognitive test (French, Col. 6, lines 54-58, "The system 10 has been employed to evaluate/assess the athlete's global athletic performance capabilities which may be compromised in the concussed athlete, or with those who have otherwise suffered cognitive or neurological deficits.") by[:] generating a sequence of stimuli on the display (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 4, lines 36-37, "The system may prompt movement in any of a variety of ways, provide feedback in a display"); and measuring at least one of (a) a simple reaction time to the sequence through responsive input from the user (French, Col. 2, lines 46-48, "the prompting includes prompting the person to engage in a test of reaction time"; Col. 8, lines 66-67, " observing reaction time (collecting data on reaction time)”), (b) user memory; (c) peripheral awareness and (d) depth perception (French, Col. 8, lines 56-58, "aspects of depth perception, dynamic visual acuity, peripheral awareness and anticipation skills are assessed during realistic movement."), wherein measuring peripheral awareness includes determining a location accuracy of a user's interaction with the stimuli displayed at least approximately 60 degrees to the left or right of a user's forward line of sight (French, Col. 8, lines 56-58, "aspects of… peripheral awareness… are assessed during realistic movement." “At least approximately 60 degrees to the left or right of a user's forward line of sight” is merely the definition of peripheral vision. See at least para. 30 and 46 of the specification which make this explicit.), and wherein measuring user memory includes determining a sequence accuracy of the responsive input in comparison to an order of the sequence (French does not need to teach this limitation as it depends from an alternative limitation.). French does not explicitly teach wherein the input sensor is configured to accept tactile input or audible input. However, TRAZER teaches wherein the input sensor is configured to accept tactile input (TRAZER, Col. 9, lines 11-13, “The computer 22 may be coupled to a data inputting device 24. Such a device may be a… touch-sensitive video screen, or the like.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the TRAZER system in the system of French because the TRAZER system is incorporated by reference in its entirety in French (French, Col. 4, lines 24-27, “A system for prompting user movement, tracking response, is the TRAZER system. An example of such a system is described in U.S. Pat. No. 7,359,121, which is incorporated herein by reference in its entirety.”). Regarding claim 2, French in view of TRAZER teaches the system of claim 1, wherein the display comprises an electronically controlled stimulus display (French, Col. 4, lines 34-36, “The TRAZER system may include a monitor or display, of any of various types, for providing information to a user of the system."). Regarding claim 10, French in view of TRAZER teaches the system of claim 1, including at least one additional sensor that records tactile, visual, or audio input generated by the user (French, Col. 4, lines 45-46, "Movement of the person 12 is detected and tracked by a camera or other sensor 20". Col. 4, line 56 - Col. 5, line 28 discuss a heart rate monitor and other sensor means for collecting input data.). Regarding claim 13, French in view of TRAZER teaches the system of claim 1, including at least one external sensor for collecting environmental, physiological, and/or health data, wherein the environmental, physiological, and/or health data comprise one or more of body temperature, pulse, blood pressure, blood oxygen, ambient temperature, light, atmospheric pressure, lactic acid levels, and heart rate (French, Col. 4, lines 45-46, "Movement of the person 12 is detected and tracked by a camera or other sensor 20". Col. 4, line 56 - Col. 5, line 28 discuss a heart rate monitor and other sensor means for collecting input data.). Regarding claim 19, French in view of TRAZER teaches the system of claim 14. French does not explicitly teach wherein the computer comprises a desktop computer, a laptop computer, a tablet computer, a smartphone, a computer internal to the headset, or a hand-held computer device. However, TRAZER teaches wherein the computer comprises a desktop computer, a laptop computer, a tablet computer, a smartphone, a computer internal to the headset, or a hand-held computer device (TRAZER, Fig. 1 illustrates the computer as a desktop computer. Col. 11, lines 47-50, “An acceptable computer is a Compaq Pentium PC (Examiner notes this is a desktop computer). Other computers using a Pentium processor, a Pentium II processor, or other suitable processors would also be acceptable.” It is noted that, as of the effective filing date of the claimed invention, most, if not all, laptop computers, tablet computers, smartphones, computers internal to a headset, and hand-held computer devices used “suitable processors” as the processors at the effective filing date of the claimed invention had exponentially greater performance than a Pentium processor.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the TRAZER system in the system of French because the TRAZER system is incorporated by reference in its entirety in French (French, Col. 4, lines 24-27, “A system for prompting user movement, tracking response, is the TRAZER system. An example of such a system is described in U.S. Pat. No. 7,359,121, which is incorporated herein by reference in its entirety.”). Regarding claims 20 and 28, French in view of TRAZER teaches the system of claim 14, wherein the input is used to determine whether the user has suffered a concussion (French, Col. 2, lines 42-43, "the comparing data includes determining whether the person is suffering from concussion symptoms"). Regarding claims 21 and 25, French in view of TRAZER teaches the system of claim 1 and the system of claim 14, wherein the interface comprises a virtual or augmented reality interface (French implies this at least at Col. 6, lines 66-67, “The system 10 provides the interactive virtual environment”. A virtual environment requires a virtual reality interface.). Regarding claim 22, French in view of TRAZER teaches the system of claim 1. While TRAZER teaches outputting audible stimuli (TRAZER, Col. 40, lines 21-22, “The cues may include auditory and/or visual cues.” Col. 44, lines 44-46, “The heart indication… may include an auditory signal, such as the simulated sound of a beating heart.”) French in view of TRAZER does not explicitly teach wherein the interface includes a speaker configured to output audible stimuli. However, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention for the interface of French in view of TRAZER to include a speaker configured to output audible stimuli because TRAZER explicitly teaches outputting audible stimuli as identified above and a speaker is a common audible output device for outputting audible stimuli in a computerized system. Thus, it is merely choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success. Regarding claim 23, French in view of TRAZER teaches the system of claim 1, wherein the input sensor is configured to record the user's reaction time and/or positional data in response to the stimuli (French, Col. 4, lines 24-31, "A system for prompting user movement, tracking response, is the TRAZER system. An example of such a system is described in U.S. Pat. No. 7,359,121, which is incorporated herein by reference in its entirety. The TRAZER system is a physical activity system (a testing, training, recreational, and/or evaluation system) that includes a tracking system for determining changes in overall physical locations of a user (person or subject)”). Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over French et al. (US 9,078,598, hereinafter referred to as French) and French et al. (US 7,359,121, hereinafter referred to as TRAZER) as applied to claim 2, further in view of Donley (US 2011/0184498). Regarding claim 3, French in view of TRAZER teaches the system of claim 2, French in view of TRAZER does not explicitly teach wherein the display includes a plurality of light-emitting diodes (LEDs). However, in an analogous art, Donley teaches wherein the display includes a plurality of light-emitting diodes (LEDs) (Donley, Fig. 1, light 104; Fig. 9, light source 222; para. 66, “light source, such as a light emitting diode (LED)”. The light source 222 corresponds to light 104.) Claims 5, 11, 18, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over French et al. (US 9,078,598, hereinafter referred to as French) and French et al. (US 7,359,121, hereinafter referred to as TRAZER) as applied to claims 1 and 10, further in view of Haddick et al. (US 2013/0127980, hereinafter referred to as Haddick). Regarding claims 5 and 18, French in view of TRAZER teaches the system of claim 1 and the system of claim 14, French does not explicitly teach wherein the display comprises an organic light-emitting diode (OLED) display. However, in an analogous art, Haddick teaches wherein the display comprises an organic light-emitting diode (OLED) display (Haddick, para. 292, “the display may be… OLED”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed for the display in French to comprise an OLED display as disclosed by Haddick because “display technologies such as OLED… may allow for flexible displays”. Thus, it is merely a simple substitution of one known element for another to obtain predictable results. Regarding claim 11, French in view of TRAZER teaches the system of claim 10, wherein the at least one sensor is a camera configured to record the user's eye or body movements (French, Col. 4, lines 45-46, "Movement of the person 12 is detected and tracked by a camera or other sensor 20") French in view of TRAZER does not explicitly teach wherein the at least one sensor is a camera configured to record the user's eye and/or body movements However, in an analogous art, Haddick teaches wherein the at least one sensor is a camera configured to record the user's eye and/or body movements (Haddick, para. 665, “certain optical configurations described herein, such as the frontlight LCoS, enable insertion of a camera in many locations along the optical train to put the camera directly on axis with the eye. For example, a camera sensor may be placed adjacent to the LCoS, such as the camera 10232 in FIG. 102B. This in turn enables measurement of the location, diameter, velocity and direction of the pupil and imaging of the iris directly.” Para. 682, “the eyepiece may have a camera that views outward (e.g. forward, to the side, down) and interprets gestures or movements of the hand of the wearer as control signals… Although hand motions have been used in the preceding examples, any portion of the body or object held or worn by the wearer may also be utilized for gesture recognition by the eyepiece.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention for system of French in view of TRAZER to include the eye movement tracking camera in Haddick because “[t]his information may then be used to help correlate the user's line of sight with respect to the projected image, a camera view, the external environment, and the like, and used in control techniques as described herein.” See Haddick at para. 659. Regarding claim 27, French in view of TRAZER teaches the system of claim 10. French in view of TRAZER does not explicitly teach wherein the at least one sensor is on or integral to a headset. However, in an analogous art, Haddick teaches wherein the at least one sensor is on or integral to a headset (Haddick, para. 29, “facilities internal and external to the eyepiece, such as… sensing devices, user action capture devices,… camera, sensors, microphone, through a transceiver, through a tactile interface”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention for the system as taught by French in view of TRAZER to include wherein the at least one sensor is on or integral to a headset as taught by Haddick because Haddick provides a headset employed to display a view of the virtual reality space that includes additional head-worn sensing that are useful in the system of French in view of TRAZER (see Haddick at least at para. 1031-1033) and TRAZER teaches that “other display devices, such as… virtual reality goggles or headsets, may also be employed to display a view of the virtual reality space”. See TRAZER at Col. 11, lines 38-42. Response to Arguments Applicant's arguments with respect to the denial of benefit of priority have been fully considered but they are not persuasive. In pg. 8, Applicant asserts that the disclosure of the present application is sufficient for claiming priority under 35 USC 120 and addresses whether such explicit disclosure is sufficient for meeting the requirements of 35 USC 112 when addressing rejections under that section. Examiner respectfully disagrees. The disclosure is insufficient for claiming priority under 35 USC 120 as it fails written description requirement under 35 USC 112(a) for the same reasons that the instant specification also fails as identified in the rejections of the claims under 35 USC 112(a) for the same claim limitations. Applicant is directed to the response to Applicant’s arguments with respect to the rejections of the claims under 35 USC 112(a) which identify that Applicant’s arguments are not persuasive. Applicant's arguments with respect to the objections to the specification have been fully considered. The amendments obviate the objections. Thus, these objections are withdrawn. Applicant is reminded that the lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification. Applicant's arguments with respect to the objections to the claims have been fully considered but they are not persuasive. In pg. 8, Applicant asserts that the objected claims have been amended to overcome the objections. Examiner respectfully disagrees. While the claims been amended to obviate the associated objections, the objections have been updated to address the amendments to the claims. Applicant's arguments with respect to the rejections of the claims under 35 USC 112(b) have been fully considered but are moot in light of the amendments to the claims. Applicant is directed to the rejections have been updated to address the amendments to the claims. Applicant's arguments with respect to the rejections of the claims under 35 USC 112(a) have been fully considered but they are not persuasive. In pg. 9-13, Applicant asserts that a recitation from para. 27 of the specification provides support for using a virtual or augmented reality display and that VR headsets, such as Facebook Oculus Rift, generally use a variety of sensors, and thus one skilled in the art would have been able to provide an input sensor configured to accept input in response to the stimuli as part of a system having a virtual reality or augmented reality interface having a display configured to selectively display stimuli. Examiner is not persuaded. Para. 27 is explicitly identified as insufficient as it merely recites the alternative use of virtual reality (VR) or augmented reality (AR) without any description of the elements of a VR or AR system. It is also noted that, even now let alone in 2016, VR headsets do not necessarily include any sensors. Applicant acknowledges this with asserting the VR headsets “generally use” a variety of sensors. Even though some headsets that do exist as exampled in the arguments (Oculus Rift and HTC Vive), no such headsets are identified in the disclosure. Thus, any assertion that the claimed VR headset is an Oculus Rift or HTC Vive would amount to new matter. Applicant’s arguments with respect to the rejections of the claims under 35 USC 102 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The rejections stand. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LANE whose telephone number is (303)297-4311. The examiner can normally be reached Monday - Friday 8:00 - 4:30 MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DANIEL LANE/Examiner, Art Unit 3715 1 Zajdel, R., & Nowak, D. (2007). Simple and complex reaction time measurement. Computers in Biology and Medicine, 37(12), 1724–1730. https://doi.org/10.1016/j.compbiomed.2007.04.008
Read full office action

Prosecution Timeline

Mar 04, 2024
Application Filed
Aug 20, 2024
Response after Non-Final Action
Dec 13, 2024
Non-Final Rejection — §101, §102, §103
Apr 23, 2025
Response Filed
Jun 17, 2025
Final Rejection — §101, §102, §103
Dec 22, 2025
Request for Continued Examination
Dec 30, 2025
Response after Non-Final Action
Jan 03, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 11810474
SYSTEMS AND METHODS FOR NEURAL PATHWAYS CREATION/REINFORCEMENT BY NEURAL DETECTION WITH VIRTUAL FEEDBACK
2y 5m to grant Granted Nov 07, 2023
Patent 11398160
SYSTEM, APPARATUS, AND METHOD FOR EDUCATING AND REDUCING STRESS FOR PATIENTS WITH ILLNESS OR TRAUMA USING AN INTERACTIVE LOCATION-AWARE TOY AND A DISTRIBUTED SENSOR NETWORK
2y 5m to grant Granted Jul 26, 2022
Patent 11250723
VISUOSPATIAL DISORDERS DETECTION IN DEMENTIA USING A COMPUTER-GENERATED ENVIRONMENT BASED ON VOTING APPROACH OF MACHINE LEARNING ALGORITHMS
2y 5m to grant Granted Feb 15, 2022
Patent 11210961
SYSTEMS AND METHODS FOR NEURAL PATHWAYS CREATION/REINFORCEMENT BY NEURAL DETECTION WITH VIRTUAL FEEDBACK
2y 5m to grant Granted Dec 28, 2021
Patent 11004551
SLEEP IMPROVEMENT SYSTEM, AND SLEEP IMPROVEMENT METHOD USING SAID SYSTEM
2y 5m to grant Granted May 11, 2021
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
4%
Grant Probability
13%
With Interview (+8.7%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month