DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 11 August 2025 have been entered. Claims 1-7, 9-12, and 14-15 are pending. Applicant’s amendments have overcome each and every rejection under 35 U.S.C. 112 previously applied in the Office action dated 12 March 2025.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 9-12, and 14-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding Claim 1, the claim recites "an act or step, or series of acts or steps" to detect neurological impairment of a user using examination of sensorimotor control, and is therefore a process, which is a statutory category of invention (Step 1). The claims are then analyzed to determine whether they are directed to any judicial exception (Step 2A, Prong 1).
Each of Claims 1-7, 9-12, and 14-15 has been analyzed to determine whether it is directed to any judicial exception.
Step 2A, Prong 1
Each of Claims 1-7, 9-12, and 14-15 recites at least one step or instruction involving observations, evaluations, judgments, and opinions, which are concepts that can be performed in the human mind and are grouped as a mental process under the 2019 PEG. Additionally, the claimed invention involves managing interactions between people, namely, humans following rules, which is one of the certain methods of organizing human activity under the 2019 PEG.
Accordingly, each of Claims 1-7, 9-12, and 14-15 recites an abstract idea.
Specifically, Claims 1-7, 9-12, and 14-15 recite the following (underlined portions are observations, evaluations, judgments, or opinions, which are grouped as a mental process and certain methods of organizing human activity under the 2019 PEG; additional elements are bolded, see Step 2A, Prong 2):
Claim 1:
A method to detect neurological impairment of a user using examination of sensorimotor control, the method comprising:
positioning a head-mounted display on the user's head and a hand tracking sensor in a hand of the user, the head-mounted display placing the user in a virtual or augmented reality environment and including an eye tracking system and a head movement sensor, the hand tracking sensor for tracking position and movement of the hand of the user relative to other objects displayed in the virtual or augmented reality environment;
presenting, on the head-mounted display by a processor executing software stored on a hardware storage device, the user with a software-generated object in the virtual or augmented reality environment;
providing, on the head-mounted display by the processor executing the software, instructions directing the user to execute one or more sensorimotor activities relating to the software-generated object within the virtual or augmented reality environment;
recording, on the hardware storage device by the processor executing the software, data at between 45 and 500 measurements per second from the hand tracking sensor, the eye tracking system, and the head movement sensor during execution of the one or more sensorimotor activities, the data indicating eye, head, and hand movement of the user during the one or more sensorimotor activities;
generating, by the processor executing the software using machine learning artificial intelligence computation, a user sensorimotor index from the data, wherein the sensorimotor index is a score based on the eye, head, and hand movement of the user during the one or more sensorimotor activities; and
determining, by the processor executing the software, neurological impairment of the user by comparing the sensorimotor index with an expected value.
(observation, judgment or evaluation, which is grouped as a mental process and certain methods of organizing human activity under the 2019 PEG);
These underlined limitations describe a mathematical calculation and/or a mental process, as a skilled practitioner is capable of performing the recited limitations and making a mental assessment thereafter. Examiner notes that nothing in the claims suggests that these limitations cannot be practically performed by a human with the aid of pen and paper, or by using a generic computer as a tool to perform the mathematical calculations and/or mental process steps in real time; nor do the claims suggest an undue level of complexity that would prevent such performance. For example, in Independent Claim 1, these limitations include (see the illustrative sketch following this list):
providing instructions on the head-mounted display by the processor executing the software directing the user with rules to execute one or more sensorimotor activities relating to the software-generated object within the virtual or augmented reality environment;
(involves managing interactions between people, namely, humans following rules, which is grouped as a certain method of organizing activity under 2019 PEG and/or a judgement or evaluation, which is grouped as a mental process under 2019 PEG)
observation and judgment to evaluate, by the processor executing the software using machine learning artificial intelligence computation (or human brain neural network), a user sensorimotor index from the data, wherein the sensorimotor index is a score based on the eye, head, and hand movement of the user during the one or more sensorimotor activities;
observation and judgment to determine neurological impairment of the user by observation and judgment to compare the sensorimotor index with an expected value.
(observation, judgment or evaluation, which is grouped as a mental process under the 2019 PEG);
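By way of illustration only, and without importing any detail from Applicant’s disclosure, the recited index generation and comparison reduce to generic weighted-sum arithmetic and a threshold check of the kind performable by a general-purpose computer used as a tool (or by a practitioner with pen and paper). All function names, weights, and thresholds in the following sketch are hypothetical:

```python
# Illustrative sketch only: hypothetical names, weights, and thresholds.
# Shows that generating a "sensorimotor index" as a score over eye, head,
# and hand movement, and comparing it with an expected value, is generic
# arithmetic executable on any general-purpose computer.

def sensorimotor_index(eye_error, head_error, hand_error,
                       weights=(0.4, 0.3, 0.3)):
    """Combine per-modality tracking errors into one weighted score."""
    w_eye, w_head, w_hand = weights
    return w_eye * eye_error + w_head * head_error + w_hand * hand_error

def deviates_from_expected(index, expected_value, tolerance=0.15):
    """Flag the index when it deviates from the expected value."""
    return abs(index - expected_value) > tolerance * expected_value

index = sensorimotor_index(eye_error=0.12, head_error=0.08, hand_error=0.20)
print(deviates_from_expected(index, expected_value=0.10))
```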
Similarly, Dependent Claims 2-7, 9-12, and 14-15 include the following abstract limitations, in addition to the aforementioned limitations in Independent Claim 1 (underlined: observation, judgment or evaluation, which is grouped as a mental process under the 2019 PEG):
assigning a test grade based upon the sensorimotor index;
observation and judgment of a test grade based upon the sensorimotor index;
determining if the user has a neurological impairment based upon the test grade.
observation and judgment of whether the user has a neurological impairment based upon the test grade.
comparing the user sensorimotor index to at least one other sensorimotor index;
observation and judgment to compare the user sensorimotor index to at least one other sensorimotor index
determining if the user has a neurological impairment based upon the comparison with the at least one other sensorimotor index.
observation and judgment to determine whether the user has a neurological impairment based upon the comparison with the at least one other sensorimotor index.
at least one other sensorimotor index is generated using statistical computation or machine learning artificial intelligence computation.
observation and judgment of at least one other sensorimotor index generated using statistical computation or machine learning artificial intelligence evaluation
the machine learning artificial intelligence makes a determination about the type of impairment for each sensorimotor activity or a combination of one or more of the sensorimotor activities when completed together
observation and judgment with machine learning artificial intelligence (or human brain neural network) to make a determination about the type of impairment for each sensorimotor activity or a combination of one or more of the sensorimotor activities when completed together;
As claimed, the aforementioned limitations are mental processes or mathematical algorithms under the 2019 PEG that would be performed by a biomedical or engineering professional using their education, background, and experience. Accordingly, as indicated above, each of the above-identified claims recites an abstract idea.
Step 2A, Prong 2
The above-identified abstract ideas in Independent Claim 1 (and its dependent claims) are not integrated into a practical application under the 2019 PEG because the additional elements (identified above in Claims 1-7, 9-12, and 14-15), either alone or in combination, merely generally link the use of the above-identified abstract ideas to a particular technological environment or field of use. More specifically, the additional elements include:
head-mounted display
hand tracking sensor
eye tracking system
head movement sensor
processor
software
hardware storage device
screen for each eye
machine learning artificial intelligence
accelerometer sensor
gyroscope sensor
magnetometer
Additional elements recited include a head-mounted display to place the user in a virtual or augmented environment, one or more sensors to record data, one or more processors, and one or more hardware storage devices. Each of these components is recited at a high level of generality. These generic hardware component limitations for the “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer” are no more than mere instructions to apply the exception using generic computer components. As such, these additional elements do not impose any meaningful limits on practicing the abstract idea.
Further additional elements from independent claim 1 include pre-solution activity limitations, such as:
positioning a head-mounted display on the user's head and a hand tracking sensor in a hand of the user, the head-mounted display placing the user in a virtual or augmented reality environment and including an eye tracking system and a head movement sensor, the hand tracking sensor for tracking position and movement of the hand of the user relative to other objects displayed in the virtual or augmented reality environment;
presenting, on the head-mounted display by a processor executing software stored on a hardware storage device, the user with a software-generated object in the virtual or augmented reality environment;
recording, on the hardware storage device by the processor executing the software, data at between 45 and 500 measurements per second from the hand tracking sensor, the eye tracking system, and the head movement sensor during execution of the one or more sensorimotor activities, the data indicating eye, head, and hand movement of the user during the one or more sensorimotor activities;
In addition to the aforementioned extra-solution activity limitations in Independent Claim 1, additional extra-solution activity limitations recited in the Dependent Claims include:
wherein the sensorimotor activities are selected from the group consisting of: smooth pursuit; convergence eye movement; saccadic eye movement; peripheral visual acuity; object discrimination; gaze stability; head-eye coordination; cervical neuromotor control; and combinations thereof.
wherein the instructions are provided to the user as audio or visual instructions.
wherein the data includes one or more of object data, response data, or symptom data.
wherein the placing of the user in a virtual or augmented reality environment includes displaying a three-dimensional environment to the user through the head-mounted display.
wherein the head-mounted display includes a screen for each eye.
from the group consisting of a previous sensorimotor index generated from the user's prior data, a designated population without a known neurological impairment, and a designated population with a known neurological impairment.
wherein each of the hand tracking sensor and the head movement sensor comprise an accelerometer sensor, a gyroscope sensor, a magnetometer, or combination thereof.
wherein the user is neurologically impaired due to trauma, vascular aging, or other physiological processes.
These pre-solution elements are insignificant extra-solution activity: they merely set up the parameters of the system and serve as data gathering for the subsequent steps.
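For illustration only, such data gathering is no more than a generic fixed-rate polling loop; the following minimal sketch uses hypothetical sensor stubs in place of any actual device API:

```python
# Illustrative sketch only: the sensor-reading callables are hypothetical
# stand-ins, not any actual headset or controller API. Shows generic
# fixed-rate polling of eye, head, and hand sensors into storage.
import time

def record_session(read_eye, read_head, read_hand, rate_hz=90, duration_s=1.0):
    """Poll three sensors at rate_hz (within the claimed 45-500
    measurements per second) and accumulate timestamped samples."""
    samples = []
    period = 1.0 / rate_hz
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        samples.append({
            "t": time.monotonic(),
            "eye": read_eye(),    # e.g., gaze direction
            "head": read_head(),  # e.g., orientation from an IMU
            "hand": read_hand(),  # e.g., controller pose
        })
        time.sleep(period)
    return samples

# Stub sensors standing in for real hardware:
data = record_session(lambda: 0.0, lambda: 0.0, lambda: 0.0,
                      rate_hz=90, duration_s=0.1)
print(len(data), "samples recorded")
```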
The “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer” as recited in independent Claim 1 and its dependent claims are generically recited computer and hardware elements which do not improve the functioning of a computer or any other technology or technical field. Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine, effect a transformation, or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. For at least these reasons, the abstract ideas identified above in independent Claim 1 (and its dependent claims) are not integrated into a practical application under the 2019 PEG.
Moreover, the above-identified abstract idea is not integrated into a practical application under the 2019 PEG because the claimed method merely implements the above-identified abstract idea (e.g., a mental process and a certain method of organizing human activity) using rules (e.g., computer instructions) executed by a computer processor as claimed. In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. Additionally, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement over the prior art, or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. That is, as in Affinity Labs of Tex. v. DirecTV, LLC, the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution. Thus, for these additional reasons, the abstract idea identified above in independent Claim 1 (and its dependent claims) is not integrated into a practical application under the 2019 PEG.
Accordingly, independent Claim 1 (and its dependent claims) are each directed to an abstract idea under the 2019 PEG.
Step 2B
None of Claims 1-7, 9-12, and 14-15 includes additional elements that are sufficient to amount to significantly more than the abstract idea, for at least the following reasons.
These claims require the additional elements of: “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer” as recited in independent Claim 1 and its dependent claims.
The additional elements of the “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer” in Claims 1-7, 9-12, and 14-15, as discussed with respect to Step 2A, Prong Two, amount to no more than mere instructions to apply the exception using generic computer and hardware components. The same analysis applies here at Step 2B: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
The above-identified additional elements are generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks. The courts have recognized such computer functions as well-understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Per Applicant’s specification, the “head-mounted display” is described generically in [0054]: “also includes a head-mounted display (HMD) 108, which is also referred to interchangeably herein as a head-mounted extended reality device, configured to display a virtual immersive environment to the user when worn by the user. The term HMD should be understood to be synonymous with similar terms referring to similar display devices such as ‘headset,’ ‘VR device,’ ‘VR display,’ ‘AR device,’ ‘AR display,’ and the like. In some embodiments, the HMD covers substantially all of the user's visual field and may permit simultaneous visual input and interaction with the physical environment and the virtual environment”. The “head-mounted display” 108 is shown as a generic black-box component in Figure 2.
Per Applicant’s specification, the “hand tracking sensor” is described generically in [0055]: “…accelerometers (e.g. to detect postural sway or reaction movement of the head and/or hand)” and in [0060]: “Optionally, one or more hand controllers with sensors measuring hand movements can be included with the HMD 108”.
Per Applicant’s specification, the “eye tracking system” is described as an off-the-shelf component in [0056]: “…an eye tracking system 112; commercially available system by HTC with the trade name Vive Pro Eye that contains a Tobii eye tracker”. The “eye tracking system” is shown in Figure 1 as black-box “Eye Tracking System” 112.
Per Applicant’s specification, the “head movement sensor” is described generically in [0055]: “…accelerometers (e.g. to detect postural sway or reaction movement of the head and/or hand)” and in [0075]: “head tracker”; and the “gyroscope head sensor” is described generically in [0055]: “gyroscopes (e.g. to measure the position of the head and/or hand)”.
Per Applicant’s specification, the “processor” is described generically in [0054]: “includes one or more processors 104”; “Suitable processors 104 include, but are not limited to, microprocessors, video processors, application specific integrated circuits (ASICs), and systems on a chip (SOACs).” The “processor” is shown in Figure 1 as black-box “Processors” 104 and 134.
Per Applicant’s specification, the “software” is described generically in [0059]: “Operating the user system 102 (e.g., by running appropriate software)”, describing creation of a generic “virtual immersive environment”; in [0076]: “voice recognition software”; and in [0077]: “the computer software program is executed by a computer processor on the HMD, at a remote computer device, or both.”
Per Applicant’s specification, the “hardware storage device” is described generically in [0054]: “…a memory 106 (e.g. in the form of one or more hardware storage devices)”; “Suitable types of memory 106 include RAM, ROM, DRAM, SRAM, and MRAM, which may be stored on a hardware storage device such as disk media, electronic, or other like bulk, long-term storage or high-capacity storage medium.” There is nothing particular to the structure of the “hardware storage device” that deems it more than well-understood, routine, or conventional. The “hardware storage device” is shown generically as both “memory” 136 and 106 black-box rectangles in Figure 1.
Per Applicant’s specification, the “screen” is described generically in [0060] as part of a “virtual reality (VR) headset, with one display screen per eye”. The images displayed on the screen during sensorimotor activities are described in detail, but the hardware associated with the screen itself is a generic VR headset device.
Per Applicant’s specification, the “machine learning artificial intelligence” is described generically in [0097] as performed by “neural networks, gradient boosting, or support vector machines”.
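For illustration of this generic character only, computation of the kind described at [0097] can be performed by an off-the-shelf library model invoked in its ordinary manner; the feature layout and training data in the following sketch are hypothetical and are not drawn from Applicant’s specification:

```python
# Illustrative sketch only: hypothetical feature layout and synthetic data.
# Shows that "gradient boosting" as named in [0097] is available as a
# conventional, off-the-shelf library component used in its ordinary way.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 3))           # columns: eye, head, hand movement features
y = X @ np.array([0.4, 0.3, 0.3])  # synthetic "index" labels for the sketch

model = GradientBoostingRegressor().fit(X, y)  # conventional library call
index = model.predict(rng.random((1, 3)))[0]   # generic score output
print(round(index, 3))
```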
Per Applicant’s specification, the “sensors” are described generically in [0055]: “Suitable sensors include, but are not limited to, sensors for measuring the position and/or movement of the head, hand, and/or other body parts” and are listed as “accelerometers”, “gyroscopes”, and “magnetometers”. For the named “sensors” in Applicant’s specification, the “magnetometer” is described generically in [0055]: “…and magnetometers (e.g. to measure physical orientation of the user)”; the “accelerometer sensor” is described generically in [0055]: “…accelerometers (e.g. to detect postural sway or reaction movement of the head and/or hand)” and in [0075]: “head tracker”; and the “gyroscope sensor” is described generically in [0055]: “gyroscopes (e.g. to measure the position of the head and/or hand)”. There is nothing particular to the described structure of the “magnetometer”, “accelerometer sensor”, or “gyroscope sensor” that deems them more than well-understood, routine, or conventional. In combination, the claimed terms associated with “sensors” consist of well-understood, routine, or conventional components used in a well-understood manner to perform a routine function of measuring body positioning with a VR headset device.
Accordingly, in light of Applicant’s specification, the claimed terms “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer” are reasonably construed as generic computing devices or hardware. As in SAP America, Inc. v. InvestPic, LLC (Fed. Cir. 2018), it is clear from the claims themselves and the specification that these limitations require no improved computer resources, just already available computers, with their already available basic functions, used as tools in executing the claimed process.
Furthermore, Applicant’s specification does not describe any special programming or algorithms required for the “head-mounted display”, “hand tracking sensor”, “eye tracking system”, “head movement sensor”, “processor”, “software”, “hardware storage device”, “screen for each eye”, “machine learning artificial intelligence”, “accelerometer sensor”, “gyroscope sensor”, and “magnetometer”. This lack of disclosure is acceptable under 35 U.S.C. §112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant's specification essentially admits that this hardware is conventional and performs well understood, routine and conventional activities in the computer industry or arts. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see Berkheimer memo from April 19, 2018, (III)(A)(1) on page 3). Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (TLI Communications).
The recitation of the above-identified additional limitations in Claims 1-7, 9-12, and 14-15 amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); and TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.
A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
For at least the above reasons, the method of Claims 1-7, 9-12, and 14-15 is directed to applying an abstract idea as identified above on a general-purpose computer without (i) improving the performance of the computer itself, or (ii) providing a technical solution to a problem in a technical field. None of Claims 1-7, 9-12, and 14-15 provides meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself.
Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements for Step 2A, Prong 2 in independent Claim 1 (and its dependent claims) do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment. That is, neither the generic computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity. When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine, and conventional activity specified at a high level of generality in a particular technological environment. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Thus, Claims 1-7, 9-12, and 14-15 merely apply an abstract idea to a computer and do not (i) improve the performance of the computer itself (as in Bascom and Enfish), or (ii) provide a technical solution to a problem in a technical field (as in DDR).
Therefore, none of Claims 1-7, 9-12, and 14-15 amounts to significantly more than the abstract idea itself. Accordingly, Claims 1-7, 9-12, and 14-15 are not patent eligible and are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-7, 9-12, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zidan et al. (US 2020/0397288 A1), hereinafter Zidan, as evidenced by Jost of Baylor University, “Quantitative Evaluation of the HTC Vive Virtual Reality System in Controlled Movement”, hereinafter Baylor University, in view of Blaha et al. (US 9,706,910 B1), hereinafter Blaha.
Regarding Claim 1, Zidan discloses A method to detect neurological impairment of a user using examination of sensorimotor control (Title: “Medical system and method operable to control sensor-based wearable devices for examining eyes”; “medical assembly” 110; [0207] “eye tests”, Table 1; [0085] “term ‘abnormality’…encompasses…an abnormality, dysfunction, pathology, or impairment in any part, activity or condition of any anatomical part or system of the body”; [0105] “non-limiting list of symptoms related to or applicable to the use…medical assembly 110 or examination output 127”; [0111] “diplopia…neurologic disease…”), the method comprising:
positioning a head-mounted display (Fig 32, “3D medical headset”) on the user's head ([0256] “…the subject 112 mounts the 3D medical headset 284 to the head 160 of the subject 112.”) and a hand tracking sensor in a hand of the user ([0203] “Vive™ product includes a plurality of handheld or hand-holdable devices that enable users to provide inputs to such Vive™ product…”), the head-mounted display (Fig 32, “3D medical headset”) placing the user in a virtual or augmented reality environment ([0194] “The 3D medical headset 284 incorporates Virtual Reality hardware and software (VR), Augmented Reality hardware and software…”; “the 3D medical headset 284 is configured and operable to generate a 3D visual effect 286 that provides a 3D viewing experience to users”), and including an eye tracking system ([0193] “…and other electronic or electromechanical devices operable to sense, track, monitor, record, or detect movement of or changes in the eye 254”), and a head movement sensor ([0193] “…accelerometers”; “and other electronic or electromechanical devices operable to sense, track, monitor, record, or detect movement of or changes in…the head 160”), the hand tracking sensor ([0203] “Vive™ … handheld or hand-holdable devices that enable users to provide inputs to such Vive™ product…”) for tracking position and movement of the hand of the user relative to other objects displayed in the virtual or augmented reality environment ([0219] “dissociating test…The graphical elements…differently colored, such as a red dot for the right eye 161 and a white dot for the left eye 163. The medical assembly 110…receive repositioning inputs from an accessory 255 (e.g., a handheld controller) operated by the subject 112. In this way, the subject 112 can select and drag, pull or slide one or each of the graphical elements 354, 356…subject 112 can cause (or attempt to cause) the graphical elements 354, 356 to overlap”) (Examiner notes that both position and movement of the hand are necessary to properly select and move the displayed graphical elements).
presenting, on the head-mounted display by a processor executing software stored on a hardware storage device, the user with a software-generated object in the virtual or augmented reality environment ([0260] “…the 3D medical headset 284 displays various types of voluntary prompting graphics 357, involuntary stimulating graphics 359 and vision blocking graphics 361…The particular type of graphic that is presented depends on the type of eye test being conducted”; [0009] “…one or more data storage devices storing a plurality of computer-readable instructions….instructions…executable by one or more processors…operatively coupled to a wearable device.”);
providing, on the head-mounted display by the processor executing the software ([0009]), instructions ([0257] “…the 3D medical headset 284 is activated to begin generating visual or audible instructions to the eyes of the subject 112”) directing the user to execute one or more sensorimotor activities ([0207] “eye tests”, Table 1; as an example, for the pursuit test, [0258] “Follow the dot with your eyes.”) relating to the software-generated object ([0258] “the dot” as an example; [0260] “…the particular type of graphic that is presented depends on the type of eye test being conducted”) within the virtual or augmented reality environment (Fig 34, [0046] “Fig 34 is a rear view of the 3D medical headset of Fig 32, illustrating the first frame of an example of a pursuit graphic”; Fig 33, [0045] “Fig 33 is a rear view of the 3D medical headset of Fig 32, illustrating an example of a 3D visual effect generated by the 3D medical headset”);
recording, on the hardware storage device by the processor executing the software ([0009] “…one or more data storage devices storing a plurality of computer-readable instructions….instructions…executable by one or more processors…operatively coupled to a wearable device.”),
data at between 45 and 500 measurements per second from the hand tracking sensor ([0203] “Vive™ virtual reality…plurality of handheld or hand-holdable devices that enable users to provide inputs…”) (Examiner notes that regarding the frame rate of data collection, Zidan discloses that the Vive system can be used for the tracking functions. Jost at Baylor University performed a study regarding the frame rate capabilities of the Vive controllers ([Page 16, 1st Full Paragraph] “The purpose of this study was to evaluate the HTC Vive controllers for use in clinic, industry, and research.”). As evidenced by Baylor University, the Vive system has a data acquisition frame rate of 230 Hz – 250 Hz (where frame rate in Hz is the number of measurements per second): [Page 57, Paragraph 2] “Vive acquisition code…Some tests were performed to determine the sampling rate, and it was found to be around 230 Hz.”; [Page 21, Paragraph 2] “The system was set to collect at 250 Hz, which was the theorized sampling rate of the Vive output program.”), the eye tracking system ([0203] “Vive™ eye-tracker takes snapshots of the eyes…with high frequency to capture the eye movements…rate over sixty hertz”) (Examiner notes that “hertz” is a unit describing frames per second, or measurements per second; see the arithmetic sketch following this claim mapping.), and the head movement sensor ([0203] “Vive™ virtual reality headset products made by the HTC Corporation”; [0220] “inner-ear and vestibulo-ocular function test…prompts the subject 112 to move the subject’s head 160…head movements can include yaw rotation 278, pitch rotation 280, roll rotation 282…”; [0009]) (Examiner notes that, as evidenced by Baylor University [Page 4, Bottom], “The base system comes with an HMD with a resolution of 1080x1200 pixels per eye[7] and has a refresh rate of 90 Hz.”),
during execution of the one or more sensorimotor activities ([0233] “…dissociating test”; [0009] “…sense a plurality of head positions of the head relative to the environment…any head movement that occurs during the ophthalmological examination.”), the data indicating eye ([0233] “During the dissociating test, the medical assembly 110 is operable to receive or generate sensed parameters 366, including the movement and variable positions of the Eyelids”), head ([0220] “inner-ear and vestibulo-ocular function test…prompts the subject 112 to move the subject’s head 160…head movements can include yaw rotation 278, pitch rotation 280, roll rotation 282…”), and hand movement of the user ([0219] “dissociating test…The graphical elements…differently colored, such as a red dot for the right eye 161 and a white dot for the left eye 163. The medical assembly 110…receive repositioning inputs from an accessory 255 (e.g., a handheld controller) operated by the subject 112. In this way, the subject 112 can select and drag, pull or slide one or each of the graphical elements 354, 356…subject 112 can cause (or attempt to cause) the graphical elements 354, 356 to overlap”) (Examiner notes that the user’s hand will exhibit a particular position and movement in order to specifically select and move dots that are displayed in the virtual environment) during the one or more sensorimotor activities ([0223] “…during each of the eye tests described above…medical assembly 110 detects or senses a plurality of eye functions 358 and a plurality of head movements 360 relative to the physical environment 253”; [0219]) (Examiner notes that one of the tests “described above” is the “dissociating test”, which includes movement while holding a handheld controller to overlap dots);
generating, by the processor executing the software ([0009] “…one or more data storage devices storing a plurality of computer-readable instructions….instructions…executable by one or more processors…operatively coupled to a wearable device.”) using machine learning artificial intelligence computation ([0189] “AI module 396 can include a machine learning algorithm”), a user sensorimotor index from the data ([0227] – [0239], “sensed parameters 366”; as an example, [0227] “During the pursuit eye movements test, the medical assembly 110 is operable to receive or generate sensed parameters 366, including…a deviation between the path of eye movement and the path of the moving target displayed in the background image 296…any latency or delay in eye movement”) (Examiner notes that the sensorimotor index is described in Applicant’s specification at [0080] as “a numeric representation or score on a sensorimotor test, which is calculated from object data and the subject’s response data.” In accordance with this description, Zidan’s disclosed “sensed parameters 366” incorporate data components that are comparisons between sensor-gathered eye movement measurements relative to the displayed test environment.), wherein the sensorimotor index is a score based on the eye, head, and hand movement of the user ([0271] “sensed parameters 366”; [0240] “deviation 376 of the sensed parameter 366 compared to the associated benchmark parameter 372; and (f) a severity ranking, severity score or severity indicator 378 depending, at least in part, on the associated deviation 376.”) during the one or more sensorimotor activities ([0265] “the system logic 118 includes an artificial intelligence (AI) module 396”; [0271] “…the medical assembly 110 applies or executes various kinds of machine learning algorithms to identify or recognize data patterns derived from the sensed parameters 366 to produce examination outputs 127.”); and
determining, by the processor executing the software ([0009]), neurological impairment of the user by comparing the sensorimotor index with an expected value ([0251] “…compare the sensed parameter 366 of any eye characteristic category or any other health category with the historical parameter related to the same category…generate or indicate one or more abnormalities…process the subject health history data 389 to monitor any particular subject’s progression of any identified abnormality”; [0085] “term ‘abnormality’…encompasses…an abnormality, dysfunction, pathology, or impairment in any part, activity or condition of any anatomical part or system of the body”; [0105] “non-limiting list of symptoms related to or applicable to the use…medical assembly 110 or examination output 127”; [0111] “diplopia…neurologic disease…”).
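As a purely arithmetic illustration of the frame-rate evidence discussed above (a sampling rate in hertz is the number of measurements per second), the following sketch uses synthetic timestamps to show that a logged rate of approximately 230 Hz falls within the claimed range of 45 to 500 measurements per second:

```python
# Illustrative sketch only: synthetic timestamps stand in for logged sensor
# data. A sampling rate in hertz equals measurements per second, so the rate
# is recoverable from timestamps and checkable against the claimed range.
timestamps = [i / 230.0 for i in range(231)]  # ~230 Hz over one second

rate_hz = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
print(round(rate_hz))          # 230
print(45 <= rate_hz <= 500)    # True: within 45-500 measurements per second
```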
Zidan does not specifically disclose tracking position and movement of the hand of the user relative to other objects displayed in the virtual or augmented reality environment. However, Zidan does broadly disclose that handheld-controller hand interaction can occur during tests to provide inputs (like moving the dots in the “dissociating test” [0219]) with a handheld controller such as the Vive [0203] (which does motion tracking using an IMU, as evidenced by Baylor University at [Page 3, 1st Full Paragraph]: “HTC Vive… two components for tracking. The first is an inertial measurement unit (IMU), which is a combination of an accelerometer and gyroscope. The IMU is located on the neck of the controller.”).
However, for a more specific teaching of tracking the hand movement relative to other objects displayed in the virtual or augmented reality environment, Blaha teaches methods, systems, and computer program products to detect, assess, and treat vision disorders in subjects that incorporate VR and hand tracking [Col 6, Lines 10 – 49]. Specifically for Claim 1, Blaha teaches the hand tracking sensor for tracking position and movement of the hand of the user relative to other objects displayed in the virtual or augmented reality environment ([Col 6, Lines 40 – 43] “User input can be acquired with respect to the displayed images, in response to user's operation of an input device or by using sensors monitoring movement of the user’s eye, head, hands, or other body parts.”; [Col 6, Lines 26 – 29] “a head mountable virtual reality (VR) device that creates a virtual reality environment for a user wearing the VR device…”).
Zidan discloses virtual reality based tests for determining visual function, including using a handheld controller to move dots in a displayed virtual environment. Blaha also teaches virtual reality based tests for determining visual function, including using hand tracking for users to provide input during tests to interact with objects in a displayed virtual environment. Blaha provides a motivation to combine at [Col 8, Lines 18 – 22]: “The described system can also be equipped with gesture-recognition sensors such that no input device can be required and the user input can be received based on movements of the user’s hand(s)…” A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize that using a hand tracking sensor would allow a user to interact with the virtual reality environment directly, in an immersive way, using hand movement and gestures, which would be useful for researchers seeking more dynamic motor-associated data in sensorimotor testing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the Vive™ hand-held, hand-tracking-capable controller for executing dot movement in the sensorimotor vision tests disclosed in Zidan with the hand movement tracking relative to the presented virtual environment during sensorimotor tests as taught by Blaha, creating a single sensorimotor test-performing device that incorporates virtual environment hand orientation tracking into sensorimotor function testing, allowing for more comprehensive sensorimotor data informing the visual function assessment.
Regarding Claim 2, Zidan discloses, as described above, The method of claim 1. For the remainder of Claim 2, Zidan discloses wherein the sensorimotor activities ([0207] “eye tests”, Table 1) are selected from the group ([0207] “…the medical assembly 110 (whether including the wearable device 124, the medical headset 162 or the 3D medical headset 284) is configured and operable to conduct a plurality of eye tests”) consisting of:
smooth pursuit ([0208] “For a pursuit eye movements test, listed in Table 1 below…”);
convergence eye movement ([0215] “For a vergence eye test, listed in Table 1 below…”);
saccadic eye movement ([0214] “For a saccadic eye movements test, listed in Table 1 below…”);
peripheral visual acuity; object discrimination; gaze stability; head-eye coordination;
cervical neuromotor control; and
combinations thereof ([0207] “the medical assembly 110 (whether including the wearable device 124, the medical headset 162 or the 3D medical headset 284) is configured and operable to conduct a plurality of eye tests.”).
Regarding Claim 3, Zidan discloses, as described above, The method of claim 1. For the remainder of Claim 3, Zidan discloses
wherein the instructions are provided to the user as audio or visual instructions ([0257] “…the 3D medical headset 284 is activated to begin generating visual or audible instructions to the eyes of the subject 112”); ([0208] “The medical assembly 110 or health care provider audibly or visually prompts the subject 112 to use the subject’s eyes to follow the movement of the traveling stimulus 298”.)
Regarding Claim 4, Zidan discloses, as described above, The method of claim 1. For the remainder of Claim 4, Zidan discloses
wherein the data includes one or more of object data (Fig 1, “Control Data” 152; [0159] “processing of input/output signals and the generation of audio, visual, audiovisual, vibratory, tactile, and other outputs”), response data (Fig 1, “Sensed parameters” 366; [0159] “…derived from sensor signals generated by the wearable device”), or symptom data (Fig 1, “Abnormality data” 156; [0159] “…associated with different severities of eye abnormalities”).
Regarding Claim 5, Zidan discloses, as described above, The method of claim 1, wherein the placing of the user in a virtual or augmented reality environment. For the remainder of Claim 5, Zidan discloses
includes displaying a three-dimensional environment to the user through the head- mounted display ([0194] “the 3D medical headset 284 is configured and operable to generate a 3D visual effect 286 that provides a 3D viewing experience to users”).
Regarding Claim 6, Zidan discloses, as described above, The method of claim 1. For the remainder of Claim 6, Zidan discloses
wherein the head-mounted display includes a screen for each eye ([0032] “Fig 20 is a rear isometric view of an embodiment of a display unit of the medical headset of Fig 10 illustrating a plurality of display devices”; Fig 20, “left screen” 238, “right screen 236”; [0176] “…the right screen 236 is operable to generate an image A, for example, to the right eye 161 while the left screen 238 simultaneously generates an image B, for example, to the left eye 163.”)
Regarding Claims 7 and 9, Zidan discloses, as described above, The method of claim 1, wherein the step of determining by the processor executing the software ([0009]) neurological impairment of the user (All of [0264], see Claim 1; [0105]; [0111]; [0085]) comprises:
Specifically for the remainder of Claim 7, Zidan discloses assigning a test grade based upon the sensorimotor index (Fig 58, “benchmark parameter” 372, “percentile severity indicator” 374, “deviation” 376, “severity indicator” 378) and
determining if the user has a neurological impairment based upon the test grade (Figs 58 and 59; [0244] “Referring to Figs 58 – 59, for each possible diagnosis 382, the medical analysis data 158 includes one or more medical analysis factors 384 that correlate to such diagnosis 382; [0085]; [0105]; [0111]) (Examiner notes that the results tabulated in Figure 58 are used to determine the diagnoses in Figure 59. For example, for a diagnosis of “Disorder D1”, the Medical Analysis Factors used were the A2 Eye Abnormality in Category C2, and its associated Deviation of 95.2%. This is shown in Fig. 59 as “A2 D