DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 8/29/2025, 11/26/2024 and 12/18/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 1, the claim recites “measuring functional visual capabilities of a user” (line 1) and “…quantifying a functional visual capability of the user…” (lines 13-14). However, the claim does not specify any visual process or function of the user, nor does it recite any component, sensor, or parameter related to such a visual process or function. It is therefore unclear how the functional visual capability is obtained without performing any visual measuring or monitoring process.
Claims 2-7 are rejected as containing the deficiencies of claim 1 through their dependency from claim 1.
Claim 8 has the same indefiniteness issue as claim 1.
Claims 9-14 are rejected as containing the deficiencies of claim 8 through their dependency from claim 8.
Claim 15 has the same indefiniteness issue as claim 1.
Claims 16-20 are rejected as containing the deficiencies of claim 15 through their dependency from claim 15.
Further, regarding claim 1, the recited limitation “to map a first set of coordinates representing movements in the virtual reality environment directed by the user” (lines 9-10) is vague and renders the claim indefinite. The claim recites many entities, and it is unclear to which of them the recited “movements” refers: the plurality of virtual objects, the sensors, the head-mountable display, or the user.
Claims 2-7 are rejected as containing the deficiencies of claim 1 through their dependency from claim 1.
Claim 9 has the same indefiniteness issue with respect to “movements” (line 3) as claim 1.
Claims 10-11 and 13 are rejected as containing the deficiencies of claim 9 through their dependency from claim 9.
Claim 15 has the same indefiniteness issue with respect to “movements” (line 9) as claim 1.
Claims 16-20 are rejected as containing the deficiencies of claim 15 through their dependency from claim 15.
Therefore, appropriate amendments are required to clarify the scope of the claims and overcome the rejections.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Leung et al. (US 2017/0273552).
Regarding Claim 1, Leung teaches a method for measuring functional visual capabilities of a user through a virtual reality environment (abstract; figs. 1 and 7; also see claim 1), the method comprising:
identifying a task to be executed in the virtual reality environment (fig. 7, 702; ¶[0042], line 1-8, The virtual reality simulation can be, for example, navigating in a busy city area, walking up or down one or more flights of stairs, driving a vehicle, and locating one or more objects of interest, etc.; ¶[0045], line 12-14, --VR objects related to the event, and/or number of correctly located objects of interest in the virtual reality simulation), the virtual reality environment is displayed by a head-mountable display (fig. 1, 110; fig. 2, 210; fig. 7, 704), and
where display of the virtual reality environment includes at least one optical setting that is dynamically modified during execution of the task (¶[0043], line 1-5, at least one of a contrast level or a brightness level of the virtual reality simulation being displayed on the head-mounted display can be adjusted to simulate different lighting conditions),
facilitating execution of the task, wherein execution of the task includes displaying a plurality of virtual objects in the display of the virtual reality environment by the head-mountable display (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
obtaining, during execution of the task, a set of sensor data from a set of sensors (fig. 7, 706, sensor system; ¶[0040], lines 1-12, Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
processing the set of sensor data to map a first set of coordinates representing movements in the virtual reality environment directed by the user with a second set of coordinates specifying locations of the plurality of virtual objects in the virtual reality environment (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment);
deriving a first performance metric based on the mapped coordinates (fig. 7, 708; ¶[0045], line 1-14, performance scores based on the voluntary and the involuntary responses of the user to the virtual reality simulation can be computed); and
generating an output based on the first performance metric, the output quantifying a functional visual capability of the user with dynamically modified optical settings in the virtual reality environment (fig. 7, 710; ¶[0045], line 1-14, visual disability metrics can be determined based on the performance scores. In some embodiments, the visual disability metrics can be determined based on one or more measurements recorded from the virtual reality simulation; speed of collisions with virtual reality objects in the virtual reality simulation, size, color and/or contrast of VR objects related to the event; ¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
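Examiner's illustrative note: neither the claim nor the Leung reference discloses source code, so the following is only a minimal sketch of the recited coordinate mapping and first performance metric; every function name, threshold, and data shape below is a hypothetical assumption, not a teaching of Leung.

```python
import math

# Illustrative sketch only; all names, thresholds, and data shapes are
# assumptions, not disclosures of the Leung reference or of the claims.
HIT_RADIUS = 0.15  # assumed selection tolerance, in VR-space meters

def map_coordinates(user_points, object_locations):
    """Pair user-directed coordinates (first set) with virtual-object
    locations (second set) whenever they fall within HIT_RADIUS."""
    mapped = []
    for t, point in user_points:                      # (timestamp, (x, y, z))
        for obj_id, loc in object_locations.items():  # {id: (x, y, z)}
            if math.dist(point, loc) <= HIT_RADIUS:
                mapped.append((obj_id, t))
    return mapped

def first_performance_metric(mapped, n_objects, duration_s):
    """Combine objects located and task duration, cf. Leung ¶[0040]."""
    located = len({obj_id for obj_id, _ in mapped})
    return (located / n_objects) * (60.0 / max(duration_s, 1.0))
```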
Regarding Claim 2, Leung teaches the method of claim 1, wherein at least one optical setting is dynamically modified from a first setting to a second setting during execution of the task, the optical setting including any of a light intensity setting, a virtual object contrast setting, and a dynamically-modified luminance setting (¶[0043], line 1-5, at least one of a contrast level or a brightness level of the virtual reality simulation being displayed on the head-mounted display can be adjusted to simulate different lighting conditions; ¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
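For illustration only, a dynamically modified optical setting of the kind addressed by ¶[0043] of Leung could be scheduled as below; the phase structure and numeric values are assumptions, not teachings of the reference.

```python
from dataclasses import dataclass

@dataclass
class OpticalSettings:
    brightness: float  # assumed scale: 0.0 (dark) to 1.0 (bright)
    contrast: float    # assumed scale: 0.0 (flat) to 1.0 (full)

def settings_for_phase(phase: int) -> OpticalSettings:
    """Step the display through simulated lighting conditions mid-task."""
    schedule = [
        OpticalSettings(brightness=1.0, contrast=1.0),  # daylight
        OpticalSettings(brightness=0.4, contrast=0.7),  # dusk
        OpticalSettings(brightness=0.1, contrast=0.4),  # night
    ]
    return schedule[min(phase, len(schedule) - 1)]
```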
Regarding Claim 3, Leung teaches the method of claim 1, further comprising:
processing the set of sensor data to derive spatial movements of the head-mountable display during execution of the task, the spatial movements indicating head movements of the user when interacting with the virtual objects (fig. 2, 210, 220, 222; ¶[0031], line 1-17, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs),
generating a second performance metric based on the derived spatial movements; and updating the output to represent both the first performance metric and the second performance metric, the output quantifying the functional visual capability of the user and the spatial movements of the user interacting with the virtual objects (fig. 7, 710; ¶[0045], lines 1-14, visual disability metrics can be determined based on the performance scores. In some embodiments, the visual disability metrics can be determined based on one or more measurements recorded from the virtual reality simulation; speed of collisions with virtual reality objects in the virtual reality simulation, size, color and/or contrast of VR objects related to the event; page 7, claim 2, wherein the real life activity being simulated includes at least one of navigating in a city area, walking up or down one or more flights of stairs, driving a vehicle, or locating one or more objects of interest from a shelf).
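Again for illustration only (the orientation-sample format and all names are assumptions, not disclosures of Leung), the spatial-movement derivation and the updated two-metric output discussed above might be sketched as:

```python
def second_performance_metric(orientation_samples):
    """Total angular head travel in degrees from successive HMD
    (yaw, pitch) samples; the sample format is an assumption."""
    total = 0.0
    for (y0, p0), (y1, p1) in zip(orientation_samples, orientation_samples[1:]):
        total += abs(y1 - y0) + abs(p1 - p0)
    return total

def combined_output(first_metric, second_metric):
    """Output representing both metrics, per the limitation of claim 3."""
    return {"functional_visual_capability": first_metric,
            "head_movement_deg": second_metric}
```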
Regarding Claim 4, Leung teaches the method of claim 1, wherein the set of sensors include:
eye tracking sensors disposed in the head-mountable display and configured to track eye movements of the user; base station sensors disposed in the head-mountable display and configured to identify spatial movements of the head-mountable display; and hand controller sensors configured to track hand movements of the user and/or a triggering event (fig. 2, 210, 220, 222; ¶[0031], line 1-17, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs; ¶[0044], line 1-8, voluntary and involuntary responses of the user can be monitored and recorded via a sensor system during the virtual reality simulation. For example, motion sensors can be used to sense one or more of eye motion, head motion, limb motion, or body motion of the user).
Regarding Claim 5, Leung teaches the method of claim 1, wherein the task is identified from a set of tasks, each task of the set of tasks relates to a particular optical condition relating to the user (¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions; ¶[0042], line 1-8, The virtual reality simulation can be, for example, navigating in a busy city area, walking up or down one or more flights of stairs, driving a vehicle, and locating one or more objects of interest, etc.).
Regarding Claim 6, Leung teaches the method of claim 1, wherein the task comprises an object selection task, wherein the object selection task maps movements by the user in the virtual reality environment display to locations comprising virtual objects to identify each virtual object (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. The subject uses a controller or hand and body gestures detected by motion sensors to locate the targeted object. The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses).
Regarding Claim 7, Leung teaches the method of claim 1, wherein the task comprises an object interaction task, wherein the object interaction task maps movements by the user in the virtual reality environment display to select a location of virtual objects moving toward a position of the user in the virtual reality environment display (¶[0038], line 1-19, The subject navigates in the VR environment by changing head and/or body orientation and the navigation speed can be adjusted with a controller or a motion detector of the lower limbs. The subject directly interacts with the VR environment and the VR graphics change in response to the subject's responses).
Regarding Claim 8, Leung teaches a virtual environment system (abstract; figs. 1 and 7; also see claim 1) comprising:
a head-mountable display configured to display a virtual reality environment (fig. 1, 110; fig. 2, 210; fig. 7, 704); and a computing device comprising:
one or more data processors; and a non-transitory computer-readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform a method (page 7, claim 1, a visual disability detection method implemented by computer, or a processor executing non-transitory computer readable storage medium comprising code; instructing the user to perform a task; computing performance scores) comprising:
identifying a task to be executed in the virtual reality environment by the head-mountable display (fig. 1, 110; fig. 2, 210; fig. 7, 702, 704; ¶[0042], line 1-8, The virtual reality simulation can be, for example, navigating in a busy city area, walking up or down one or more flights of stairs, driving a vehicle, and locating one or more objects of interest, etc.; ¶[0045], line 12-14, --VR objects related to the event, and/or number of correctly located objects of interest in the virtual reality simulation);
facilitating execution of the task, wherein execution of the task includes displaying a plurality of virtual objects in the display of the virtual reality environment by the head-mountable display (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
obtaining, during execution of the task, a set of sensor data from a set of sensors (fig. 7, 706, sensor system; ¶[0040], line 1-12, Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
processing the set of sensor data to identify a subset of the plurality of virtual objects interacted with by a user and a time of interacting with each of the subset of the plurality of virtual objects (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment; ¶[0040], line 1-12, The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses. The duration required to complete the task and the number of correctly located items are used to generate a visual performance score for measurement of visual disability);
deriving a first performance metric based on the subset of the plurality of virtual objects interacted with by the user (fig. 7, 708; ¶[0045], line 1-14, performance scores based on the voluntary and the involuntary responses of the user to the virtual reality simulation can be computed) and the time of interacting with each of the subset of the plurality of virtual objects (¶[0040], line 1-12, The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses. The duration required to complete the task and the number of correctly located items are used to generate a visual performance score for measurement of visual disability); and
generating an output based on the first performance metric, the output quantifying a functional visual capability of the user with dynamically modified optical settings in the virtual reality environment (fig. 7, 710; ¶[0045], line 1-14, visual disability metrics can be determined based on the performance scores. In some embodiments, the visual disability metrics can be determined based on one or more measurements recorded from the virtual reality simulation; speed of collisions with virtual reality objects in the virtual reality simulation, size, color and/or contrast of VR objects related to the event; ¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
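For illustration only, the claim 8 step of identifying the interacted-with subset of virtual objects and the time of each interaction could be sketched as follows; the sample-stream format is an assumption:

```python
def interacted_subset(interaction_samples):
    """From (timestamp, object_id or None) samples, return the subset of
    objects interacted with and each object's first interaction time."""
    first_seen = {}
    for t, obj_id in interaction_samples:
        if obj_id is not None and obj_id not in first_seen:
            first_seen[obj_id] = t
    return first_seen
```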
Regarding Claim 9, Leung teaches the virtual environment system of claim 8, wherein processing the set of sensor data to identify the subset of the virtual objects interacted with by the user further comprises: mapping a first set of coordinates representing movements in the virtual reality environment directed by the user with a second set of coordinates specifying locations of the plurality of virtual objects in the virtual reality environment (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment).
Regarding Claim 10, Leung teaches the virtual environment system of claim 9, wherein the method further comprises:
detecting a trigger action at hand controller sensors configured to track hand movements of the user (fig. 3, 220/222, 210, 230; ¶[0031], line 1-19, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs; the movement detected by the motion sensors 222 can be used to control the movement in the virtual reality simulations),
the trigger action indicating an identification of one of the plurality of virtual objects (¶[0045], line 12-14, --VR objects related to the event, and/or number of correctly located objects of interest in the virtual reality simulation),
wherein processing the set of sensor data to identify the subset of the plurality of virtual objects interacted with by the user includes both mapping the first set of coordinates with the second set of coordinates and detecting the trigger action (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment).
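As a further illustration (names and shapes assumed; Leung ¶[0031] describes controller-detected responses but discloses no code), requiring both the coordinate mapping and a hand-controller trigger action could be sketched as:

```python
import math

def triggered_selections(trigger_events, object_locations, hit_radius=0.15):
    """Count a selection only when the hand-controller trigger fires while
    the controller coordinate overlaps an object; all shapes assumed."""
    selected = []
    for t, point in trigger_events:                  # (timestamp, (x, y, z))
        for obj_id, loc in object_locations.items():
            if math.dist(point, loc) <= hit_radius:
                selected.append((obj_id, t))
    return selected
```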
Regarding Claim 11, Leung teaches the virtual environment system of claim 10, wherein the method further comprises:
eye tracking sensors disposed in the head-mountable display and configured to track eye movements of the user; and base station sensors disposed in the head-mountable display and configured to track spatial movements of the head-mountable display, wherein the eye tracking sensors and base station sensors are configured to acquire the set of sensor data (fig. 2, 210, 220, 222; ¶[0031], line 1-17, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs; ¶[0044], line 1-8, voluntary and involuntary responses of the user can be monitored and recorded via a sensor system during the virtual reality simulation. For example, motion sensors can be used to sense one or more of eye motion, head motion, limb motion, or body motion of the user).
Regarding Claim 12, Leung teaches the virtual environment system of claim 8, wherein the task comprises an object selection task, wherein the object selection task maps movements by the user in the virtual reality environment display to locations comprising virtual objects of a specified virtual object type to identify each virtual object of the specified virtual object type within a scene comprising the plurality of virtual objects (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. The subject uses a controller or hand and body gestures detected by motion sensors to locate the targeted object. The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses).
Regarding Claim 13, Leung teaches the virtual environment system of claim 9, wherein the task comprises an object interaction task, wherein the object interaction task maps movements by the user in the virtual reality environment display as matching a location of the virtual objects in the virtual reality environment display as specified in the second set of coordinates (¶[0038], line 1-19, The subject navigates in the VR environment by changing head and/or body orientation and the navigation speed can be adjusted with a controller or a motion detector of the lower limbs. The subject directly interacts with the VR environment and the VR graphics change in response to the subject's responses).
Regarding Claim 14, Leung teaches the virtual environment system of claim 8, wherein the method further comprises:
processing the set of sensor data to derive spatial movements of the head-mountable display during execution of the task, the spatial movements indicating head movements of the user to interact with the virtual objects during the task (fig. 2, 210, 220, 222; ¶[0031], line 1-17, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs);
generating a second performance metric based on the derived spatial movements; and updating the output to represent both the first performance metric and the second performance metric, the output quantifying the functional visual capability of the user and the spatial movements of the user interacting with the virtual objects (fig. 7, 710; ¶[0045], lines 1-14, visual disability metrics can be determined based on the performance scores. In some embodiments, the visual disability metrics can be determined based on one or more measurements recorded from the virtual reality simulation; speed of collisions with virtual reality objects in the virtual reality simulation, size, color and/or contrast of VR objects related to the event; page 7, claim 2, wherein the real life activity being simulated includes at least one of navigating in a city area, walking up or down one or more flights of stairs, driving a vehicle, or locating one or more objects of interest from a shelf).
Regarding Claim 15, Leung teaches a computer-implemented method (fig. 1, 110; fig. 2, 210; fig. 7, 704; and also see claim 1 on page 7) comprising:
identifying a task to be executed in a virtual reality environment (fig. 7, 702; ¶[0042], line 1-8, The virtual reality simulation can be, for example, navigating in a busy city area, walking up or down one or more flights of stairs, driving a vehicle, and locating one or more objects of interest, etc.; ¶[0045], line 12-14, --VR objects related to the event, and/or number of correctly located objects of interest in the virtual reality simulation), where the virtual reality environment is configured to be displayed in a head-mountable display (fig. 1, 110; fig. 2, 210; fig. 7, 704), and
where the display of the virtual reality environment includes at least one optical setting that is dynamically modified during execution of the task (¶[0043], line 1-5, at least one of a contrast level or a brightness level of the virtual reality simulation being displayed on the head-mounted display can be adjusted to simulate different lighting conditions),
facilitating execution of the task, wherein execution of the task includes displaying a plurality of virtual objects in the display of the virtual reality environment by the head-mountable display (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
obtaining, during execution of the task, a set of sensor data from a set of sensors (fig. 7, 706, sensor system; ¶[0040], lines 1-12, Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD);
processing the set of sensor data to map a first set of coordinates representing movements in the virtual reality environment directed by a user with a second set of coordinates specifying locations of the plurality of virtual objects in the virtual reality environment (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment);
deriving a first performance metric based on the mapped coordinates (fig. 7, 708; ¶[0045], line 1-14, performance scores based on the voluntary and the involuntary responses of the user to the virtual reality simulation can be computed);
processing the set of sensor data to derive spatial movements of the head-mountable display during execution of the task, the spatial movements indicating head movements of the user to interact with the virtual objects during the task (fig. 3, PROGRAM COMPUTATION; INPUT TO COMPUTING DEVICES; PROGRAM OUTPUT; INTERACTION BETWEEN SUBJECT AND VR ENVIRONMENT; SUBJECT RESPONSES; RECORDING OF RESPONSE; -- movements of subject and movements of VR environment; ¶[0040], line 1-12, The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses. The duration required to complete the task and the number of correctly located items are used to generate a visual performance score for measurement of visual disability);
deriving a second performance metric based on the derived spatial movements (fig. 7, 708; ¶[0040], line 1-12, The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses. The duration required to complete the task and the number of correctly located items are used to generate a visual performance score for measurement of visual disability); and
generating an output based on the first performance metric and the second performance metric, the output quantifying a functional visual capability of the user to interact with the virtual objects with dynamically modified optical settings in the virtual reality environment (fig. 7, 710; ¶[0045], line 1-14, visual disability metrics can be determined based on the performance scores. In some embodiments, the visual disability metrics can be determined based on one or more measurements recorded from the virtual reality simulation; speed of collisions with virtual reality objects in the virtual reality simulation, size, color and/or contrast of VR objects related to the event; ¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
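For illustration only (the tuple shape is an assumption), an output quantifying the user's capability under the dynamically modified settings of claim 15 could report both metrics per optical-setting phase:

```python
def per_phase_report(phase_results):
    """phase_results: list of (brightness, contrast, metric1, metric2)
    tuples, one per optical-setting phase; the shape is an assumption."""
    return [{"brightness": b, "contrast": c,
             "first_metric": m1, "second_metric": m2}
            for b, c, m1, m2 in phase_results]
```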
Regarding Claim 16, Leung teaches the computer-implemented method of claim 15, wherein the at least one optical setting is dynamically modified from a first setting to a second setting during execution of the task, the optical setting including any of a light intensity setting, a virtual object contrast setting, a dynamically-modified luminance setting, a number of the plurality of virtual objects displayed in the virtual reality environment, a trajectory of movement of the plurality of virtual objects displayed in the virtual reality environment, and locations of the plurality of virtual objects in the virtual reality environment (¶[0043], line 1-5, at least one of a contrast level or a brightness level of the virtual reality simulation being displayed on the head-mounted display can be adjusted to simulate different lighting conditions; ¶[0035], line 1-6, visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
Regarding Claim 17, Leung teaches the computer-implemented method of claim 15, wherein the set of sensors include:
eye tracking sensors disposed in the head-mountable display and configured to track eye movements of the user; base station sensors disposed in the head-mountable display and configured to track spatial movements of the head-mountable display; and hand controller sensors configured to track hand movements of the user (fig. 2, 210, 220, 222; ¶[0031], line 1-17, Sensor system 220 may include motion sensors 222 such as gyroscope, accelerometer, and/or magnetometer to sense the voluntary responses of the user. These responses may include, for example, orientation and movement of the head, the body trunk such as chest or waist, the eyeball, and the upper and lower limbs; ¶[0044], line 1-8, voluntary and involuntary responses of the user can be monitored and recorded via a sensor system during the virtual reality simulation. For example, motion sensors can be used to sense one or more of eye motion, head motion, limb motion, or body motion of the user).
Regarding Claim 18, Leung teaches the computer-implemented method of claim 15, wherein the task is identified from a set of tasks, each task of the set of tasks relates to a particular optical condition relating to the user (¶[0035], line 1-6, In any of the VR environments, because visual performance and visual disability may vary with the lighting conditions of the environment, the virtual reality simulation can be administered in different brightness and contrast levels, simulating different lighting conditions).
Regarding Claim 19, Leung teaches the computer-implemented method of claim 15, wherein the task comprises an object selection task, wherein the object selection task maps movements by the user in the virtual reality environment display to locations comprising virtual objects to identify each virtual object (¶[0040], line 1-12, the subject is required to locate objects of interest (e.g. a book, a bottle, a pin, etc.) from a shelf or a container containing mixtures of objects. The subject uses a controller or hand and body gestures detected by motion sensors to locate the targeted object. The subject directly interacts with the VR environment and the VR graphics would change in response to the subject's responses).
Regarding Claim 20, Leung teaches the computer-implemented method of claim 15, wherein the task comprises an object interaction task, wherein the object interaction task maps movements by the user in the virtual reality environment display as avoiding the location of the virtual objects in the virtual reality environment display (¶[0038], line 1-19, The subject navigates in the VR environment by changing head and/or body orientation and the navigation speed can be adjusted with a controller or a motion detector of the lower limbs. The subject directly interacts with the VR environment and the VR graphics change in response to the subject's responses; ¶[0039], line 1-28, The subject is required to drive from location A to location B without colliding with any objects in the VR environment. Head and/or body motion data are measured and monitored in real-time during VR simulation with motion sensors in the HMD. The subject can tum a wheel controller to change the direction of navigation and the navigation speed can be changed with an accelerator and a brake controller).
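Finally, for illustration only (names and the time-aligned track format are assumptions), the avoidance mapping of claim 20, cf. the driving task of Leung ¶[0039], could be sketched as a collision check between the user's track and the moving-object tracks:

```python
import math

def collision_events(user_track, object_tracks, radius=0.5):
    """Flag time steps where the user's position comes within `radius`
    of any moving object; tracks are time-aligned lists of 3-D points."""
    events = []
    for i, upos in enumerate(user_track):
        for obj_id, track in object_tracks.items():
            if math.dist(upos, track[i]) < radius:
                events.append((i, obj_id))
    return events
```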
Examiner’s Note
Regarding the references, the Examiner cites particular figures, paragraphs, columns and line numbers in the reference(s), as applied to the claims above. Although the particular citations are representative teachings and are applied to specific limitations within the claims, other passages, internally cited references, and figures may also apply. In preparing a response, it is respectfully requested that the Applicant fully consider the references, in their entirety, as potentially disclosing or teaching all or part of the claimed invention, as well as fully consider the context of the passage as taught by the reference(s) or as disclosed by the Examiner.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jie Lei, whose telephone number is (571) 272-7231. The examiner can normally be reached Mon.-Thurs., 8:00 am to 5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Pham, can be reached at (571) 272-3689. The fax number for the organization where this application is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Services Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/JIE LEI/Primary Examiner, Art Unit 2872