DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/17/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 7, and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer et al (U.S. Pat # 11,995,232) in view of Kim et al (pub # 20210176383) further in view of Komogortsev (pub # 20170364732) further in view of Oami (pub # 20230214010) and further in view of Sztuk et al (pub # 20210173474).
Consider claim 1. Maschmeyer et al teaches A system (Fig. 1 and col. 3 line 54, system 300) comprising:
a controller having a memory; (Fig. 1, processors 310 and 334 and memories 312 and 336).
and program instructions, stored in the memory, (col. 5 lines 48-50, instructions stored in the memory 336. The instructions, when executed by the processor 334, cause the computing device 332 to perform the operations of the computing device 332).
that upon execution cause the system to perform operations comprising:
displaying, via a head-mountable display (HMD), a virtual reality or mixed reality (XR) environment, (Fig. 3 and the abstract, virtual reality and augmented reality).
wherein the XR environment comprises a plurality of virtual windows (Figs. 3 and 5 as well as col. 7 lines 20-29, UI 502 and painting 504).
with respective levels of visual transparency that are displayed at respective virtual distances in the XR environment; (Figs. 3 and 5, UI 502 and painting 504 displayed at two different distances d1 and d2 respectively. Also see at least col. 6 lines 48-49, UI 502 is semi-transparent).
receiving, from one or more eye-tracking sensors of the HMD, eye-tracking data during a time interval; determining, based on the processed eye-tracking data, a characteristic gaze depth during the time interval; (col. 7 lines 7-10, at step 404 eye tracking is performed to detect a gaze depth of a gaze of a human).
selecting at least one virtual window based on the characteristic gaze depth and eye-tracking data. (Fig. 9 and col. 10 lines 49-54, with reference to FIG. 9, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu, thus selecting the virtual window, UI 502, based on the gaze depth and eye-tracking data).
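Examiner’s note (illustrative only; not drawn from the cited reference, and all identifiers are hypothetical): the mapped selection step can be pictured as choosing the virtual window whose virtual distance best matches the characteristic gaze depth, as in the following minimal Python sketch.

    from dataclasses import dataclass

    @dataclass
    class VirtualWindow:
        name: str
        depth_m: float        # virtual distance from the viewer, in meters
        transparency: float   # 0.0 opaque .. 1.0 fully transparent

    def select_window(windows, characteristic_gaze_depth_m):
        """Pick the window whose virtual distance best matches the gaze depth."""
        return min(windows, key=lambda w: abs(w.depth_m - characteristic_gaze_depth_m))

    windows = [VirtualWindow("UI", 0.5, 0.3), VirtualWindow("painting", 3.0, 0.0)]
    assert select_window(windows, 0.6).name == "UI"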
Maschmeyer et al does not specifically disclose applying a noise-reduction model to the eye-tracking data so as to provide processed eye-tracking data. However Kim et al in at least Fig. 1B and paragraph 0040 discloses an eye tracking system 100B comprising an image filter 106 configured to remove noise, blurriness, haziness, or other types of interference in an image captured by the camera 104. This filtered image data is then sent to the eye tracking module 108. In at least paragraph 0081 Kim et al discloses that the image filter could be a neural network model, thus applying a noise-reduction model to the eye-tracking data so as to provide processed eye-tracking data. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noise-reduction method and system of Kim et al with the eye-tracking method and system of Maschmeyer et al in order to reduce noise on the eye-tracking data and provide a more accurate system and method of tracking the user’s eyes.
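Examiner’s note (illustrative only; Kim et al’s filter operates on camera images and may be a neural network, whereas this hypothetical sketch substitutes a simple moving-average filter on gaze samples to show the general shape of applying a noise-reduction model to eye-tracking data).

    import numpy as np

    def denoise_gaze(samples: np.ndarray, window: int = 5) -> np.ndarray:
        """Smooth an (N, 2) array of (x, y) gaze samples with a moving average."""
        kernel = np.ones(window) / window
        return np.column_stack(
            [np.convolve(samples[:, i], kernel, mode="same") for i in range(2)]
        )

    noisy = np.cumsum(np.random.randn(100, 2) * 0.1, axis=0)  # synthetic gaze track
    processed = denoise_gaze(noisy)  # the "processed eye-tracking data"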
Maschmeyer et al in view of Kim et al does not specifically disclose wherein the eye tracking data contains information indicative of one or more saccades; wherein the processed eye tracking data is filtered to reduce the information indicative of one or more saccades. However Komogortsev in at least paragraph 0063 discloses a classification algorithm for an eye-tracking system wherein “After the classification saccades with amplitudes smaller than 0.5° (microsaccades) may be filtered out”. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Komogortsev with the system and method of Maschmeyer et al in view of Kim et al in order to reduce the amount of noise in the recorded data (Komogortsev paragraph 0063).
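Examiner’s note (illustrative only; hypothetical names, not the reference’s code): the cited classify-then-filter idea can be sketched as a velocity-threshold classification followed by removal of saccade segments with amplitude below 0.5°.

    import numpy as np

    def filter_microsaccades(angles_deg, t_s, vel_thresh=30.0, min_amp_deg=0.5):
        """angles_deg: (N,) gaze angles in degrees; t_s: (N,) timestamps in seconds."""
        angles_deg = np.asarray(angles_deg, dtype=float)
        t_s = np.asarray(t_s, dtype=float)
        vel = np.abs(np.gradient(angles_deg, t_s))      # angular velocity, deg/s
        is_saccade = vel > vel_thresh                   # velocity-threshold classification
        keep = np.ones(len(angles_deg), dtype=bool)
        edges = np.flatnonzero(np.diff(is_saccade.astype(int)))
        bounds = np.concatenate(([0], edges + 1, [len(angles_deg)]))
        for a, b in zip(bounds[:-1], bounds[1:]):       # walk contiguous segments
            if is_saccade[a] and np.ptp(angles_deg[a:b]) < min_amp_deg:
                keep[a:b] = False                       # microsaccade: filter out
        return angles_deg[keep], t_s[keep]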
Maschmeyer et al in view of Kim et al and further in view of Komogortsev does not specifically disclose and eyelid openness. However Oami in at least paragraph 0090 discloses a system and method of detecting a degree of opening of a user’s eyelid. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Oami with the system and method of Maschmeyer et al in view of Kim et al and further in view of Komogortsev in order to improve the accuracy of iris matching of a subject who tends to cover and hide a part of the iris on the eyelid, the eyelashes or the like, such as a person whose eyes are narrow or a person whose upper eyelid position is lowered by ptosis (Oami paragraph 0080).
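Examiner’s note (illustrative only; a hypothetical landmark-based measure, not Oami’s disclosed method): a degree of eyelid opening is commonly approximated by an eye aspect ratio computed from eye landmarks.

    import numpy as np

    def eyelid_openness(landmarks: np.ndarray) -> float:
        """landmarks: (6, 2) eye points in the common 6-point layout:
        0, 3 = corners; 1, 2 = upper lid; 4, 5 = lower lid."""
        vertical = (np.linalg.norm(landmarks[1] - landmarks[5]) +
                    np.linalg.norm(landmarks[2] - landmarks[4])) / 2.0
        horizontal = np.linalg.norm(landmarks[0] - landmarks[3])
        return vertical / horizontal  # near 0 when closed, roughly 0.3 when open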
Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami does not specifically disclose modifying a shape of the at least one virtual window based on the characteristic gaze depth and eye-tracking data. However Sztuk et al in at least paragraph 0153 discloses a method of modifying the shape of a high-resolution display region based on the predicted eye tracking data and gaze location of the user. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Sztuk et al with the system and method of Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami in order to reduce the impact of system latency and prevent visual errors or disruptions for the user (Sztuk et al paragraph 0155).
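Examiner’s note (illustrative only; the scaling rule and names are hypothetical, not Sztuk et al’s implementation): modifying a region’s shape from eye-tracking data can be sketched as elongating an elliptical region along the direction of predicted gaze motion.

    import numpy as np

    def region_shape(gaze_xy, gaze_vel_xy, base_radius=0.1, gain=0.05):
        """Return (center, semi_axes, angle_rad) of an elliptical region
        stretched along the predicted direction of gaze motion."""
        speed = float(np.linalg.norm(gaze_vel_xy))
        angle = np.arctan2(gaze_vel_xy[1], gaze_vel_xy[0]) if speed > 0 else 0.0
        semi_axes = (base_radius + gain * speed, base_radius)  # stretch along motion
        return np.asarray(gaze_xy, dtype=float), semi_axes, angle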
Consider claim 7. Maschmeyer et al teaches A method comprising:
displaying, by a head-mountable display (HMD), (Fig. 1 and col. 3 lines 60-62, HMD 302).
a virtual reality or mixed reality (XR) environment, (Fig. 3 and the abstract, virtual reality and augmented reality).
wherein the XR environment comprises a plurality of virtual windows (Figs. 3 and 5 as well as col. 7 lines 20-29, UI 502 and painting 504).
with respective levels of visual transparency that are displayed at respective virtual distances in the XR environment; (Figs. 3 and 5, UI 502 and painting 504 displayed at two different distances d1 and d2 respectively. Also see at least col. 6 lines 48-49, UI 502 is semi-transparent).
receiving, from one or more eye-tracking sensors of the HMD, eye-tracking data during a time interval; determining, based on the processed eye-tracking data, a characteristic gaze depth during the time interval; (col. 7 lines 7-10, at step 404 eye tracking is performed to detect a gaze depth of a gaze of a human).
selecting at least one virtual window based on the characteristic gaze depth and eye-tracking data. (Fig. 9 and col. 10 lines 49-54, with reference to FIG. 9, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu, thus selecting the virtual window, UI 502, based on the gaze depth and eye-tracking data).
Maschmeyer et al does not specifically disclose applying a noise-reduction model to the eye-tracking data so as to provide processed eye-tracking data. However Kim et al in at least Fig. 1B and paragraph 0040 discloses an eye tracking system 100B comprising an image filter 106 configured to remove noise, blurriness, haziness, or other types of interference in an image captured by the camera 104. This filtered image data is then sent to the eye tracking module 108. In at least paragraph 0081 Kim et al discloses that the image filter could be a neural network model, thus applying a noise-reduction model to the eye-tracking data so as to provide processed eye-tracking data. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noise-reduction method and system of Kim et al with the eye-tracking method and system of Maschmeyer et al in order to reduce noise on the eye-tracking data and provide a more accurate system and method of tracking the user’s eyes.
Maschmeyer et al in view of Kim et al does not specifically disclose wherein the eye tracking data contains information indicative of one or more saccades; wherein the processed eye tracking data is filtered to reduce the information indicative of one or more saccades. However Komogortsev in at least paragraph 0063 discloses a classification algorithm for an eye-tracking system wherein “After the classification saccades with amplitudes smaller than 0.5° (microsaccades) may be filtered out”. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Komogortsev with the system and method of Maschmeyer et al in view of Kim et al in order to reduce the amount of noise in the recorded data (Komogortsev paragraph 0063).
Maschmeyer et al in view of Kim et al and further in view of Komogortsev does not specifically disclose and eyelid openness. However Oami in at least paragraph 0090 discloses a system and method of detecting a degree of opening of a user’s eyelid. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Oami with the system and method of Maschmeyer et al in view of Kim et al and further in view of Komogortsev in order to improve the accuracy of iris matching of a subject who tends to cover and hide a part of the iris on the eyelid, the eyelashes or the like, such as a person whose eyes are narrow or a person whose upper eyelid position is lowered by ptosis (Oami paragraph 0080).
Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami does not specifically disclose modifying a shape of the at least one virtual window based on the characteristic gaze depth and eye-tracking data. However Sztuk et al in at least paragraph 0153 discloses a method of modifying the shape of a high-resolution display region based on the predicted eye tracking data and gaze location of the user. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Sztuk et al with the system and method of Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami in order to reduce the impact of system latency and prevent visual errors or disruptions for the user (Sztuk et al paragraph 0155).
Consider claim 2. Maschmeyer et al further teaches The system of claim 1, wherein the operations further comprise: modifying a shape, a position, contents, or other characteristics of the at least one virtual window based on the characteristic gaze depth and gaze position data. (See at least Figs. 3 and 4 as well as col. 7 lines 37-39, the UI 502 may be overlaid on top of the painting 504 and provide information about the painting 504, thus changing a position of the window).
Consider claim 3. Maschmeyer et al further teaches The system of claim 1, wherein at least a portion of the controller is disposed within the HMD. (See at least Fig. 1, processor 310 disposed within HMD 302).
Consider claims 4 and 8. Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami does not specifically disclose wherein the noise-reduction model comprises: a machine learning (ML) model trained on prior gaze depth data, wherein the prior gaze depth data comprises customized gaze depth data from one or more individuals. However Sztuk et al in at least paragraph 0135 discloses a machine-learning model may be provided (e.g., in saccade filter 806 of eye prediction model 722) that has been trained (e.g., by (i) providing eye tracking data corresponding to known saccade movements for the user and/or one or more prior users of a device having eye tracking capabilities to a machine-learning model as input training data, (ii) providing the known saccade movements to the machine-learning model as output training data, and (iii) adjusting parameters of the model using the input training data and the output training data to generate a trained model) to output a predicted gaze location and a gaze location confidence level for a new input set of eye tracking data. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Sztuk et al with the system and method of Maschmeyer et al in view of Kim et al and further in view of Komogortsev and further in view of Oami so that the predicted gaze location and the gaze location confidence level may be generated (Sztuk et al paragraph 0135).
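Examiner’s note (illustrative only; a minimal linear model with synthetic data standing in for the reference’s trained model): the cited training recipe — pairs of input eye-tracking data and known outputs, with model parameters adjusted on those pairs — has the following general shape.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                      # input training data (eye features)
    true_W = rng.normal(size=(4, 2))
    Y = X @ true_W + 0.01 * rng.normal(size=(200, 2))  # output training data (gaze targets)

    W = np.zeros((4, 2))
    for _ in range(500):                               # adjust parameters on the pairs
        grad = 2 * X.T @ (X @ W - Y) / len(X)
        W -= 0.1 * grad

    predicted_gaze = X[:1] @ W                         # prediction for a new input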
Consider claim 5. Maschmeyer et al further teaches The system of claim 1, wherein determining the characteristic gaze depth is further based on information indicative of at least one of: gaze dwell time, gaze convergence, eyelid openness, blink detection, verbal commands, or physical controllers. (See at least Fig. 13 as well as col. 12 lines 54-59, FIG. 13 illustrates a gaze depth d1 at a depth corresponding to that of the UI 502. The gaze depth d1 is computed based on convergence of the vector 606a representing the gaze direction of the left eye and the vector 606b representing the gaze direction of the right eye).
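Examiner’s note (illustrative only; hypothetical helper, not code from the cited patent): computing a gaze depth from the convergence of the two gaze vectors amounts to triangulating the rays from the two eyes, e.g., taking the midpoint of their closest approach.

    import numpy as np

    def gaze_depth(origin_l, dir_l, origin_r, dir_r):
        """Depth (z) of the convergence point of the two gaze rays."""
        d1 = dir_l / np.linalg.norm(dir_l)
        d2 = dir_r / np.linalg.norm(dir_r)
        w0 = origin_l - origin_r
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                 # ~0 only when the rays are parallel
        s = (b * e - c * d) / denom           # parameter along the left ray
        t = (a * e - b * d) / denom           # parameter along the right ray
        p = (origin_l + s * d1 + origin_r + t * d2) / 2.0
        return p[2]

    left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
    # both eyes converging on a point 0.5 m ahead -> gaze depth of 0.5
    print(gaze_depth(left, np.array([0.03, 0.0, 0.5]), right, np.array([-0.03, 0.0, 0.5])))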
Consider claim 6. Maschmeyer et al further teaches The system of claim 1, wherein the eye-tracking data comprises gaze position data. (See at least Figs. 7-11 where the system is detecting the user’s eyes gazing on different subjects disposed in the VR/AR environment at different locations, thus position data).
Consider claim 9. Maschmeyer et al further teaches The method of claim 7, wherein displaying the XR environment comprises displaying at least one visual cue, wherein the visual cue comprises a strong visual cue, wherein the strong visual cue comprises: a central visual element with high contrast. (See at least Fig. 9, UI 502 displayed in center of user’s vision, thus a strong visual cue with high contrast).
Consider claim 10. Maschmeyer et al further teaches The method of claim 7, wherein displaying the XR environment comprises displaying at least one visual cue, wherein the visual cue comprises a weak visual cue, wherein the weak visual cue comprises: a visual element that is located at a center or along edges of a given virtual window. (See at least Figs. 7 and 8 where UI 502 is displayed at an edge of the user’s view, thus a weak visual cue).
Consider claim 11. Maschmeyer et al further teaches The method of claim 7, wherein displaying the XR environment comprises displaying at least one visual cue, wherein the visual cue is displayed based on one or more adjustable display settings. (Figs. 8 and 9 as well as col. 10 lines 54-58, When the human is finished reading the menu, the human may peer through the menu to view the room, which is at a greater gaze depth. In response, the menu disappears and, with reference to FIG. 10, the UI 502 changes and instead displays a smaller “learn more about this room” box; thus the displayed content changes depending on the user’s gaze, which reads on displaying the visual cue based on one or more adjustable display settings).
Consider claim 12. Maschmeyer et al further teaches The method of claim 7, wherein selecting the at least one virtual window comprises modifying at least one component of the at least one selected virtual window. (col. 10 lines 49-52, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu).
Consider claim 13. Maschmeyer et al further teaches The method of claim 7, wherein selecting the at least one virtual window comprises activating a detail layer based on the characteristic gaze depth. (col. 10 lines 49-52, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu, thus a detail layer).
Consider claim 14. Maschmeyer et al further teaches The method of claim 13, wherein activating the detail layer comprises displaying at least one visual cue. (col. 10 lines 49-52, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu, thus at least one visual cue).
Consider claim 15. Maschmeyer et al further teaches The method of claim 13, wherein activating the detail layer comprises: displaying, by the HMD, an expanded view of the at least one selected virtual window; (Figs. 8 and 9, an expanded menu of UI 502 is displayed).
and adjusting the visual transparency of the selected virtual window. (col. 7 lines 57-59, In embodiments in which the UI 502 is modified to be less visually prominent, the modifying may include at least one of: increasing transparency of the UI 502).
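Examiner’s note (illustrative only; the mapping is hypothetical, not the patent’s formula): adjusting a window’s visual transparency with gaze can be sketched as fading the window out as the gaze depth moves away from the window’s own depth.

    def adjusted_transparency(gaze_depth_m, window_depth_m, falloff_m=0.5):
        """Alpha in [0, 1]: 0 = opaque when focused at the window's depth,
        approaching 1 as the gaze depth moves away from it."""
        mismatch = abs(gaze_depth_m - window_depth_m)
        return min(1.0, mismatch / falloff_m)

    print(adjusted_transparency(3.0, 0.5))  # gaze deep in the room -> UI fully fades (1.0)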
Consider claim 16. Maschmeyer et al further teaches The method of claim 7, wherein the eye-tracking data comprises gaze position data. (See at least Figs. 7-11 where the system is detecting the user’s eyes gazing on different subjects disposed in the VR/AR environment at different locations, thus position data).
Claim(s) 17, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Josephson et al (pub # 20220270509) in view of Maschmeyer et al (U.S. Pat # 11,995,232) further in view of Komogortsev (pub # 20170364732) further in view of Oami (pub # 20230214010).
Consider claim 17. Josephson et al teaches A method comprising:
displaying, by a head-mounted display, a virtual reality or mixed reality (XR) training environment, (See at least paragraph 0428, generating a VR environment, an AR/MR/XR environment, or a VR and an AR/MR/XR environment for the training program).
and providing, by the XR training environment, a set of pre-determined training tasks over a pre-determined time period, (paragraph 0429, The methods also include generating hot spots for one, some or all aspects, features, attributes, properties, and/or characteristics associated with the training environment in a hot spot generating step 1116, and populating the environment with the generated hot spots in a populating step 1118. The hot spots may be associated with (a) specific aspects, features, attributes, properties, and/or characteristics of the training program or (b) may be general information hot spots associated with global environmental aspects, features, attributes, properties, and/or characteristics or (c) aspects, features, attributes, properties, and/or characteristics of training program routines and/or training program tasks).
Josephson et al does not specifically disclose wherein the XR training environment comprises a plurality of virtual windows with respective levels of visual transparency, and wherein each of the virtual windows in the plurality of virtual windows comprises a visual cue of a differing visual transparency
wherein providing each pre-determined training task comprises: displaying one or more virtual windows with a visual cue that is activated when gaze depth data indicate that the visual cue is being focused on.
However Maschmeyer et al does disclose wherein the XR training environment comprises a plurality of virtual windows (Figs. 3 and 5 as well as col. 7 lines 20-29, UI 502 and painting 504).
with respective levels of visual transparency (see at least col. 6 lines 48-49, UI 502 is semi-transparent).
and wherein each of the virtual windows in the plurality of virtual windows comprises a visual cue of a differing visual transparency (col. 7 lines 57-59, In embodiments in which the UI 502 is modified to be less visually prominent, the modifying may include at least one of: increasing transparency of the UI 502).
wherein providing each pre-determined training task comprises: displaying one or more virtual windows with a visual cue that is activated when gaze depth data indicate that the visual cue is being focused on. (col. 10 lines 49-52, based on the gaze depth of the human's gaze it is determined that the human is no longer peering deep into the room but is instead viewing the UI 502 in front of the view. In response, the UI 502 changes to display different content in the form of a menu).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Maschmeyer et al with the system and method of Josephson et al in order to improve user experience by allowing the user to focus more clearly on what they are looking at.
Josephson et al in view of Maschmeyer et al does not specifically disclose wherein the gaze depth data is based on processed eye tracking data from a noise-reduction model, and wherein the noise-reduction model filters eye tracking data to reduce information indicative of one or more saccades. However Komogortsev in at least paragraph 0063 discloses a classification algorithm for an eye-tracking system wherein “After the classification saccades with amplitudes smaller than 0.5° (microsaccades) may be filtered out”. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Komogortsev with the system and method of Josephson et al in view of Maschmeyer et al in order to reduce the amount of noise in the recorded data (Komogortsev paragraph 0063).
Josephson et al in view of Maschmeyer et al and further in view of Komogortsev does not specifically disclose eyelid openness. However Oami in at least paragraph 0090 discloses a system and method of detecting a degree of opening of a user’s eyelid. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Oami with the system and method of Josephson et al in view of Maschmeyer et al and further in view of Komogortsev in order to improve the accuracy of iris matching of a subject who tends to cover and hide a part of the iris on the eyelid, the eyelashes or the like, such as a person whose eyes are narrow or a person whose upper eyelid position is lowered by ptosis (Oami paragraph 0080).
Consider claim 18. Josephson et al further teaches The method of claim 17, further comprising: providing feedback and performance information on completed training tasks, (paragraph 0440, The methods also include providing feedback to the trainee including deficient task performance data in a providing trainee feedback).
and adjusting one or more aspects of the training environment based on the feedback and performance information. (paragraph 0441, if the trainee does not pass the training program or any routine or task or aspect of the training program, then the methods proceed along a NO branch to the collecting/capturing step 1146 so that the trainee may repeat the entire program or deficient routines or tasks or aspects thereof and repeating all intermediate steps).
Consider claim 20. Maschmeyer et al further teaches The method of claim 17, wherein the visual cue has an adaptive level of visual transparency that is based on a pre-determined depth range that is larger than confines of the one or more virtual windows. (col. 7 lines 57-59, In embodiments in which the UI 502 is modified to be less visually prominent, the modifying may include at least one of: increasing transparency of the UI 502).
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Josephson et al (pub # 20220270509) in view of Maschmeyer et al (U.S. Pat # 11,995,232) and further in view of Komogortsev (pub # 20170364732) further in view of Oami (pub # 20230214010) as applied to claim 18 above, and further in view of Sztuk et al (pub # 20210173474).
Consider claim 19. Josephson et al in view of Maschmeyer et al and further in view of Komogortsev further in view of Oami does not specifically disclose wherein the feedback and performance information is used to train a machine-learning (ML) model, wherein adjusting the one or more aspects of the training environment is based on the trained ML model. However Sztuk et al in at least paragraph 0135 discloses a machine-learning model may be provided (e.g., in saccade filter 806 of eye prediction model 722) that has been trained (e.g., by (i) providing eye tracking data corresponding to known saccade movements for the user and/or one or more prior users of a device having eye tracking capabilities to a machine-learning model as input training data, (ii) providing the known saccade movements to the machine-learning model as output training data, and (iii) adjusting parameters of the model using the input training data and the output training data to generate a trained model) to output a predicted gaze location and a gaze location confidence level for a new input set of eye tracking data. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and method of Sztuk et al with the system and method of Josephson et al in view of Maschmeyer et al and further in view of Komogortsev further in view of Oami so that the predicted gaze location and the gaze location confidence level may be generated (Sztuk et al paragraph 0135).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAYCE R BIBBEE whose telephone number is (571)270-7222. The examiner can normally be reached Mon-Thurs 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAYCE R BIBBEE/Examiner, Art Unit 2624