DETAILED ACTION
1. This Office action is in response to U.S. Patent Application No. 18/897,630, filed on 10/9/2024, with an effective filing date of 7/29/2021. Claims 2-21 are pending.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claims 2-4, 6-9, 11-16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. US 2021/0081047 A1 (IDS) in view of Schaefer US 11047693 B1 (IDS).
Per claims 2 & 8, Wang et al. discloses one or more non-transitory computer-readable media storing computer-readable instructions that, when executed by at least one processor, cause the at least one processor to execute operations comprising: capturing image data from a first forward-looking camera disposed on a head-worn device, wherein the image data comprises a forward-image with a field of view (para: 26 & 127, e.g. the providing 1010 of the visual passthrough of the environment may be performed passively as an optical passthrough or actively as a video passthrough. In the case of the optical passthrough, the user is able to view the environment directly, such as through a transparent lens); determining, based on data from a sensor on the head-worn device, a gaze direction of a user of the device (para: 46, e.g. the sensors 122 of the head-mounted display 100 monitor conditions of the environment and/or the user. Those sensors 122 that monitor the environment may include, but are not limited to, one or more outward-facing cameras 122a, one or more depth sensors 122b, one or more ultrasonic sensors 122c, one or more position sensors 122d); identifying a region of interest of the user, wherein the region of interest is identified based at least in part on the gaze direction of the user (para: 102, e.g. the detecting 720 of the environmental feature of interests (e.g., an object and/or the events of the environment) is performed by the controller 116 or other processing apparatus according to the observing).
Wang et al. fails to explicitly disclose the remaining claim limitation.
Schaefer, however, in the same field of endeavor teaches providing an indication that the field of view of the first forward-looking camera is in misalignment with the region of interest, wherein the misalignment with the region of interest is determined based at least in part on a comparison of the region of interest to the field of view (col. 4, lines 49-56, e.g. (a) Provide a tool to quickly guide a person's gaze to a direction of interest, (b) In particular, guide a person with tunnel vision to be able to see an object of interest, (c) Provide effective audio and haptic guidance to efficiently guide the user's head toward the target, (d) Provide effective audio guidance toward the target even with a single-ear headset. Audio and visual feedback are also taught in col. 12, line 64 to col. 14, line 2).
Therefore, in view of the disclosures of Schaefer, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang et al. and Schaefer in order to provide a tool to quickly guide a person's gaze to a direction of interest, to guide a person with tunnel vision to be able to see an object of interest, and to provide effective audio and haptic guidance to efficiently guide the user's head toward the target.
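Examiner's note (for illustration only): the following minimal sketch is hypothetical and is not drawn from Wang et al., Schaefer, or the instant application; all names, angles, and thresholds are assumed. It shows one way the recited comparison of a gaze-derived region of interest to a camera field of view could be used to determine, and indicate, misalignment.

```python
# Hypothetical sketch: flag misalignment between a gaze-derived region of
# interest (ROI) and a forward-looking camera's field of view (FOV).
from dataclasses import dataclass

@dataclass
class FieldOfView:
    center_yaw_deg: float     # camera boresight yaw, degrees
    center_pitch_deg: float   # camera boresight pitch, degrees
    half_width_deg: float     # half of the horizontal FOV
    half_height_deg: float    # half of the vertical FOV

def region_misaligned(gaze_yaw_deg: float, gaze_pitch_deg: float,
                      fov: FieldOfView, roi_radius_deg: float = 2.0) -> bool:
    """True if the ROI (centered on the gaze direction) extends outside the FOV."""
    return (abs(gaze_yaw_deg - fov.center_yaw_deg) + roi_radius_deg > fov.half_width_deg
            or abs(gaze_pitch_deg - fov.center_pitch_deg) + roi_radius_deg > fov.half_height_deg)

if __name__ == "__main__":
    fov = FieldOfView(0.0, 0.0, half_width_deg=30.0, half_height_deg=20.0)
    if region_misaligned(gaze_yaw_deg=35.0, gaze_pitch_deg=5.0, fov=fov):
        print("Indicate to the user: region of interest is outside the camera FOV")
```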
Per claims 3, 9 & 16, Schaefer further teaches the one or more non-transitory computer-readable media storing computer-readable instructions of claim 2 that, when executed by the at least one processor, cause the processor to execute operations further comprising: identifying the gaze direction of the user based at least in part on a position of a pupil of the user (col. 8, lines 41-45, e.g. if the user moves about in space, the direction to a target of interest is likely to change, particularly if the target object is located near the user).
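Examiner's note (for illustration only): a minimal hypothetical sketch of deriving a gaze direction from a tracked pupil position via a linear calibration; the gains and the normalized coordinate convention are assumed, not taken from the cited references.

```python
# Hypothetical sketch: map a normalized pupil offset (in [-1, 1] from the
# eye-camera image center) to a gaze direction in degrees.
def gaze_from_pupil(pupil_x: float, pupil_y: float,
                    yaw_gain_deg: float = 40.0,
                    pitch_gain_deg: float = 30.0) -> tuple[float, float]:
    return pupil_x * yaw_gain_deg, pupil_y * pitch_gain_deg

yaw, pitch = gaze_from_pupil(0.25, -0.1)  # -> (10.0, -3.0) degrees
```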
Per claims 4 & 13, Schaefer further teaches the one or more non-transitory computer-readable media storing computer-readable instructions of claim 2 that, when executed by the at least one processor, cause the processor to execute operations further comprising: centering the region of interest on an object of interest, wherein the object of interest is determined based at least in part on the gaze direction (col. 15, lines 17-27, e.g. Face—invoked by tapping on Face button 502, which causes the app to orient toward, for example, turn the gaze direction toward, the direction of a target of interest. The preferred embodiment allows the user to scroll through recently-marked targets, access a target by a speech or text label, or to scroll through nearby targets. (c) Go to—invoked by tapping on Goto button 503, which causes the app to give feedback to orient toward the target of interest and continue updating the direction as the user proceeds toward the target).
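Examiner's note (for illustration only): a hypothetical sketch of centering the region of interest on an object of interest selected from the gaze direction; the angular object representation is assumed.

```python
# Hypothetical sketch: snap the ROI center to the detected object whose
# angular position lies closest to the current gaze direction.
import math

def center_roi_on_object(gaze_yaw: float, gaze_pitch: float,
                         objects: list[tuple[float, float]]) -> tuple[float, float]:
    return min(objects, key=lambda o: math.hypot(o[0] - gaze_yaw, o[1] - gaze_pitch))

roi_center = center_roi_on_object(9.0, 1.5, [(10.0, 2.0), (-20.0, 0.0)])  # -> (10.0, 2.0)
```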
Per claim 6, Schaefer further teaches the one or more non-transitory computer-readable media storing computer-readable instructions of claim 2 that, when executed by the at least one processor, cause the processor to execute operations further comprising: adjusting a position and/or an orientation of the first forward-looking camera to at least partially correct the misalignment with the region of interest (col. 4, lines 49-56, e.g. (a) Provide a tool to quickly guide a person's gaze to a direction of interest, (b) In particular, guide a person with tunnel vision to be able to see an object of interest, (c) Provide effective audio and haptic guidance to efficiently guide the user's head toward the target, (d) Provide effective audio guidance toward the target even with a single-ear headset. Audio and visual feedback are also taught in col. 12, line 64 to col. 14, line 2).
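Examiner's note (for illustration only): a hypothetical sketch of computing a pan/tilt correction that steers the forward-looking camera toward the region of interest; the per-step actuator limit is assumed.

```python
# Hypothetical sketch: corrective pan/tilt step toward the ROI, clamped to a
# per-step actuator limit so the camera converges over successive updates.
def correction_step(roi_yaw: float, roi_pitch: float,
                    cam_yaw: float, cam_pitch: float,
                    max_step_deg: float = 5.0) -> tuple[float, float]:
    def clamp(v: float) -> float:
        return max(-max_step_deg, min(max_step_deg, v))
    return clamp(roi_yaw - cam_yaw), clamp(roi_pitch - cam_pitch)

d_yaw, d_pitch = correction_step(12.0, -3.0, 0.0, 0.0)  # -> (5.0, -3.0)
```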
Per claim 7, Wang et al. further teaches the one or more non-transitory computer-readable media storing computer-readable instructions of claim 2, wherein the indication comprises an instruction to the user to move the first forward-looking camera (para: 34 & 103, e.g. the location and/or directionality of the haptic output pattern of the environment haptic output may be determined according to the location of the environmental feature of interest, such as the haptic output pattern including the front, right, left, or rear haptic output devices 118 corresponding to the object and/or the event being in front, left, right, or rear of the user. The frequency and/or strength of the haptic output pattern may be determined according to the proximity of the user to the maneuver and/or the speed of the user).
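Examiner's note (for illustration only): a hypothetical sketch of the front/right/left/rear haptic selection and proximity-scaled strength that Wang et al. describes; the bearing thresholds and intensity scaling are assumed.

```python
# Hypothetical sketch: pick a haptic output device by the bearing of the
# feature of interest relative to the user, with intensity rising as the
# feature gets closer (cf. Wang et al., para. 34 & 103).
def haptic_pattern(bearing_deg: float, distance_m: float,
                   max_range_m: float = 10.0) -> tuple[str, float]:
    bearing = bearing_deg % 360.0
    if bearing < 45 or bearing >= 315:
        device = "front"
    elif bearing < 135:
        device = "right"
    elif bearing < 225:
        device = "rear"
    else:
        device = "left"
    intensity = max(0.0, 1.0 - distance_m / max_range_m)
    return device, intensity

print(haptic_pattern(bearing_deg=100.0, distance_m=2.5))  # ('right', 0.75)
```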
Per claims 11 & 18, Wang et al. further teaches the method of claim 10, further comprising: instructing the user to move the first forward-looking camera (para: 34 & 103, e.g. the location and/or directionality of the haptic output pattern of the environment haptic output may be determined according to the location of the environmental feature of interest, such as the haptic output pattern including the front, right, left, or rear haptic output devices 118 corresponding to the object and/or the event being in front, left, right, or rear of the user).
Per claims 12 & 19, Wang et al. further teaches the method of claim 8, further comprising: adjusting a position and/or an orientation of the first forward-looking camera to at least partially correct the misalignment with the region of interest (para: 34 & 103, e.g. the location and/or directionality of the haptic output pattern of the environment haptic output may be determined according to the location of the environmental feature of interest, such as the haptic output pattern including the front, right, left, or rear haptic output devices 118 corresponding to the object and/or the event being in front, left, right, or rear of the user. The frequency and/or strength of the haptic output pattern may be determined according to the proximity of the user to the maneuver and/or the speed of the user).
Per claims 14 & 21, Wang et al. further teaches the method of claim 8, further comprising: displaying, with a display disposed on an eyepiece of the device, the forward-image to the user (para: 46, e.g. the sensors 122 of the head-mounted display 100 monitor conditions of the environment and/or the user).
Per claim 15, Wang et al. discloses a head-worn device comprising: a first forward-looking camera disposed on the head-worn device and configured to capture image data that comprises a forward-image with a field of view (para: 46, e.g. the sensors 122 of the head-mounted display 100 monitor conditions of the environment and/or the user. Those sensors 122 that monitor the environment may include, but are not limited to, one or more outward-facing cameras 122a, one or more depth sensors 122b, one or more ultrasonic sensors 122c, one or more position sensors 122d); an eye tracking mechanism comprising a sensing device disposed on the head-worn device, the eye tracking mechanism configured to determine a gaze direction of a user (para: 26 & 127, e.g. the providing 1010 of the visual passthrough of the environment may be performed passively as an optical passthrough or actively as a video passthrough. In the case of the optical passthrough, the user is able to view the environment directly, such as through a transparent lens); a processor configured to identify a region of interest of the user, wherein the region of interest is identified based at least in part on the gaze direction of the user (para: 102, e.g. the detecting 720 of the environmental feature of interests (e.g., an object and/or the events of the environment) is performed by the controller 116 or other processing apparatus according to the observing).
Wang et al. fails to explicitly disclose the remaining claim limitation.
Schaefer, however, in the same field of endeavor teaches an interface device configured to indicate to the user that a field of view of the first forward-looking camera is in misalignment with the region of interest, wherein the misalignment with the region of interest is determined based at least in part on a comparison of the region of interest to the field of view (col. 4, lines 49-56, e.g. (a) Provide a tool to quickly guide a person's gaze to a direction of interest, (b) In particular, guide a person with tunnel vision to be able to see an object of interest, (c) Provide effective audio and haptic guidance to efficiently guide the user's head toward the target, (d) Provide effective audio guidance toward the target even with a single-ear headset. Audio and visual feedback are also taught in col. 12, line 64 to col. 14, line 2).
Therefore, in view of the disclosures of Schaefer, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang et al. and Schaefer in order to provide a tool to quickly guide a person's gaze to a direction of interest, to guide a person with tunnel vision to be able to see an object of interest, and to provide effective audio and haptic guidance to efficiently guide the user's head toward the target.
Per claim 20, Schaefer further teaches the head-worn device of claim 15, wherein the region of interest is centered on an object of interest (col 15, line 17-27, e.g. Face—invoked by tapping on Face button 502, which causes the app to orient toward, for example, turn the gaze direction toward, the direction of a target of interest).
Allowable Subject Matter
5. Claims 5, 10 & 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jorasch et al. US 11,128,636 B1, e.g. In accordance with some embodiments, systems, apparatus, interfaces, methods, and articles of manufacture are provided for ascertaining aspects of a user, such as the user's identity, competence, health, and state of mind. In various embodiments, data is captured about a user via a headset worn by the user. Based on the data, a determination may be made about an aspect of the user, and the user may accordingly be granted or denied access to a resource.
Davami US 2018/0157045 A1, e.g. A system and method for ocular stabilization of video images is disclosed. While capturing video images in a forward field of view with a forward-facing video camera of a wearable head-mountable device (HMD), binocular eye-gaze directions of left and right eyes of a user of the HMD may be obtained with an eye-tracking device of the HMD. Based on the obtained binocular eye-gaze directions of left and right eyes of the user of the HMD, convergent gaze directions of the user may be determined as a function of time during an interval concurrent with the capturing of the video images. The captured video images may then be stabilized by compensating for motion of the forward-facing video camera with an intersection of the convergent gaze directions of the user with an image plane of the forward-facing video camera.
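For illustration only, a minimal hypothetical sketch of the stabilization concept Davami describes, shifting each frame so that the intersection of the convergent gaze directions with the image plane stays at a fixed reference point; np.roll stands in for the warping and cropping a real system would perform, and all names and values are assumed.

```python
# Hypothetical sketch: hold the gaze/image-plane intersection point fixed by
# shifting each captured frame (a stand-in for true motion-compensating warps).
import numpy as np

def stabilize(frame: np.ndarray, gaze_px: tuple[int, int],
              ref_px: tuple[int, int]) -> np.ndarray:
    dy, dx = ref_px[1] - gaze_px[1], ref_px[0] - gaze_px[0]
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

frame = np.zeros((480, 640, 3), dtype=np.uint8)
stable = stabilize(frame, gaze_px=(330, 250), ref_px=(320, 240))
```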
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB whose telephone number is (571)270-7325. The examiner can normally be reached Mon-Th 9AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Irfan Habib/Examiner, Art Unit 2485