Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-14 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 9-11, 17, 18, and 20 of U.S. Patent No. 11,614,797. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-14 of the instant application are broader than claims 1, 2, 9-11, 17, 18, and 20 of U.S. Patent No. 11,614,797.
Claims 1-14 of the instant application
Claims 1, 2, 9-11, 17, 18, and 20 of U.S. Patent No. 11,614,797
1. A device, comprising:
a display configured to provide a graphical user interface (GUI) to a user;
a camera configured to capture at least one of a pupil location of the user or eye movement of the user; and
a processor configured to:
output a user interface element of the GUI to the display, the user interface element indicative of a setting level of one or more parameters of the GUI;
identify at least one of a visual focal point of the user or an eye gesture of the user based on at least one of the captured pupil location or the eye movement; and
control the setting level of the user interface element shown on the display based at least partially on at least one of the visual focal point or the eye gesture of the user, wherein a tactile feedback is provided to the user via at least a portion of the device that is worn by the user.
2. The device of claim 1, wherein the user interface element shown on the display relates to an auditory user interface.
4. The device of claim 1, wherein the user interface element shown on the display relates to a visual user interface.
1. An apparatus, comprising:
a computing device;
a display, the display connected to the computing device, the display configured to provide a graphical user interface (GUI);
a camera connected to at least one of the computing device, the display, or a combination thereof, the camera configured to capture pupil location of a user;
and a processor in the computing device, configured to:
identify a visual focal point of the user relative to the GUI based on the captured pupil location and based at least partially on a viewing angle of at least one pupil of the user relative to the GUI;
control one or more parameters of the display, the GUI, or a combination thereof based at least partially on the identified visual focal point of the user;
control distribution of at least one of screen coloring, screen brightness, screen shading, or screen contrast of the display based at least partially on the identified visual focal point of the user;
and control a parameter of an auditory user interface shown on the display and based at least partially on the identified visual focal point of the user.
3. The device of claim 1, wherein the user interface element shown on the display relates to a tactile user interface.
20. An apparatus, comprising: a computing device with a display configured to provide a graphical user interface (GUI); a camera in or attached to the computing device, configured to capture pupil location and eye movement of a user; and a processor in the computing device, configured to: identify a visual focal point of the user relative to a screen of the display based on the captured pupil location and based at least partially on a viewing angle of at least one pupil of the user relative to the GUI; identify a type of eye movement based on the captured eye movement of the user; control one or more parameters of the display, the GUI, or a combination thereof based on the identified type of eye movement, the identified visual focal point, or a combination thereof; control distribution of at least one of screen coloring, screen brightness, screen shading, or screen contrast of the display based at least partially on the identified type of eye movement, the identified visual focal point, or a combination thereof; and control a parameter of a tactile user interface shown on the display and based at least partially on the identified visual focal point of the user.
7. The device of claim 1, wherein the processor is further configured to identify the visual focal point of the user based at least partially on a distance of the at least one pupil of the user relative to the GUI.
2. The apparatus of claim 1, wherein the processor is further configured to identify the visual focal point of the user based at least partially on a distance of the at least one pupil of the user relative to the GUI.
8. The device of claim 1, wherein the processor is configured to control at least one of:
skew or angle of output of the display based at least partially on the identified visual focal point of the user;
distribution of screen resolution of the display based at least partially on the identified visual focal point of the user;
distribution of screen coloring of the display based at least partially on the identified visual focal point of the user;
distribution of screen brightness of the display based at least partially on the identified visual focal point of the user;
distribution of screen shading of the display based at least partially on the identified visual focal point of the user; or
distribution of screen contrast of the display based at least partially on the identified visual focal point of the user.
11. The apparatus of claim 9, wherein the processor is configured to further change skew or angle of output of the display based at least partially on the identified saccade.
9. The device of claim 1, wherein the camera is configured to: capture eye movement of the user; and wherein the processor is configured to: identify a saccade based on the captured eye movement of the user; and further control the one or more parameters of the user interface element shown on the display based at least partially on the identified saccade.
9. The apparatus of claim 1, wherein the camera is configured to: capture eye movement of the user; and wherein the processor is configured to: identify a saccade based on the captured eye movement of the user; and further control the one or more parameters of the display, the GUI, or a combination thereof based at least partially on the identified saccade.
10. The device of claim 9, wherein the processor is configured to:
approximate eye movement of the user from one focal point to another focal point according to the identified saccade; and
further control the one or more parameters of the user interface element shown on the display based at least partially on the identified approximated eye movement.
10. The apparatus of claim 9, wherein the processor is configured to: approximate eye movement of the user from one focal point to another focal point according to the identified saccade; and further control the one or more parameters of the display, the GUI, or a combination thereof based at least partially on the identified approximated eye movement.
11. The device of claim 1, further comprising a wearable structure, the wearable structure comprising or connected to at least one of the display, the camera, the processor, or a combination thereof.
16. The apparatus of claim 1, comprising a wearable structure, the wearable structure comprising or connected to at least one of the computing device, the display, the camera, or a combination thereof.
12. The device of claim 11, wherein the wearable structure is configured to be worn on a head, around a neck, or around a wrist or forearm of the user.
17. The apparatus of claim 16, wherein the wearable structure is configured to be worn on a head, around a neck, or around a wrist or forearm of the user.
13. The device of claim 12, wherein the wearable structure comprises either a cap, a wristband, a neck strap, a necklace, or a contact lens.
18. The apparatus of claim 17, wherein the wearable structure comprises either a cap, a wristband, a neck strap, a necklace, or a contact lens.
14. A wearable computing device, comprising: a display configured to provide a graphical user interface (GUI) to a user, wherein the display is head-mountable such that the display stays visible to the user while the user moves or while the user’s head moves; a camera configured to capture at least one of a pupil location of the user or eye movement of the user; and a processor configured to: output a user interface element of the GUI to the display, the user interface element indicative of a setting level of one or more parameters of the GUI, wherein the user interface element shown on the display relates to an equilibria user interface; identify a saccade of the user based on at least one of the captured pupil location or the eye movement; and control the setting level of the user interface element shown on the display based at least partially on at least one of the visual focal point or the eye gesture of the user, wherein the user interface element shown on the display changes as the one or more parameters is controlled based on the at least one of the visual focal point or the eye gesture of the user, wherein the wearable computing device is wearable by a human user of the wearable computing device.
1. An apparatus, comprising: a computing device; a display, the display connected to the computing device, the display configured to provide a graphical user interface (GUI); a camera connected to at least one of the computing device, the display, or a combination thereof, the camera configured to capture pupil location of a user; and a processor in the computing device, configured to: identify a visual focal point of the user relative to the GUI based on the captured pupil location and based at least partially on a viewing angle of at least one pupil of the user relative to the GUI; control one or more parameters of the display, the GUI, or a combination thereof based at least partially on the identified visual focal point of the user; control distribution of at least one of screen coloring, screen brightness, screen shading, or screen contrast of the display based at least partially on the identified visual focal point of the user; and control a parameter of an auditory user interface shown on the display and based at least partially on the identified visual focal point of the user.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Note: To better show what is and is not taught by the references, the examiner underlines some words. Underlined words indicate teachings of the cited reference and may not specifically be claimed.
Claims 1-4, 11, 12, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Mayama et al. (US20170219833, Mayama).
As to claim 1:
Mulcahy shows a device, comprising:
a display (e.g., television 2234) configured to provide a graphical user interface (GUI) (e.g., spokes menu 808) to a user (¶ [0048], [0147]);
a camera (e.g., camera 134b) configured to capture at least one of a pupil location of the user or eye movement of the user (e.g., including an eye tracking mechanism) (¶ [0062]);
and a processor configured to:
output a user interface element of the GUI to the display, the user interface element indicative of a setting level of one or more parameters of the GUI (e.g., output a spoke display menu to the television, the spoke display menu used to increase or decrease volume) (¶ [0048], [0147], [0164]);
identify at least one of a visual focal point of the user or an eye gesture of the user based on at least one of the captured pupil location or the eye movement (fig. 7 and associated description);
and control the setting level of the user interface element shown on the display based at least partially on at least one of the visual focal point or the eye gesture of the user (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.).
Mulcahy fails to specifically show: wherein a tactile feedback is provided to the user via at least a portion of the device that is worn by the user.
In the same field of invention, Mayama teaches a head-mounted display (HMD) used while moving or exercising. Mayama further teaches: a tactile feedback is provided to the user via at least a portion of the device that is worn by the user (¶ [0104]) (e.g., If it takes a time to achieve focusing, the main unit of the display unit or the body part shakes during that time).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy and Mayama before the effective filing date of the invention, to have combined the teachings of Mayama with the device as taught by Mulcahy.
One would have been motivated to make such combination because a way to indicate to a user that the focusing is not going well would have been obtained and desired, as expressly taught by Mayama (¶ [0104]).
As to claim 2, Mulcahy further shows:
wherein the user interface element shown on the display relates to an auditory user interface (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.).
As to claim 3, Mulcahy further shows:
wherein the user interface element shown on the display relates to a tactile user interface (¶ [0166]) (e.g., the system 100 determines whether an additional select event occurs while the user's eye gaze is near the menu item 810 (or other selection spot). For example, the user could use a voice command to select, use a hand motion, touch a region of the HMD 2 (e.g., temple), etc.).
As to claim 4, Mulcahy further shows:
wherein the user interface element shown on the display relates to a visual user interface (fig. 7 and associated description).
As to claim 11, Mulcahy further shows:
further comprising a wearable structure (fig. 3), the wearable structure comprising or connected to at least one of the display (fig. 3, el. 120), the camera (fig. 3, el. 137), the processor (fig. 3, el. 136), or a combination thereof.
As to claim 12, Mulcahy further shows:
wherein the wearable structure is configured to be worn on a head (fig. 1, el. 2; fig. 3), around a neck, or around a wrist or forearm of the user.
As to claim 21, Mulcahy further shows:
The device of claim 1, wherein the processor is further configured to change how the user interface element is displayed on the display based at least partially on the visual focal point or the eye gesture of the user (¶ [0147]) (e.g., as the user rotates their head such that a vector that "shoots straight out" from between their eyes moves from the hub 816 to the right towards the "louder" menu item, the spoke 812 is progressively filled; NOTE: fig. 11B shows that the spoke menu 812 also can become empty in association with a user rotating the head in the left direction, with the effect of the action “Quieter” being executed).
As to claim 22, Mayama further teaches:
The device of claim 1, wherein the tactile feedback comprises a vibration (¶ [0104]) (e.g., If it takes a time to achieve focusing, the main unit of the display unit or the body part shakes during that time).
One would have been motivated to make such combination because a way to indicate to a user that the focusing is not going well would have been obtained and desired, as expressly taught by Mayama (¶ [0104]).
Claims 5, 6 are rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Mayama et al. (US20170219833, Mayama), further in view of Miller, III (US20210141453, Miller).
As to claims 5, 6:
Mulcahy, Mayama show a device substantially as claimed, as specified above.
Mulcahy, Mayama fail to specifically show: wherein the user interface element shown on the display relates to a gustatory user interface; wherein the user interface element shown on the display relates to an equilibria user interface.
In the same field of invention, Miller teaches: system for gaze interaction. Miller further teaches: wherein the user interface element shown on the display relates to a gustatory user interface; wherein the user interface element shown on the display relates to an equilibria user interface (¶ [0061]) (e.g., the system 100 may provide interface actions 142 to a user 105 via an auditory, cutaneous, kinesthetic, olfactory, and gustatory display, or any combination thereof).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, and Miller before the effective filing date of the invention, to have combined the teachings of Miller with the device as taught by Mulcahy and Mayama.
One would have been motivated to make such combination because a way to automatically carry out an interface action 142 on the user's 105 behalf once an interface action 142 is assigned to a user 105 by the processor would have been obtained and desired, as expressly taught by Miller (¶ [0061]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Mayama et al. (US20170219833, Mayama), further in view of Lemelson et al. (US20020105482, Lemelson).
As to claim 7:
Mulcahy, Mayama show a device substantially as claimed, as specified above.
Mulcahy, Mayama fail to specifically show:
wherein the processor is further configured to identify the visual focal point of the user based at least partially on a distance of the at least one pupil of the user relative to the GUI.
In the same field of invention, Lemelson teaches a system and method for controlling automatic scrolling. Lemelson further teaches:
wherein the processor is further configured to identify the visual focal point of the user based at least partially on a distance of the at least one pupil of the user relative to the GUI (¶ [0087], [0080]) (e.g., range distance measurements are used to calculate the screen gaze coordinates; a vector CB 180 as a screen gaze direction from the user’s pupil to the screen; gimbaled sensor system 16 also includes a distance range finder 88 to find the distance from which the user's head 18 is to the gimbaled sensor system).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, and Lemelson before the effective filing date of the invention, to have combined the teachings of Lemelson with the device as taught by Mulcahy and Mayama.
One would have been motivated to make such combination because a way to manipulate a display of information when the user is in an environment which requires simultaneous use of hands for other purposes would have been obtained and desired, as expressly taught by Lemelson (¶ [0003]).
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Mayama et al. (US20170219833, Mayama), further in view of Eash et al. (US20180284451, Eash).
As to claim 8:
Mulcahy, Mayama show a device substantially as claimed, as specified above.
Mulcahy, Mayama fail to specifically show:
wherein the processor is configured to control at least one of: skew or angle of output of the display based at least partially on the identified visual focal point of the user; distribution of screen resolution of the display based at least partially on the identified visual focal point of the user; distribution of screen coloring of the display based at least partially on the identified visual focal point of the user; distribution of screen brightness of the display based at least partially on the identified visual focal point of the user; distribution of screen shading of the display based at least partially on the identified visual focal point of the user; or distribution of screen contrast of the display based at least partially on the identified visual focal point of the user.
In the same field of invention, Eash teaches: steerable high-resolution display. Eash further teaches, wherein the processor is configured to control at least one of:
skew or angle of output of the display based at least partially on the identified visual focal point of the user (¶ [0064]) (e.g., the adjustable positioning elements 226, 236 are used to adjust the foveal display 220, 230 to position the foveal image to be directed primarily toward the center of the field of view of the user's eye. In one embodiment, the direction of the image is adjusted by changing the angle of a mirror, one of the position elements 226, 236. In one embodiment, the angle of the mirror is changed by using electromagnetic forces. In one embodiment, the angle of the mirror is changed by using electrostatic forces);
distribution of screen resolution of the display based at least partially on the identified visual focal point of the user (¶ [0072]) (e.g., Cut-out logic 250 defines the location of the foveal display 220, 230 and provides the display information with the cut-out to the associated field display 280. The field display 280 renders this data to generate the lower resolution field display image including the cut-out of the corresponding portion of the image in the field display);
distribution of screen coloring of the display based at least partially on the identified visual focal point of the user (¶ [0077]) (e.g., the roll-off may be designed to roll off into “nothing,” that is gradually decreased from the full brightness/contrast to gray or black or environmental colors);
distribution of screen brightness of the display based at least partially on the identified visual focal point of the user (¶ [0077]) (e.g., the roll-off may be designed to roll off into “nothing,” that is gradually decreased from the full brightness/contrast to gray or black or environmental colors);
distribution of screen shading of the display based at least partially on the identified visual focal point of the user (¶ [0077]) (e.g., the roll-off may be designed to roll off into “nothing,” that is gradually decreased from the full brightness/contrast to gray or black or environmental colors);
or distribution of screen contrast of the display based at least partially on the identified visual focal point of the user (¶ [0077]) (e.g., the roll-off may be designed to roll off into “nothing,” that is gradually decreased from the full brightness/contrast to gray or black or environmental colors).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, Eash before the effective filing date of the invention, to have combined the teachings of Eash with the device as taught by Mulcahy, Mayama.
One would have been motivated to make such combination because a way to enable new applications for augmented and virtual reality systems would have been obtained and desired, as expressly taught by Eash (¶ [0003]).
As to claim 9:
Mulcahy, Mayama show a device substantially as claimed, as specified above.
Mulcahy further shows: wherein the camera is configured to: capture eye movement of the user (e.g., a camera and an associated eye tracking mechanism) (¶ [0062]).
Mulcahy, Mayama fail to specifically show: wherein the processor is configured to: identify a saccade based on the captured eye movement of the user; and further control the one or more parameters of the user interface element shown on the display based at least partially on the identified saccade.
In the same field of invention, Eash teaches: steerable high-resolution display. Eash further teaches:
The device of claim 1, wherein the camera is configured to:
capture eye movement of the user;
and wherein the processor is configured to:
identify a saccade based on the captured eye movement of the user;
and further control the one or more parameters of the user interface element shown on the display based at least partially on the identified saccade (¶ [0133]) (e.g., the location of the foveal region of the user's field of view is determined 2015. At block 2020, the user's eye movement is classified. FIG. 21 illustrates some exemplary eye movements that may be identified. The eye movements include fixated, blinking, micro-saccade, slow pursuit, and fast movement/saccade).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, Eash before the effective filing date of the invention, to have combined the teachings of Eash with the device as taught by Mulcahy, Mayama.
One would have been motivated to make such combination because a way to enable new applications for augmented and virtual reality systems would have been obtained and desired, as expressly taught by Eash (¶ [0003]).
As to claim 10:
Mulcahy, Mayama show a device substantially as claimed, as specified above.
Mulcahy, Mayama fail to specifically show:
wherein the processor is configured to: approximate eye movement of the user from one focal point to another focal point according to the identified saccade; and further control the one or more parameters of the user interface element shown on the display based at least partially on the identified approximated eye movement.
In the same field of invention, Eash teaches: steerable high-resolution display. Eash further shows:
The device of claim 9, wherein the processor is configured to:
approximate eye movement of the user from one focal point to another focal point according to the identified saccade (¶ [0133]) (e.g., the location of the foveal region of the user's field of view is determined 2015. At block 2020, the user's eye movement is classified. FIG. 21 illustrates some exemplary eye movements that may be identified. The eye movements include fixated, blinking, micro-saccade, slow pursuit, and fast movement/saccade).
and further control the one or more parameters of the user interface element shown on the display based at least partially on the identified approximated eye movement (¶ [0077]) (e.g., the roll-off may be designed to roll off into “nothing,” that is gradually decreased from the full brightness/contrast to gray or black or environmental colors).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, and Eash before the effective filing date of the invention, to have combined the teachings of Eash with the device as taught by Mulcahy and Mayama.
One would have been motivated to make such combination because a way to enable new applications for augmented and virtual reality systems would have been obtained and desired, as expressly taught by Eash (¶ [0003]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Mayama et al. (US20170219833, Mayama), further in view of Eash et al. (US20180284451, Eash), further in view of George-Svahn et al. (US20160109947, GS).
As to claim 13:
Mulcahy, Mayama, Eash show a device substantially as claimed, as specified above.
Mulcahy, Mayama, Eash fail to specifically show: wherein the wearable structure comprises either a cap, a wristband, a neck strap, a necklace, or a contact lens.
In the same field of invention, GS teaches a system for gaze interaction. GS further teaches: wherein the wearable structure comprises either a cap, a wristband, a neck strap, a necklace, or a contact lens (¶ [0073]) (e.g., the gaze tracking module and the information presentation area are implemented in a wearable head mounted display that may be designed to look as a pair of glasses (such as the solution described in U.S. Pat. No. 8,235,529). The user input means may include a gyro and be adapted to be worn on a wrist, hand or at least one finger.).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Mayama, Eash, and GS before the effective filing date of the invention, to have combined the teachings of GS with the device as taught by Mulcahy, Mayama, and Eash.
One would have been motivated to make such combination because a way to detect movements representing gesture data that may then wirelessly be communicated to the glasses would have been obtained and desired, as expressly taught by GS (¶ [0073]).
Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Eash et al. (US20180284451, Eash), further in view of Fateh (US20160131908).
As to claim 14:
Mulcahy shows a wearable computing device (HMD, fig. 1, el. 2), comprising:
a display (fig. 3, el. 120) configured to provide a graphical user interface (GUI) to a user (e.g., virtual image being displayed on micro display 120) (¶ [0068]),
wherein the display is head-mountable such that the display stays visible to the user while the user moves or while the user's head moves (e.g., a near-eye display such as a head mounted display (HMD) may be worn by a user to view the mixed imagery of virtual and real objects) (¶ [0002]);
a camera configured to capture at least one of a pupil location of the user or eye movement of the user (e.g., a camera and an associated eye tracking mechanism) (¶ [0062]);
and a processor configured to:
output a user interface element of the GUI to the display, the user interface element indicative of a setting level of one or more parameters of the GUI (e.g., output a spoke display menu to the television, the spoke display menu used to increase or decrease volume) (¶ [0048], [0147], [0164]);
identify (fig. 7 and associated description);
and control the setting level of the user interface element shown on the display based at least partially on at least one of the visual focal point or the eye gesture of the user (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.),
wherein the user interface element shown on the display changes as the one or more parameters is controlled based on the at least one of the visual focal point or the eye gesture of the user (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.),
wherein the wearable computing device is wearable by a human user of the wearable computing device (HMD, fig. 1, el. 2).
Mulcahy fails to specifically show:
identify a saccade of the user based on at least one of the captured pupil location or the eye movement;
wherein the user interface element shown on the display relates to an equilibria user interface.
In the same field of invention, Eash teaches: steerable high-resolution display. Eash further teaches: identify a saccade of the user based on at least one of the captured pupil location or the eye movement (¶ [0133]) (e.g., the location of the foveal region of the user's field of view is determined 2015. At block 2020, the user's eye movement is classified. FIG. 21 illustrates some exemplary eye movements that may be identified. The eye movements include fixated, blinking, micro-saccade, slow pursuit, and fast movement/saccade).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Eash before the effective filing date of the invention, to have combined the teachings of Eash with the device as taught by Mulcahy.
One would have been motivated to make such combination because a way to enable new applications for augmented and virtual reality systems would have been obtained and desired, as expressly taught by Eash (¶ [0003]).
In the same field of invention, Fateh teaches: visual stabilization system for head-mounted displays. Fateh further teaches: wherein the user interface element shown on the display relates to an equilibria user interface (¶ [0094]) (e.g., In some embodiments, the visual stabilizers are distinct geometric shapes that are integrated into the digital content. Additionally or alternatively, elements already or naturally present in the digital content (e.g., icons, elements of a user interface) may be used as visual stabilizers. Such visual stabilizers may be indistinguishable from the rest of the digital content by the user.).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy, Eash, Fateh before the effective filing date of the invention, to have combined the teachings of Fateh with the device as taught by Mulcahy, Eash.
One would have been motivated to make such combination because a way to provide visual cues that help the user merge or “lock” the images together would have been obtained and desired, as expressly taught by Fateh (¶ [0094]).
As to claim 15, Mulcahy further shows:
The device of claim 14, wherein the user interface element shown on the display further relates to at least one of
an auditory user interface (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.),
a tactile user interface (¶ [0166]) (e.g., the system 100 determines whether an additional select event occurs while the user's eye gaze is near the menu item 810 (or other selection spot). For example, the user could use a voice command to select, use a hand motion, touch a region of the HMD 2 (e.g., temple), etc.),
a visual user interface (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.),
or a gustatory user interface.
As to claim 16, Mulcahy further shows:
The device of claim 14, further comprising a wearable structure (fig. 1, el. 2), the wearable structure comprising or connected to at least one of the display (fig. 3, el. 120), the camera (fig. 3, el. 134), the processor (fig. 3, el. 136), or a combination thereof.
As to claim 17, Mulcahy further shows:
The device of claim 14, wherein the wearable structure is configured to be worn on a head (fig. 1, el. 2; fig. 3), around a neck, or around a wrist or forearm of the user.
Claims 18-20 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Mulcahy et al. (US20140372944, Mulcahy) in view of Bickerstaff et al. (US20130293447, Bickerstaff).
As to claim 18:
Mulcahy shows a device, comprising:
a display (fig. 3, el. 120) configured to provide a graphical user interface (GUI) to a user (e.g., virtual image being displayed on micro display 120) (¶ [0068]);
a camera configured to capture at least one of a pupil location of the user or eye movement of the user (e.g., a camera and an associated eye tracking mechanism) (¶ [0062]); and
a processor configured to:
output a user interface element of the GUI to the display, the user interface element indicative of a setting level of a parameter of the GUI (e.g., output a spoke display menu to the television, the spoke display menu used to increase or decrease volume) (¶ [0048], [0147], [0164]);
identify a first predetermined eye gesture of the user based on at least one of the captured pupil location or the eye movement (fig. 7 and associated description);
control a parameter of the user interface element shown on the display based at least partially on the first predetermined eye gesture (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.);
identify a second predetermined eye gesture of the user based on at least one of the captured pupil location or the eye movement (fig. 7 and associated description);
control the parameter setting level of the user interface element shown on the display based at least partially on the second predetermined eye gesture (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.);
repeat the output of the user interface element of the GUI to the display over time while continuing to identify additional predetermined eye gestures based on the pupil location or the eye movement (¶ [0161]) (e.g., The eye tracking may be performed continuously throughout process 1200.) and
continuing to control the parameter based on the additional predetermined eye gestures (¶ [0181]) (e.g., The volume might be controlled by the position of the user's focus along the spoke.).
Mulcahy fails to specifically show: wherein the parameter comprises a level of invariance to a disturbance experienced at the device, the disturbance making a visual connection between the display and the user weak or broken.
In the same field of invention, Bickerstaff teaches head-mounted display (HMD) systems. Bickerstaff further teaches: a parameter comprises a level of invariance to a disturbance experienced at the device, the disturbance making a visual connection between the display and the user weak or broken (¶ [0077]) (e.g., At the left-hand side of FIG. 13, the display element has one or more associated actuators 600 which are operable, under the control of a driver circuit 610, to move the display element from side to side, or up and down, or both, to provide image stabilisation movements to compensate for the higher frequency component of the detected head motion).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Mulcahy and Bickerstaff before the effective filing date of the invention, to have combined the teachings of Bickerstaff with the device as taught by Mulcahy.
One would have been motivated to make such combination because a way to control the display of the video signal based upon a detected motion, to compensate for the higher frequency component of motion by moving the displayed image in an opposite direction to that of the detected motion would have been obtained and desired, as expressly taught by Bickerstaff (abstract).
As to claim 19, Mulcahy further shows:
The device of claim 18, wherein the first predetermined eye gesture adjusts the parameter in a first direction and the second predetermined eye gesture adjusts the parameter in a second direction opposite the first direction (¶ [0147]) (e.g., as the user rotates their head such that a vector that "shoots straight out" from between their eyes moves from the hub 816 to the right towards the "louder" menu item, the spoke 812 is progressively filled; NOTE: fig. 11B shows that the spoke menu 812 also can become empty in association with a user rotating the head in the left direction, with the effect of the action “Quieter” being executed).
As to claim 20, Mulcahy further shows:
The device of claim 19, wherein the user interface element shown on the display is adjusted on the display based on the control of the parameter in the first direction or the second direction (¶ [0147]) (e.g., as the user rotates their head such that a vector that "shoots straight out" from between their eyes moves from the hub 816 to the right towards the "louder" menu item, the spoke 812 is progressively filled; NOTE: fig. 11B shows that the spoke menu 812 also can become empty in association with a user rotating the head in the left direction, with the effect of the action “Quieter” being executed).
As to claim 23, Bickerstaff further teaches:
wherein the disturbance causes shaking or vibration of the device relative to the user.
One would have been motivated to make such combination because a way to control the display of the video signal based upon a detected motion, to compensate for the higher frequency component of motion by moving the displayed image in an opposite direction to that of the detected motion would have been obtained and desired, as expressly taught by Bickerstaff (abstract).
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Response to Arguments
Applicant’s arguments with respect to claims above have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Wezowski et al. [U.S. 20070279591], DISPLAY BASED ON EYE INFORMATION
Murakami [U.S. 20110001763], a display control unit that changes a size of the display image in accordance with the viewing distance of the viewer
Thomas et al. [U.S. 20150221064] , USER DISTANCE BASED MODIFICATION OF A RESOLUTION OF A DISPLAY UNIT INTERFACED WITH A DATA PROCESSING DEVICE
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jordany Núñez, whose telephone number is (571) 272-2753. The examiner can normally be reached on M-F 8:30 AM - 5 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mathew Ell, can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/JORDANY NUNEZ/Primary Examiner, Art Unit 2171 3/19/2026