DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 23, 2025 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 12 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. In particular, the limitation “when the button input is received,” recited in the eleventh and twelfth lines of claim 12, is not disclosed or supported in the specification, and the combined temporal concept it expresses likewise is not disclosed or supported in the specification. Nor was the limitation or the temporal concept recited in the original set of claims at the time of filing. Therefore, the limitation “when the button input is received” recited in the eleventh and twelfth lines of claim 12 constitutes new matter. Accordingly, any claim dependent on claim 12 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, on the same basis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Stolzenberg et al., U.S. Patent Application Publication 2021/0232220 A1 (hereinafter Stolzenberg), in view of Sohn et al., U.S. Patent 10,845,595 B1 (hereinafter Sohn), and Bailey et al., U.S. Patent Application Publication 2017/0097753 A1 (hereinafter Bailey).
Regarding claim 1, Stolzenberg teaches a wearable display device, comprising: a housing defining an external surface (60, 80 FIG. 9D, paragraph[0110] of Stolzenberg teaches with continued reference to FIG. 9D, the display system 60 includes a display 70, and various mechanical and electronic modules and systems to support the functioning of that display 70; the display 70 may be coupled to a frame 80, which is wearable by a display system user or viewer 90 and which is configured to position the display 70 in front of the eyes of the user 90; the display 70 may be considered eyewear in some embodiments; in some embodiments, a speaker 100 is coupled to the frame 80 and configured to be positioned adjacent the ear canal of the user 90 (in some embodiments, another speaker, not shown, may optionally be positioned adjacent the other ear canal of the user to provide stereo/shapeable sound control); the display system 60 may also include one or more microphones 110 or other devices to detect sound; in some embodiments, the microphone is configured to allow the user to provide inputs or commands to the system 60 (e.g., the selection of voice menu commands, natural language questions, etc.), and/or may allow audio communication with other persons (e.g., with other users of similar display systems; the microphone may further be configured as a peripheral sensor to collect audio data (e.g., sounds from the user and/or environment); in some embodiments, the display system may also include a peripheral sensor 120a, which may be separate from the frame 80 and attached to the body of the user 90 (e.g., on the head, torso, an extremity, etc. of the user 90); the peripheral sensor 120a may be configured to acquire data characterizing a physiological state of the user 90 in some embodiments; and for example, the sensor 120a may be an electrode, and See also at least paragraph[0109] of Stolzenberg (i.e., Stolzenberg teaches a wearable display system having a display coupled to a frame));
a button manipulatable relative to the housing, the button defining the external surface (3902 FIGS. 9D, 10A-10B, and 17, paragraph[0113] of Stolzenberg teaches FIGS. 10A and 10B illustrate examples of user inputs received through controller buttons or input regions on a user input device; in particular, FIGS. 10A and 10B illustrates a controller 3900, which may be a part of the wearable system disclosed herein and which may include a home button 3902, trigger 3904, bumper 3906, and touchpad 3908; and the user input device or a totem can serve as controller(s) 3900 in various embodiments of wearable systems, and See also at least paragraphs[0080], [0110]-[0111], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system));
an optical module comprising: a display oriented to present content toward an eye of a user donning the wearable display device (70 FIGS. 1, 9D, 10A-10B, and 17, paragraph[0179] of Stolzenberg teaches FIG. 17 depicts an example application 1700 for a content follow system where two users of respective wearable systems are conducting a telepresence session; two users (named Alice 912 and Bob 914 in this example) are shown in this figure; the two users are wearing their respective wearable devices 902 and 904 which can include an HMD described with reference to FIG. 9D (e.g., the display device 70 of the system 60) for representing a virtual avatar of the other user in the telepresence session; the two users can conduct a telepresence session using the wearable device; and note that the vertical line in FIG. 17 separating the two users is intended to illustrate that Alice and Bob may (but need not) be in two different locations while they communicate via telepresence (e.g., Alice may be inside her office in Atlanta while Bob is outdoors in Boston), and See also at least paragraphs[0054], [0080], [0110]-[0112], [0115], [0180]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a display for representing a virtual avatar in a telepresence session, and that is capable of depicting an augmented reality scene)); and
a sensor oriented toward the eye and configured to detect a facial feature of the user; a processor electrically coupled to the button, the display, and the sensor, the processor configured to (462, 140 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0180] of Stolzenberg teaches the wearable devices 902 and 904 may be in communication with each other or with other user devices and computer systems; for example, Alice's wearable device 902 may be in communication with Bob's wearable device 904, e.g., via the network 990; the wearable devices 902 and 904 can track the users' environments and movements in the environments (e.g., via the respective outward-facing imaging system 464, or one or more location sensors) and speech (e.g., via the respective audio sensor 232); the wearable devices 902 and 904 can also track the users' eye movements or gaze based on data acquired by the inward-facing imaging system 462; and in some situations, the wearable device can also capture or track a user's facial expressions or other body movements (e.g., arm or leg movements) where a user is near a reflective surface and the outward-facing imaging system 464 can obtain reflected images of the user to observe the user's facial expressions or other body movements, and See also at least paragraphs[0042], [0054], [0057], [0080], [0088], [0110]-[0112], [0115], [0162], [0179], [0181]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); but does not expressly teach cause the content to change based on the facial feature and a manipulation of the button simultaneously with the facial feature.
However, Sohn teaches cause the content to change based on the facial feature and a manipulation of the button (FIGS. 5B, and 6, Col. 9, Lines 10-20 of Sohn teach based on the eye movement detected by the eye tracking system, the HMD may change the visual properties of the displayed content item in different ways; for example, in some embodiments, the HMD only changes the visual property of the content item in response to a determination that the user's eye position has a gaze direction corresponding to the content item for a threshold period of time; and as such, if the user's eye is performing a saccade or smooth pursuit, the visual properties of the content may be unchanged, even if the gaze direction of the user's eye falls upon the content item during the tracked movement, and See also at least Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 3-67 and Lines 1-2, Lines 58-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively of Sohn (i.e., Sohn teaches an HMD that changes one or more visual properties of content based on gaze direction corresponding to the content)); but the combination of Stolzenberg and Sohn still does not expressly teach simultaneously with the facial feature.
However, Bailey teaches simultaneously with the facial feature (FIGS. 1-3, paragraph[0039] of Bailey teaches portable interface device 120 only includes one actuator or “button” 121; other implementations may include a second and even a third actuator, but in general portable interface device 120 includes very few actuators in order to minimize its form factor; in the illustrated example of FIG. 1, actuator 121 may provide a “select” function in combination with whatever the user is gazing at on at least one display 111 of HMD 110 as detected by eye-tracker 117 and determined by processor 112; as previously described, memory 113 of HMD 110 stores processor-executable instructions and/or data 114 that, when executed by processor 112 of HMD 110, cause the at least one display 111 to display at least one object 115 that is responsive to a selection operation performed by the user; in accordance with the present systems, devices, and methods, the selection operation performed by the user may comprise a substantially concurrent combination of gazing at the least one object 115 displayed by the at least one display 111 (as detected by eye-tracker 117) and activating the at least actuator 121 of the portable interface device 120; the selection operation may be effected by HMD 110 (e.g., by processor 112 of HMD 110) in response to receipt of a wireless “selection signal” 150 at receiver 116 transmitted from wireless signal generator 122 of portable interface device 120, and the selection operation may include “selecting” whatever object 115 on display 111 that eye tracker 117 identifies the user is looking/gazing at when the wireless selection signal 150 is receiver at receiver 116; to this end, when wireless receiver 116 of HMD 110 receives a wireless signal 150 from portable interface device 120, processor 112 executes processor-executable instructions and/or data 114 stored in memory 113, which cause processor 112 to: i) request current gaze direction data from eye-tracker 117; ii) identify a particular object 115 at which the user is gazing based on the current gaze direction data received from eye-tracker 117 (e.g., the particular object identified among at least one object displayed by at least one display 111); and iii) cause at least one display 111 to display the visual effect on the particular object 115, and See also at least ABSTRACT, paragraphs[0014], [0022], [0032], [0040] and [0048]-[0058] of Bailey (i.e., Bailey teaches a user activating an actuator of a wearable portable interface device while the user is substantially concurrently gazing at a displayed object to perform a selection operation, and in response to the selection operation, the head-mounted display displays a visual effect on the object)).
Furthermore, Stolzenberg, Sohn, and Bailey are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming a display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Sohn and Bailey to cause the content to change based on the facial feature and a manipulation of the button simultaneously with the facial feature. One reason for the modification as taught by Sohn is to have suitable presentation and changing of content items based on user motion and tracking of the user's eye (ABSTRACT and Col. 1, Lines 7-10 of Sohn). Another reason for the modification as taught by Bailey is to suitably interact with content displayed on head-mounted displays and to have a multi-input interface that combines eye tracking with a wireless portable interface device (paragraph[0002] of Bailey). The same motivation and rationale to combine set forth above for claim 1 applies to all corresponding dependent claims addressed in this statement of grounds of rejection.
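By way of illustration only, the combined operation relied upon above (a gaze-based facial feature per Sohn together with a substantially concurrent button manipulation per Bailey) can be pictured with the following hypothetical pseudocode sketch; all names, thresholds, and return values are invented for illustration and do not appear in any of the cited references:

    # Hypothetical sketch (Python); not drawn from Stolzenberg, Sohn, or Bailey.
    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        target_id: str      # object the eye tracker reports the user is gazing at
        dwell_ms: float     # how long the gaze has rested on that object

    def update_content(gaze: GazeSample, button_pressed: bool,
                       dwell_threshold_ms: float = 200.0):
        """Change content only when a qualifying gaze and a button manipulation
        occur together (cf. Sohn's dwell check and Bailey's substantially
        concurrent gaze-plus-actuator selection)."""
        if button_pressed and gaze.dwell_ms >= dwell_threshold_ms:
            return {"selected": gaze.target_id, "visual_effect": "highlight"}
        return None  # no simultaneous combination, so the content is unchanged

    # Example: a press while gazing at "menu_icon" for 250 ms triggers a change.
    print(update_content(GazeSample("menu_icon", 250.0), button_pressed=True))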
Regarding claim 2, Stolzenberg, Sohn, and Bailey teach the wearable display device of claim 1, wherein the content includes virtual content and video passthrough content (FIGS. 1, 9D, 10A-10B, and 17, paragraph[0054] of Stolzenberg teaches FIG. 1 depicts an illustration of an augmented reality scenario with certain virtual reality objects, and certain actual reality objects viewed by a person; FIG. 1 depicts an augmented reality scene 100, wherein a user of an AR technology sees a real-world park-like setting 110 featuring people, trees, buildings in the background, and a concrete platform 120; and in addition to these items, the user of the AR technology also perceives that he “sees” a robot statue 130 standing upon the real-world platform 120, and a cartoon-like avatar character 140 (e.g., a bumble bee) flying by which seems to be a personification of a bumble bee, even though these elements do not exist in the real world, and See also at least paragraphs[0054], [0080], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a display for representing a virtual avatar in a telepresence session)).
Regarding claim 3, Stolzenberg, Sohn, and Bailey teach the wearable display device of claim 1, wherein the sensor comprises: a first camera disposed adjacent the display; and a second camera disposed adjacent the display (630 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0054], [0057], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a camera assembly that capture images of a user’s eye, wherein the camera assembly includes cameras that are attached to the frame of the wearable display system, and wherein the display of the display system is coupled to the frame)).
Regarding claim 4, Stolzenberg, Sohn, and Bailey teach the wearable display device of claim 3, wherein the facial feature includes at least one of a gaze direction of the eye or a shape of the eye (FIGS. 5B, and 6, Col. 9, Lines 10-20 of Sohn teach based on the eye movement detected by the eye tracking system, the HMD may change the visual properties of the displayed content item in different ways; for example, in some embodiments, the HMD only changes the visual property of the content item in response to a determination that the user's eye position has a gaze direction corresponding to the content item for a threshold period of time; and as such, if the user's eye is performing a saccade or smooth pursuit, the visual properties of the content may be unchanged, even if the gaze direction of the user's eye falls upon the content item during the tracked movement, and See also at least Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 3-67 and Lines 1-2, Lines 58-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively of Sohn (i.e., Sohn teaches an HMD that changes one or more visual properties of content based on gaze direction corresponding to the content)).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Stolzenberg, in view of Sohn, Bailey, and Hegyi, U.S. Patent Application Publication 2021/0227200 A1 (hereinafter Hegyi).
Regarding claim 5, Stolzenberg, Sohn, and Bailey teach the wearable display device of claim 1, wherein: the button is a first button; the manipulation is a first manipulation (3902 FIGS. 9D, 10A-10B, and 17, paragraph[0113] of Stolzenberg teaches FIGS. 10A and 10B illustrate examples of user inputs received through controller buttons or input regions on a user input device; in particular, FIGS. 10A and 10B illustrates a controller 3900, which may be a part of the wearable system disclosed herein and which may include a home button 3902, trigger 3904, bumper 3906, and touchpad 3908; and the user input device or a totem can serve as controller(s) 3900 in various embodiments of wearable systems, and See also at least paragraphs[0080], [0110]-[0111], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system, and wherein the button provides input via a press and release of the button)); the wearable display device further comprises a second button manipulatable relative to the housing, the second button defining the external surface (3906 FIGS. 9D, 10A-10B, and 17, paragraph[0114] of Stolzenberg teaches potential user inputs that can be received through controller 3900 include, but are not limited to, pressing and releasing the home button 3902; half and full (and other partial) pressing of the trigger 3904; releasing the trigger 3904; pressing and releasing the bumper 3906; and touching, moving while touching, releasing a touch, increasing or decreasing pressure on a touch, touching a specific portion such as an edge of the touchpad 3908, or making a gesture on the touchpad 3908 (e.g., by drawing a shape with the thumb), and See also at least paragraphs[0080], [0110]-[0111], [0113], [0115]-[0116], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system, and wherein the button provides input via a press and release of the button)); and the processor is electrically coupled to the second button and configured to (FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 
9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); but do not expressly teach cause the content to change based on a second manipulation of the second button.
However, Hegyi teaches cause the content to change based on a second manipulation of the second button (FIGS. 7A-10B, and 16A-B, paragraph[0220] of Hegyi teaches digital loupe controls, such as those used for magnification change, or starting and stopping a video recording, could be actuated via buttons placed on the ocular support arms; this is useful because ocular support arms are easily draped to provide sterility; parts of the ocular support structure may already need to be draped to enable the surgeon to adjust various articulations intraoperatively; and however, articulations that are driven by motors or other actuators may be commanded to different positions in a hands-free manner via voice or gesture or other means of issuing commands to a digital system, and See also at least paragraphs[0219] of Hegyi (i.e., Hegyi teaches an adjustable ocular display with buttons placed on ocular support arms of the ocular display, wherein the buttons that can be actuated to start and stop video)).
Furthermore, Stolzenberg, Sohn, Bailey, and Hegyi are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming a display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Sohn, Bailey, and Hegyi such that the processor is electrically coupled to the second button and configured to cause the content to change based on a second manipulation of the second button. One reason for the modification as taught by Sohn is to have suitable presentation and changing of content items based on user motion and tracking of the user's eye (ABSTRACT and Col. 1, Lines 7-10 of Sohn). Another reason for the modification as taught by Bailey is to suitably interact with content displayed on head-mounted displays and to have a multi-input interface that combines eye tracking with a wireless portable interface device (paragraph[0002] of Bailey). Still another reason for the modification as taught by Hegyi is to have a suitable head-mounted display with a digitally created magnified view of the work area (ABSTRACT and paragraph[0005] of Hegyi).
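As a brief hypothetical sketch (all names and actions invented), the two-button arrangement addressed above can be pictured as a dispatch in which the first manipulation operates together with the facial feature while a second manipulation of the second button independently changes the content, in the general manner of Hegyi's magnification and recording controls:

    # Hypothetical dispatch (Python); button identifiers and actions are invented.
    def handle_buttons(first_pressed: bool, gaze_on_target: bool,
                       second_pressed: bool, content: dict) -> dict:
        if first_pressed and gaze_on_target:
            content["selected"] = True   # first manipulation combined with the facial feature
        if second_pressed:
            # second manipulation alone changes the content, e.g., magnification
            content["magnification"] = content.get("magnification", 1.0) * 2.0
        return content

    # Example: pressing only the second button doubles the magnification.
    print(handle_buttons(False, False, True, {}))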
Claims 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Stolzenberg, in view of Olson et al., U.S. Patent Application Publication 2021/0081034 A1 (hereinafter Olson), and Sohn.
Regarding claim 6, Stolzenberg teaches a wearable electronic device, comprising: a display frame (60, 80 FIG. 9D, paragraph[0110] of Stolzenberg teaches with continued reference to FIG. 9D, the display system 60 includes a display 70, and various mechanical and electronic modules and systems to support the functioning of that display 70; the display 70 may be coupled to a frame 80, which is wearable by a display system user or viewer 90 and which is configured to position the display 70 in front of the eyes of the user 90; the display 70 may be considered eyewear in some embodiments; in some embodiments, a speaker 100 is coupled to the frame 80 and configured to be positioned adjacent the ear canal of the user 90 (in some embodiments, another speaker, not shown, may optionally be positioned adjacent the other ear canal of the user to provide stereo/shapeable sound control); the display system 60 may also include one or more microphones 110 or other devices to detect sound; in some embodiments, the microphone is configured to allow the user to provide inputs or commands to the system 60 (e.g., the selection of voice menu commands, natural language questions, etc.), and/or may allow audio communication with other persons (e.g., with other users of similar display systems; the microphone may further be configured as a peripheral sensor to collect audio data (e.g., sounds from the user and/or environment); in some embodiments, the display system may also include a peripheral sensor 120a, which may be separate from the frame 80 and attached to the body of the user 90 (e.g., on the head, torso, an extremity, etc. of the user 90); the peripheral sensor 120a may be configured to acquire data characterizing a physiological state of the user 90 in some embodiments; and for example, the sensor 120a may be an electrode, and See also at least paragraph[0109] of Stolzenberg (i.e., Stolzenberg teaches a wearable display system having a display coupled to a frame));
a user input control externally positioned on the display frame (3902 FIGS. 9D, 10A-10B, and 17, paragraph[0113] of Stolzenberg teaches FIGS. 10A and 10B illustrate examples of user inputs received through controller buttons or input regions on a user input device; in particular, FIGS. 10A and 10B illustrates a controller 3900, which may be a part of the wearable system disclosed herein and which may include a home button 3902, trigger 3904, bumper 3906, and touchpad 3908; and the user input device or a totem can serve as controller(s) 3900 in various embodiments of wearable systems, and See also at least paragraphs[0080], [0110]-[0111], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system));
an internal display carried by the display frame, the internal display configured to; display real world content and virtual content (70 FIGS. 1, 9D, 10A-10B, and 17, paragraph[0179] of Stolzenberg teaches FIG. 17 depicts an example application 1700 for a content follow system where two users of respective wearable systems are conducting a telepresence session; two users (named Alice 912 and Bob 914 in this example) are shown in this figure; the two users are wearing their respective wearable devices 902 and 904 which can include an HMD described with reference to FIG. 9D (e.g., the display device 70 of the system 60) for representing a virtual avatar of the other user in the telepresence session; the two users can conduct a telepresence session using the wearable device; and note that the vertical line in FIG. 17 separating the two users is intended to illustrate that Alice and Bob may (but need not) be in two different locations while they communicate via telepresence (e.g., Alice may be inside her office in Atlanta while Bob is outdoors in Boston), and See also at least paragraphs[0054], [0080], [0110]-[0112], [0115], [0180]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a display for representing a virtual avatar in a telepresence session, and that is capable of depicting an augmented reality scene)); and
an optical component positioned adjacent to the internal display, the optical component configured to collect facial data (630 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determines an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); wherein the virtual content is configured to change (FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0162] of Stolzenberg teaches at a content orientation block 1446, the AR system may utilize the content location and gaze direction of the user to reorient the virtual content; for example, the AR system may orient a surface of the content while at the determined content location to be perpendicular to the gaze direction of the user; and in some examples, the AR system may additionally move the location of the content so as to accomplish a comfortable viewing experience of the user in viewing the content at the updated orientation, and See also at least paragraphs[0042], [0054], [0057], [0080], [0088], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); but does not expressly teach simultaneously; based on a simultaneous combination of the facial data and a manipulation of the user input control.
However, Olson teaches simultaneously (FIG. 5, paragraph[0083] of Olson teaches FIG. 5 is an example of simultaneously presenting virtual content representing a VR setting and physical content corresponding to a physical setting on display 230; in FIG. 5, the physical content corresponding to the physical setting is obtained using an image sensor (e.g., image sensor 212 of FIG. 2); in one implementation, the virtual content representing the VR setting and the physical content corresponding to the physical setting is simultaneously presented on display 230 by overlaying separate layers corresponding to each respective content, as illustrated by FIG. 6; since visual content corresponding to both the VR setting and the physical setting is presented on display 230 in FIG. 5, visual content representative of both a virtual object (e.g., virtual object 120) and a physical object (e.g., physical object 130) is presented on display 230 in this example; and this example of presenting visual content corresponding to both the VR setting and the physical setting on display 230 may correspond to one or more immersion levels among the second immersion level 720 through the fifth immersion level 750 discussed below in greater detail with reference to FIG. 7 (i.e., Olson teaches a display of an electronic device simultaneously presenting first content representing a virtual reality setting and second content corresponding to a physical setting)); but the combination of Stolzenberg and Olson still does not expressly teach based on a simultaneous combination of the facial data and a manipulation of the user input control.
However, Sohn teaches based on a; combination of the facial data and a manipulation of the user input control (FIGS. 2, 5B, and 6, Col. 9, Lines 10-20 of Sohn teach based on the eye movement detected by the eye tracking system, the HMD may change the visual properties of the displayed content item in different ways; for example, in some embodiments, the HMD only changes the visual property of the content item in response to a determination that the user's eye position has a gaze direction corresponding to the content item for a threshold period of time; and as such, if the user's eye is performing a saccade or smooth pursuit, the visual properties of the content may be unchanged, even if the gaze direction of the user's eye falls upon the content item during the tracked movement, and See also at least Cols. 4-5, Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 64-67 and Lines 1-57, Lines 4-9, 21-67 and Lines 1-2, Lines 12-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively of Sohn (i.e., Sohn teaches an HMD that changes one or more visual properties of content based on gaze direction corresponding to the content and hand gesture used to interact with content such as a link or button displayed on the HMD having a rigid body)); but the combination of Stolzenberg, Olson, and Sohn still does not expressly teach simultaneous.
However, Bailey teaches simultaneous (FIGS. 1-3, paragraph[0039] of Bailey teaches portable interface device 120 only includes one actuator or “button” 121; other implementations may include a second and even a third actuator, but in general portable interface device 120 includes very few actuators in order to minimize its form factor; in the illustrated example of FIG. 1, actuator 121 may provide a “select” function in combination with whatever the user is gazing at on at least one display 111 of HMD 110 as detected by eye-tracker 117 and determined by processor 112; as previously described, memory 113 of HMD 110 stores processor-executable instructions and/or data 114 that, when executed by processor 112 of HMD 110, cause the at least one display 111 to display at least one object 115 that is responsive to a selection operation performed by the user; in accordance with the present systems, devices, and methods, the selection operation performed by the user may comprise a substantially concurrent combination of gazing at the least one object 115 displayed by the at least one display 111 (as detected by eye-tracker 117) and activating the at least actuator 121 of the portable interface device 120; the selection operation may be effected by HMD 110 (e.g., by processor 112 of HMD 110) in response to receipt of a wireless “selection signal” 150 at receiver 116 transmitted from wireless signal generator 122 of portable interface device 120, and the selection operation may include “selecting” whatever object 115 on display 111 that eye tracker 117 identifies the user is looking/gazing at when the wireless selection signal 150 is receiver at receiver 116; to this end, when wireless receiver 116 of HMD 110 receives a wireless signal 150 from portable interface device 120, processor 112 executes processor-executable instructions and/or data 114 stored in memory 113, which cause processor 112 to: i) request current gaze direction data from eye-tracker 117; ii) identify a particular object 115 at which the user is gazing based on the current gaze direction data received from eye-tracker 117 (e.g., the particular object identified among at least one object displayed by at least one display 111); and iii) cause at least one display 111 to display the visual effect on the particular object 115, and See also at least ABSTRACT, paragraphs[0014], [0022], [0032], [0040] and [0048]-[0058] of Bailey (i.e., Bailey teaches a user activating an actuator of a wearable portable interface device while the user is substantially concurrently gazing at a displayed object to perform a selection operation, and in response to the selection operation, the head-mounted display displays a visual effect on the object)).
Furthermore, Stolzenberg, Olson, Sohn, and Bailey are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming a display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Olson, Sohn, and Bailey to have the internal display carried by the display frame, the internal display configured to simultaneously display real world content and virtual content; wherein the virtual content is configured to change based on a simultaneous combination of the facial data and a manipulation of the user input control. One reason for the modification as taught by Olson is to have a device capable of selectively transitioning, using an input device, between levels of simulated reality (SR) immersion presented by an electronic device (paragraph[0001] of Olson). Another reason for the modification as taught by Sohn is to have suitable presentation and changing of content items based on user motion and tracking of the user's eye (ABSTRACT and Col. 1, Lines 7-10 of Sohn). Still another reason for the modification as taught by Bailey is to suitably interact with content displayed on head-mounted displays and to have a multi-input interface that combines eye tracking with a wireless portable interface device (paragraph[0002] of Bailey). The same motivation and rationale to combine set forth above for claim 6 applies to all corresponding dependent claims addressed in this statement of grounds of rejection.
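For illustration, the layered presentation Olson describes (virtual content and physical content presented simultaneously as separate overlaid layers), gated by the combined gaze and control input of Sohn and Bailey, may be sketched with the following hypothetical code; the layer names, opacity rule, and step size are invented and are not taken from the cited references:

    # Hypothetical sketch (Python); not taken verbatim from Olson, Sohn, or Bailey.
    def compose_frame(passthrough_layer, virtual_layer, overlay_opacity: float) -> dict:
        """Simultaneously present physical (passthrough) and virtual content by
        overlaying separate layers, in the general manner of Olson's FIGS. 5-6."""
        return {"base": passthrough_layer, "overlay": virtual_layer,
                "overlay_opacity": overlay_opacity}

    def on_combined_input(gaze_on_virtual: bool, control_actuated: bool, frame: dict) -> dict:
        # The virtual content changes only on the simultaneous combination of the
        # facial data (gaze) and the manipulation of the user input control.
        if gaze_on_virtual and control_actuated:
            frame["overlay_opacity"] = min(1.0, frame["overlay_opacity"] + 0.25)
        return frame

    frame = compose_frame("camera_feed", "avatar_layer", overlay_opacity=0.5)
    print(on_combined_input(True, True, frame))  # overlay becomes more prominent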
Regarding claim 7, Stolzenberg, Olson, Sohn, and Bailey teach the wearable electronic device of claim 6, wherein the user input control comprises at least one of a lever, a button, a dial, a rocker, a slider, or a toggle (FIGS. 9D, 10A-10B, and 17, paragraph[0113] of Stolzenberg teaches FIGS. 10A and 10B illustrate examples of user inputs received through controller buttons or input regions on a user input device; in particular, FIGS. 10A and 10B illustrates a controller 3900, which may be a part of the wearable system disclosed herein and which may include a home button 3902, trigger 3904, bumper 3906, and touchpad 3908; and the user input device or a totem can serve as controller(s) 3900 in various embodiments of wearable systems, and See also at least paragraphs[0080], [0110]-[0111], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system)).
Regarding claim 8, Stolzenberg, Olson, Sohn, and Bailey teach the wearable electronic device of claim 6, wherein the optical component comprises at least one of: a camera; a light emitting diode; or an infrared sensor (630 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determining an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)).
Regarding claim 9, Stolzenberg, Olson, Sohn, and Bailey teach the wearable electronic device of claim 6, wherein the facial data comprises at least one of an eye measurement or a gaze estimation (FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determines an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)).
Regarding claim 10, Stolzenberg, Olson, Sohn, and Bailey teach the wearable electronic device of claim 6, wherein a change of the virtual content comprises altering a level of virtual immersion, the virtual immersion including a first amount of the virtual content and a second amount of the real world content (FIGS. 5-11, paragraphs[0091]-[0092] of Olson teaches in some implementations, an input device is used to transition between levels of immersion that are associated with different reality boundary locations; some implementations, involve a method that presents, on the display, an SR environment at a first immersion level that is associated with a first location of a reality boundary; the method further involves receiving, using an input device, input representing a request to change the first immersion level to a second immersion level; the input may change the location of the reality boundary from the first location to a second location; in accordance with receiving the input, the method presents the SR environment at the second immersion level; the second immersion level is associated with the second location of the reality boundary and wherein real content of a physical setting and virtual content are presented in the SR environment based on the location of the reality boundary; the second immersion level may display more real content and less virtual content than the first immersion level or the second immersion level may display less real content and more virtual content than the first immersion level; and in some implementations, virtual content is only presented on one side of the reality boundary; and in some implementations, real content is only presented on one side of the reality boundary, and See also at least paragraphs[0027]-[0028], [0083], and [0093]-[0128] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device capable of changing an amount of virtual or real content displayed in a simulated reality environment)).
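As a hypothetical numeric illustration of Olson's immersion levels (the level count and mixing rule are invented), a single immersion parameter can determine the first amount of virtual content and the second amount of real world content presented on either side of the reality boundary:

    # Hypothetical sketch (Python) of a variable level of virtual immersion.
    def content_mix(immersion_level: int, max_level: int = 5):
        """Return the fractions of virtual and real (passthrough) content for a
        given level; higher levels show more virtual content and less real content."""
        virtual_amount = immersion_level / max_level
        real_amount = 1.0 - virtual_amount
        return virtual_amount, real_amount

    # Example: moving from level 2 to level 4 increases the virtual content and
    # decreases the real world content, i.e., alters the level of virtual immersion.
    for level in (2, 4):
        virtual_amount, real_amount = content_mix(level)
        print(f"level {level}: virtual={virtual_amount:.2f}, real={real_amount:.2f}")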
Regarding claim 11, Stolzenberg, Olson, Sohn, and Bailey teach the wearable electronic device of claim 6, wherein the change of the virtual content comprises changing a detection setting for the optical component (FIGS. 5B, and 6, Col. 12, Lines 9-21 of Sohn teach while content is being displayed to the user, the HMD uses an eye tracking unit to determine 620 a gaze direction of the user's eye; the eye tracking unit may utilize any type of technique to determine the position of the eye; for example, in some embodiments, the eye tracking unit comprises a camera or other type of imaging device configured to capture one or more images of the eye; the eye tracking unit may use shape recognition techniques to determine a location of the user's pupil or iris within the captured images, in order to determine a position of the eye; and in some embodiments, the eye tracking unit may further comprise a projector configured to project one or more light patterns over portions of the eye, and See also at least Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 4-67 and Lines 1-2, Lines 58-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively of Sohn (i.e., Sohn teaches an HMD that changes one or more visual properties of content based on gaze direction corresponding to the content, wherein an eye tracking unit is capable of varying a type of technique for determining the gaze direction)).
Claims 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stolzenberg, in view of Olson.
Regarding claim 12, Stolzenberg teaches a head-mountable electronic device, comprising: a housing (60, 80 FIG. 9D, paragraph[0110] of Stolzenberg teaches with continued reference to FIG. 9D, the display system 60 includes a display 70, and various mechanical and electronic modules and systems to support the functioning of that display 70; the display 70 may be coupled to a frame 80, which is wearable by a display system user or viewer 90 and which is configured to position the display 70 in front of the eyes of the user 90; the display 70 may be considered eyewear in some embodiments; in some embodiments, a speaker 100 is coupled to the frame 80 and configured to be positioned adjacent the ear canal of the user 90 (in some embodiments, another speaker, not shown, may optionally be positioned adjacent the other ear canal of the user to provide stereo/shapeable sound control); the display system 60 may also include one or more microphones 110 or other devices to detect sound; in some embodiments, the microphone is configured to allow the user to provide inputs or commands to the system 60 (e.g., the selection of voice menu commands, natural language questions, etc.), and/or may allow audio communication with other persons (e.g., with other users of similar display systems; the microphone may further be configured as a peripheral sensor to collect audio data (e.g., sounds from the user and/or environment); in some embodiments, the display system may also include a peripheral sensor 120a, which may be separate from the frame 80 and attached to the body of the user 90 (e.g., on the head, torso, an extremity, etc. of the user 90); the peripheral sensor 120a may be configured to acquire data characterizing a physiological state of the user 90 in some embodiments; and for example, the sensor 120a may be an electrode, and See also at least paragraph[0109] of Stolzenberg (i.e., Stolzenberg teaches a wearable display system having a display coupled to a frame));
a button externally positioned on the housing (3902 FIGS. 9D, 10A-10B, and 17, paragraph[0113] of Stolzenberg teaches FIGS. 10A and 10B illustrate examples of user inputs received through controller buttons or input regions on a user input device; in particular, FIGS. 10A and 10B illustrates a controller 3900, which may be a part of the wearable system disclosed herein and which may include a home button 3902, trigger 3904, bumper 3906, and touchpad 3908; and the user input device or a totem can serve as controller(s) 3900 in various embodiments of wearable systems, and See also at least paragraphs[0080], [0110]-[0111], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a controller that includes buttons, wherein the controller is a single integral device that is part of a processing module coupled to the frame that is an external surface of the wearable display system));
a display integrated with the housing (70 FIGS. 1, 9D, 10A-10B, and 17, paragraph[0179] of Stolzenberg teaches FIG. 17 depicts an example application 1700 for a content follow system where two users of respective wearable systems are conducting a telepresence session; two users (named Alice 912 and Bob 914 in this example) are shown in this figure; the two users are wearing their respective wearable devices 902 and 904 which can include an HMD described with reference to FIG. 9D (e.g., the display device 70 of the system 60) for representing a virtual avatar of the other user in the telepresence session; the two users can conduct a telepresence session using the wearable device; and note that the vertical line in FIG. 17 separating the two users is intended to illustrate that Alice and Bob may (but need not) be in two different locations while they communicate via telepresence (e.g., Alice may be inside her office in Atlanta while Bob is outdoors in Boston), and See also at least paragraphs[0054], [0080], [0110]-[0112], [0115], [0180]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a display for representing a virtual avatar in a telepresence session, and that is capable of depicting an augmented reality scene));
an optical sensor disposed within the housing, the optical sensor oriented toward a user when the head-mountable electronic device is donned (630 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determines an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content));
a processor communicatively coupled to the button and the optical sensor; and a memory device storing instructions that, when executed by the processor, cause the processor to: (462, 140 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0180] of Stolzenberg teaches the wearable devices 902 and 904 may be in communication with each other or with other user devices and computer systems; for example, Alice's wearable device 902 may be in communication with Bob's wearable device 904, e.g., via the network 990; the wearable devices 902 and 904 can track the users' environments and movements in the environments (e.g., via the respective outward-facing imaging system 464, or one or more location sensors) and speech (e.g., via the respective audio sensor 232); the wearable devices 902 and 904 can also track the users' eye movements or gaze based on data acquired by the inward-facing imaging system 462; and in some situations, the wearable device can also capture or track a user's facial expressions or other body movements (e.g., arm or leg movements) where a user is near a reflective surface and the outward-facing imaging system 464 can obtain reflected images of the user to observe the user's facial expressions or other body movements, and See also at least paragraphs[0042], [0054], [0057], [0080], [0088], [0110]-[0112], [0115], [0162], [0179], [0181]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module with memory and has buttons, is electrically connected to the camera assembly and the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); identify sensor data from the optical sensor (FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 
9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determines an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content)); but does not expressly teach receive a button input in response to a depression of the button; and control user interface content of the head-mountable electronic device based on the button input and the sensor data.
However, Olson teaches receive a button input in response to a depression of the button; and control user interface content of the head-mountable electronic device based on the button input and the sensor data (FIGS. 1-2, paragraph[0047] of Olson teaches input device 114 is configured to receive inputs representing requests to transition from only presenting the first sensory content with the output device, to presenting a combination of the first sensory content and the second sensory content with the output device, to only presenting the second sensory content with the output device; in some respects input device 114 may be analogous to a “home” button for a user during an SR experience in that input device 114 facilitates transitioning between the SR experience and a physical setting in which device 110 is located; in one implementation, input device 114 is disposed on an outward facing surface of device 110; and in one implementation, input device 114 is disposed on an exterior surface of device 110, and See also at least paragraphs[0001], [0026]-[0028], [0037]-[0046], [0048]-[0049], [0052], [0059], and [0066]-[0069] of Olson (i.e., Olson teaches an input device that is connected to a processor and that is a button capable of receiving input through actuation, which facilitates, based on eye tracking, transitioning between a simulated reality experience and a physical setting and various levels of immersion, wherein the input device receives inputs representing requests to transition from presenting first sensory content to presenting only second sensory content, or to presenting a combination of first sensory content and second sensory content, wherein the sensory content is obtained from a sensor)).
Furthermore, Stolzenberg and Olson are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming the display device capable of suitably displaying an immersive virtual environment. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Olson and Sohn to receive a button input in response to a depression of the button; and control user interface content of the head-mountable electronic device based on the button input and the sensor data. Another reason for the modification as taught by Olson is to be able to selectively transition between levels of simulated reality immersion presented by an electronic device (ABSTRACT and paragraph[0001] of Olson). The same motivation and rationale to combine for claim 12 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims.
Regarding claim 13, Stolzenberg and Olson teach the head-mountable electronic device of claim 12, wherein: the optical sensor is a first optical sensor; a system further comprises a second optical sensor; and the first optical sensor and the second optical sensor are positioned adjacent to the display (630 FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0054], [0057], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a camera assembly that captures images of a user’s eye, wherein the camera assembly includes cameras that are attached to the frame of the wearable display system, and wherein the display of the display system is coupled to the frame)).
Regarding claim 14, Stolzenberg and Olson teach the head-mountable electronic device of claim 13, wherein: the first optical sensor is positioned adjacent a first side of the display; and the second optical sensor is positioned adjacent a second side of the display (FIGS. 1-2, 6, 9D, 10A-10B, and 17, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0054], [0057], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg (i.e., Stolzenberg teaches a camera assembly that captures images of a user’s eye, wherein the camera assembly includes cameras that are attached to the frame of the wearable display system, and wherein the display of the display system is coupled to the frame)).
Regarding claim 15, Stolzenberg and Olson teach the head-mountable electronic device of claim 12, wherein the sensor data includes eye-tracking data (FIGS. 1-2, paragraph[0047] of Olson teaches input device 114 is configured to receive inputs representing requests to transition from only presenting the first sensory content with the output device, to presenting a combination of the first sensory content and the second sensory content with the output device, to only presenting the second sensory content with the output device; in some respects input device 114 may be analogous to a “home” button for a user during an SR experience in that input device 114 facilitates transitioning between the SR experience and a physical setting in which device 110 is located; in one implementation, input device 114 is disposed on an outward facing surface of device 110; and in one implementation, input device 114 is disposed on an exterior surface of device 110, and See also at least paragraphs[0001], [0026]-[0028], [0037]-[0046], [0048]-[0049], [0052], [0059], and [0068] of Olson (i.e., Olson teaches an input device that is connected to a processor and that is a button capable of receiving input through actuation, which facilitates, based on an eye tracking characteristic, transitioning between a simulated reality experience and a physical setting and various levels of immersion)).
Regarding claim 16, Stolzenberg and Olson teach the head-mountable electronic device of claim 12, wherein controlling the user interface content comprises changing how the display presents virtual content with respect to displayed real world content (FIGS. 5-11, paragraphs[0091]-[0092] of Olson teach in some implementations, an input device is used to transition between levels of immersion that are associated with different reality boundary locations; some implementations involve a method that presents, on the display, an SR environment at a first immersion level that is associated with a first location of a reality boundary; the method further involves receiving, using an input device, input representing a request to change the first immersion level to a second immersion level; the input may change the location of the reality boundary from the first location to a second location; in accordance with receiving the input, the method presents the SR environment at the second immersion level; the second immersion level is associated with the second location of the reality boundary and wherein real content of a physical setting and virtual content are presented in the SR environment based on the location of the reality boundary; the second immersion level may display more real content and less virtual content than the first immersion level or the second immersion level may display less real content and more virtual content than the first immersion level; and in some implementations, virtual content is only presented on one side of the reality boundary; and in some implementations, real content is only presented on one side of the reality boundary, and See also at least paragraphs[0001], [0027]-[0028], [0037]-[0049], [0052], [0059], [0068], [0083], and [0093]-[0128] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device is capable of changing an amount of virtual or real content displayed relative to a boundary within a simulated reality environment)).
Regarding claim 17, Stolzenberg and Olson teach the head-mountable electronic device of claim 16, wherein controlling the user interface content comprises causing the display to: adjust at least one of a size, an orientation, or a position of the virtual content relative to the real world content; remove at least a portion of the virtual content; or increase an amount of the virtual content relative to the real world content (FIGS. 5-11, paragraphs[0091]-[0092] of Olson teach in some implementations, an input device is used to transition between levels of immersion that are associated with different reality boundary locations; some implementations involve a method that presents, on the display, an SR environment at a first immersion level that is associated with a first location of a reality boundary; the method further involves receiving, using an input device, input representing a request to change the first immersion level to a second immersion level; the input may change the location of the reality boundary from the first location to a second location; in accordance with receiving the input, the method presents the SR environment at the second immersion level; the second immersion level is associated with the second location of the reality boundary and wherein real content of a physical setting and virtual content are presented in the SR environment based on the location of the reality boundary; the second immersion level may display more real content and less virtual content than the first immersion level or the second immersion level may display less real content and more virtual content than the first immersion level; and in some implementations, virtual content is only presented on one side of the reality boundary; and in some implementations, real content is only presented on one side of the reality boundary, and See also at least paragraphs[0001], [0027]-[0028], [0037]-[0049], [0052], [0059], [0068], [0083], and [0093]-[0128] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device is capable of changing an amount of virtual or real content displayed relative to a boundary within a simulated reality environment)).
Regarding claim 18, Stolzenberg and Olson teach the head-mountable electronic device of claim 16, wherein a displayed combination of the virtual content and the real world content varies between: a first immersion limit of 0 % of the virtual content and 100 % of the real world content; and a second immersion limit of 100 % of the virtual content and 0 % of the real world content (FIGS. 5-11, paragraphs[0091]-[0092] of Olson teach in some implementations, an input device is used to transition between levels of immersion that are associated with different reality boundary locations; some implementations involve a method that presents, on the display, an SR environment at a first immersion level that is associated with a first location of a reality boundary; the method further involves receiving, using an input device, input representing a request to change the first immersion level to a second immersion level; the input may change the location of the reality boundary from the first location to a second location; in accordance with receiving the input, the method presents the SR environment at the second immersion level; the second immersion level is associated with the second location of the reality boundary and wherein real content of a physical setting and virtual content are presented in the SR environment based on the location of the reality boundary; the second immersion level may display more real content and less virtual content than the first immersion level or the second immersion level may display less real content and more virtual content than the first immersion level; and in some implementations, virtual content is only presented on one side of the reality boundary; and in some implementations, real content is only presented on one side of the reality boundary, and See also at least paragraphs[0001], [0027]-[0028], [0037]-[0049], [0052], [0059], [0068], [0083], and [0093]-[0128] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device is capable of changing an amount of virtual or real content displayed relative to a boundary within a simulated reality environment)).
Regarding claim 19, Stolzenberg and Olson teach the head-mountable electronic device of claim 16, wherein: the real world content includes a three-dimensional space; and the virtual content includes at least one virtual wall visually bounding a portion of the three-dimensional space (800, 830A and 830B FIGS. 5-12, paragraph[0107] of Olson teaches in display 1000A (presenting the first immersion level), a region extending between viewpoint position 810 and the reality boundary 830A encompasses portion 840 of the SR experience 800; yet, in display 1000B (presenting the second immersion level), the region extending between viewpoint position 810 and the reality boundary 830B encompasses both portion 840 and portion 850 of the SR experience 800; thus, in transitioning between the first and second immersion levels (presented by display 1000A and display 1000B, respectively), the reality boundary 830 transitioned in a radially outward direction 820; specifically, the reality boundary 830 transitioned in the radially outward direction 820 between a first position (represented by reality boundary 830A) and a second position (represented by reality boundary 830B); and the first position being more proximate to the viewpoint position 810 than the second position, and See also at least paragraphs[0001], [0027]-[0049], [0052], [0059], [0068], [0083], [0091]-[0106], [0108]-[0128], [0131], and [0134] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device is capable of changing an amount of virtual or real content displayed relative to a boundary encompassing a portion of a simulated reality experience (e.g., a mixed reality experience) including at least one object and at which a virtual/physical transition begins within the simulated reality environment having two-dimensional and even three-dimensional content)).
Regarding claim 20, Stolzenberg and Olson teach the head-mountable electronic device of claim 16, wherein: the real world content includes a three-dimensional space; and the three-dimensional space includes at least one wall visually bounding a portion of the virtual content (800, 830A and 830B FIGS. 5-12, paragraph[0107] of Olson teaches in display 1000A (presenting the first immersion level), a region extending between viewpoint position 810 and the reality boundary 830A encompasses portion 840 of the SR experience 800; yet, in display 1000B (presenting the second immersion level), the region extending between viewpoint position 810 and the reality boundary 830B encompasses both portion 840 and portion 850 of the SR experience 800; thus, in transitioning between the first and second immersion levels (presented by display 1000A and display 1000B, respectively), the reality boundary 830 transitioned in a radially outward direction 820; specifically, the reality boundary 830 transitioned in the radially outward direction 820 between a first position (represented by reality boundary 830A) and a second position (represented by reality boundary 830B); and the first position being more proximate to the viewpoint position 810 than the second position, and See also at least paragraphs[0001], [0027]-[0049], [0052], [0059], [0068], [0083], [0091]-[0106], [0108]-[0128], [0131], and [0134] of Olson (i.e., Olson teaches transitioning between levels of immersion that are associated with different reality boundaries, wherein input received from an input device is capable of changing an amount of virtual or real content displayed relative to a boundary encompassing a portion of a simulated reality experience (e.g., a mixed reality experience) including at least one object and at which a virtual/physical transition begins within the simulated reality environment having two-dimensional and even three-dimensional content)).
Response to Arguments
Applicant's arguments filed December 23, 2025 have been fully considered but they are not persuasive. The following is a brief summary of Applicant’s arguments:
Argument ‘A’: In regard to currently amended claim 1, Applicant submitted that the prior art of record does not teach or suggest the following: “the processor configured to cause the content to change based on the facial feature and a simultaneous manipulation of the button simultaneously with the facial feature”.
Argument ‘B’: In regard to currently amended claim 6, Applicant submitted that the prior art of record does not teach or suggest the following: “wherein the virtual content is configured to change based on a simultaneous combination of the facial data and a manipulation of the user input control”.
Argument ‘C’: In regard to currently amended claim 12, Applicant submitted that the prior art of record does not teach or suggest the following: “a processor communicatively coupled to the button and the optical sensor; and a memory device storing instructions that, when executed by the processor, cause the processor to: receive a button input in response to a depression of the button; identify sensor data from the optical sensor when the button input is received; and control user interface content of the head-mountable electronic device based on the button input and the sensor data”.
Examiner respectfully disagrees. Specifically, in regard to argument ‘A’ summarized above, at least Col. 9, Lines 10-20 of Sohn teach based on the eye movement detected by the eye tracking system, the HMD may change the visual properties of the displayed content item in different ways; for example, in some embodiments, the HMD only changes the visual property of the content item in response to a determination that the user's eye position has a gaze direction corresponding to the content item for a threshold period of time; and as such, if the user's eye is performing a saccade or smooth pursuit, the visual properties of the content may be unchanged, even if the gaze direction of the user's eye falls upon the content item during the tracked movement, and See also at least Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 3-67 and Lines 1-2, Lines 58-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively, of Sohn.
Thus, Sohn teaches an HMD that changes one or more visual properties of content based on gaze direction corresponding to the content.
Moreover, paragraph[0039] of Bailey teaches portable interface device 120 only includes one actuator or “button” 121; other implementations may include a second and even a third actuator, but in general portable interface device 120 includes very few actuators in order to minimize its form factor; in the illustrated example of FIG. 1, actuator 121 may provide a “select” function in combination with whatever the user is gazing at on at least one display 111 of HMD 110 as detected by eye-tracker 117 and determined by processor 112; as previously described, memory 113 of HMD 110 stores processor-executable instructions and/or data 114 that, when executed by processor 112 of HMD 110, cause the at least one display 111 to display at least one object 115 that is responsive to a selection operation performed by the user; in accordance with the present systems, devices, and methods, the selection operation performed by the user may comprise a substantially concurrent combination of gazing at the at least one object 115 displayed by the at least one display 111 (as detected by eye-tracker 117) and activating the at least one actuator 121 of the portable interface device 120; the selection operation may be effected by HMD 110 (e.g., by processor 112 of HMD 110) in response to receipt of a wireless “selection signal” 150 at receiver 116 transmitted from wireless signal generator 122 of portable interface device 120, and the selection operation may include “selecting” whatever object 115 on display 111 that eye tracker 117 identifies the user is looking/gazing at when the wireless selection signal 150 is received at receiver 116; to this end, when wireless receiver 116 of HMD 110 receives a wireless signal 150 from portable interface device 120, processor 112 executes processor-executable instructions and/or data 114 stored in memory 113, which cause processor 112 to: i) request current gaze direction data from eye-tracker 117; ii) identify a particular object 115 at which the user is gazing based on the current gaze direction data received from eye-tracker 117 (e.g., the particular object identified among at least one object displayed by at least one display 111); and iii) cause at least one display 111 to display the visual effect on the particular object 115, and See also at least ABSTRACT, paragraphs[0014], [0022], [0032], [0040] and [0048]-[0058] of Bailey.
Thus, Bailey teaches a user activating an actuator of a wearable portable interface device while the user is substantially concurrently gazing at a displayed object to perform a selection operation, and in response to the selection operation, the head-mounted display displays a visual effect on the object.
Furthermore, as mentioned above, Stolzenberg, Sohn, and Bailey are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming the display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Sohn and Bailey to cause the content to change based on the facial feature and a manipulation of the button simultaneously with the facial feature. One reason for the modification as taught by Sohn is to have suitable presentation and changing of content items based on user motion and tracking of the user’s eye (ABSTRACT and Col. 1, Lines 7-10 of Sohn). Another reason for the modification as taught by Bailey is to suitably interact with content displayed on head-mounted displays and to have a multi-input interface that combines eye tracking with a wireless portable interface device (paragraph[0002] of Bailey). The same motivation and rationale to combine for claim 1 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims.
In addition, in regard to argument ‘B’ summarized above, at least paragraph[0162] of Stolzenberg teaches at a content orientation block 1446, the AR system may utilize the content location and gaze direction of the user to reorient the virtual content; for example, the AR system may orient a surface of the content while at the determined content location to be perpendicular to the gaze direction of the user; and in some examples, the AR system may additionally move the location of the content so as to accomplish a comfortable viewing experience of the user in viewing the content at the updated orientation, and See also at least paragraphs[0042], [0054], [0057], [0080], [0088], [0110]-[0112], [0115], [0179]-[0183], [0283], and [0285] of Stolzenberg.
Thus, Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content.
Moreover, at least Col. 9, Lines 10-20 of Sohn teach based on the eye movement detected by the eye tracking system, the HMD may change the visual properties of the displayed content item in different ways; for example, in some embodiments, the HMD only changes the visual property of the content item in response to a determination that the user's eye position has a gaze direction corresponding to the content item for a threshold period of time; and as such, if the user's eye is performing a saccade or smooth pursuit, the visual properties of the content may be unchanged, even if the gaze direction of the user's eye falls upon the content item during the tracked movement, and See also at least Cols. 4-5, Cols. 9 and 10, Cols. 10 and 11, Cols. 12 and 13, and Col. 14; Lines 64-67 and Lines 1-57, Lines 4-9, 21-67 and Lines 1-2, Lines 12-67 and Lines 1-7, Lines 40-67 and Lines 1-5, and Lines 4-39, respectively, of Sohn.
Thus, Sohn teaches an HMD that changes one or more visual properties of content based on a gaze direction corresponding to the content and a hand gesture used to interact with content, such as a link or button, displayed on the HMD, which has a rigid body.
Still moreover, paragraph[0039] of Bailey teaches portable interface device 120 only includes one actuator or “button” 121; other implementations may include a second and even a third actuator, but in general portable interface device 120 includes very few actuators in order to minimize its form factor; in the illustrated example of FIG. 1, actuator 121 may provide a “select” function in combination with whatever the user is gazing at on at least one display 111 of HMD 110 as detected by eye-tracker 117 and determined by processor 112; as previously described, memory 113 of HMD 110 stores processor-executable instructions and/or data 114 that, when executed by processor 112 of HMD 110, cause the at least one display 111 to display at least one object 115 that is responsive to a selection operation performed by the user; in accordance with the present systems, devices, and methods, the selection operation performed by the user may comprise a substantially concurrent combination of gazing at the at least one object 115 displayed by the at least one display 111 (as detected by eye-tracker 117) and activating the at least one actuator 121 of the portable interface device 120; the selection operation may be effected by HMD 110 (e.g., by processor 112 of HMD 110) in response to receipt of a wireless “selection signal” 150 at receiver 116 transmitted from wireless signal generator 122 of portable interface device 120, and the selection operation may include “selecting” whatever object 115 on display 111 that eye tracker 117 identifies the user is looking/gazing at when the wireless selection signal 150 is received at receiver 116; to this end, when wireless receiver 116 of HMD 110 receives a wireless signal 150 from portable interface device 120, processor 112 executes processor-executable instructions and/or data 114 stored in memory 113, which cause processor 112 to: i) request current gaze direction data from eye-tracker 117; ii) identify a particular object 115 at which the user is gazing based on the current gaze direction data received from eye-tracker 117 (e.g., the particular object identified among at least one object displayed by at least one display 111); and iii) cause at least one display 111 to display the visual effect on the particular object 115, and See also at least ABSTRACT, paragraphs[0014], [0022], [0032], [0040] and [0048]-[0058] of Bailey.
Thus, to reiterate, Bailey teaches a user activating an actuator of a wearable portable interface device while the user is substantially concurrently gazing at a displayed object to perform a selection operation, and in response to the selection operation, the head-mounted display displays a visual effect on the object.
Furthermore, as mentioned above, Stolzenberg, Olson, Sohn, and Bailey are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming the display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Olson, Sohn, and Bailey to have the internal display carried by the display frame, the internal display configured to simultaneously display real world content and virtual content; wherein the virtual content is configured to change based on a simultaneous combination of the facial data and a manipulation of the user input control. One reason for the modification as taught by Olson is to have a device that uses an input device to selectively transition between levels of simulated reality (SR) immersion presented by an electronic device (paragraph[0001] of Olson). Another reason for the modification as taught by Sohn is to have suitable presentation and changing of content items based on user motion and tracking of the user’s eye (ABSTRACT and Col. 1, Lines 7-10 of Sohn). Another reason for the modification as taught by Bailey is to suitably interact with content displayed on head-mounted displays and to have a multi-input interface that combines eye tracking with a wireless portable interface device (paragraph[0002] of Bailey). The same motivation and rationale to combine for claim 6 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims.
Still in addition, in regard to argument ‘C’ summarized above, at least paragraph[0180] of Stolzenberg teaches the wearable devices 902 and 904 may be in communication with each other or with other user devices and computer systems; for example, Alice's wearable device 902 may be in communication with Bob's wearable device 904, e.g., via the network 990; the wearable devices 902 and 904 can track the users' environments and movements in the environments (e.g., via the respective outward-facing imaging system 464, or one or more location sensors) and speech (e.g., via the respective audio sensor 232); the wearable devices 902 and 904 can also track the users' eye movements or gaze based on data acquired by the inward-facing imaging system 462; and in some situations, the wearable device can also capture or track a user's facial expressions or other body movements (e.g., arm or leg movements) where a user is near a reflective surface and the outward-facing imaging system 464 can obtain reflected images of the user to observe the user's facial expressions or other body movements, and See also at least paragraphs[0042], [0054], [0057], [0080], [0088], [0110]-[0112], [0115], [0162], [0179], [0181]-[0183], [0283], and [0285] of Stolzenberg.
Thus, the Examiner maintains, Stolzenberg teaches the wearable display system having an inward-facing imaging system that tracks a user’s eye movements or gaze, wherein the controller, which is a processing module with memory and has buttons, is electrically connected to the camera assembly and the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content.
Moreover, paragraph[0088] of Stolzenberg teaches in some embodiments, a camera assembly 630 (e.g., a digital camera, including visible light and infrared light cameras) may be provided to capture images of the eye 210 and/or tissue around the eye 210 to, e.g., detect user inputs and/or to monitor the physiological state of the user; as used herein, a camera may be any image capture device; in some embodiments, the camera assembly 630 may include an image capture device and a light source to project light (e.g., infrared light) to the eye, which may then be reflected by the eye and detected by the image capture device; in some embodiments, the camera assembly 630 may be attached to the frame 80 (FIG. 9D) and may be in electrical communication with the processing modules 140 and/or 150, which may process image information from the camera assembly 630; and in some embodiments, one camera assembly 630 may be utilized for each eye, to separately monitor each eye, and See also at least paragraphs[0042], [0054], [0057], [0080], [0110]-[0112], [0115], [0162], [0179]-[0183], [0283], and [0285] of Stolzenberg.
Thus, the Examiner maintains, Stolzenberg teaches the wearable display system having a camera assembly that captures images of an eye and determines an amount of light reflected by the eye, wherein the controller, which is a processing module and has buttons, is electrically connected to the display of the display system via a communication link, and wherein the wearable display system is an augmented reality system that utilizes gaze direction to reorient content.
Still moreover, paragraph[0047] of Olson teaches input device 114 is configured to receive inputs representing requests to transition from only presenting the first sensory content with the output device, to presenting a combination of the first sensory content and the second sensory content with the output device, to only presenting the second sensory content with the output device; in some respects input device 114 may be analogous to a “home” button for a user during an SR experience in that input device 114 facilitates transitioning between the SR experience and a physical setting in which device 110 is located; in one implementation, input device 114 is disposed on an outward facing surface of device 110; and in one implementation, input device 114 is disposed on an exterior surface of device 110, and See also at least paragraphs[0001], [0026]-[0028], [0037]-[0046], [0048]-[0049], [0052], [0059], and [0066]-[0069] of Olson.
Thus, to still further clarify, Olson teaches an input device that is connected to a processor and that is a button capable of receiving input through actuation, which facilitates, based on eye tracking, transitioning between a simulated reality experience and a physical setting and various levels of immersion, wherein the input device receives inputs representing requests to transition from presenting first sensory content to presenting only second sensory content, or to presenting a combination of first sensory content and second sensory content, wherein the sensory content is obtained from a sensor.
Furthermore, as mentioned above, Stolzenberg and Olson are considered to be analogous art because they are from the same field of endeavor with respect to a display device, and involve the same problem of forming the display device capable of suitably displaying a virtual object. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Stolzenberg based on Olson and Sohn to receive a button input in response to a depression of the button; and control user interface content of the head-mountable electronic device based on the button input and the sensor data. Another reason for the modification as taught by Olson is to be able to selectively transition between levels of simulated reality immersion presented by an electronic device (ABSTRACT and paragraph[0001] of Olson). The same motivation and rationale to combine for claim 12 mentioned above, in light of the corresponding statement of grounds of rejection, applies to all corresponding dependent claims.
Also, in regard to independent claims 1, 6, and 12, Applicant submitted that similar arguments apply to the respective dependent claims. Therefore, the Examiner’s response in regard to arguments ‘A’, ‘B’, and ‘C’ summarized above also applies to the respective dependent claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL-SAMAD A ADEDIRAN whose telephone number is (571)272-3128. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDUL-SAMAD A ADEDIRAN/Primary Examiner, Art Unit 2621