DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 23 January 2026, in which claims 1-3, 5, 8, 17, and 18 have been amended, has been entered. Claims 1-20 are currently pending, and an Office action on the merits follows.
Inventorship
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0092747 by Edwin et al. (“Edwin”), in view of U.S. Pub. No. 2020/0193714 by Browy et al. (“Browy”), and further in view of U.S. Patent No. 11,269,406 by Sztuk et al. (“Sztuk”).
As to claim 1, Edwin discloses an electronic device (Edwin, head-mounted display device 100, Figure 1A), comprising:
a housing (Edwin, head-mounted display device 100, Figure 1A) having a frame with a nose bridge and having temples coupled to the frame by hinges (Edwin, The head-mounted display device 100 includes a wearable frame 102 that mounts to a user's head during use. The wearable frame 102 includes left and right temple arms 302L, 302R that are positionable over the user's left and right ear, respectively. A nose piece 306 is provided to comfortably allow the wearable frame 102 to rest against the nose of the user. Figure 1A, ¶ [0023]);
a projector in the housing and configured to output light (Edwin, projection assembly 108L, Figure 1A);
a waveguide in the housing and configured to propagate the light (Edwin, eyepiece 70L may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108L is injected, Figure 1A, ¶ [0029]);
an optical coupler on the waveguide and configured to couple a first portion of the light out of the waveguide while passing a second portion of the light (Edwin, While a majority of the guided light may exit the eyepiece 70L as the light traverses the DOE(s) (e.g., directed toward a user's left eye), a portion of this light may continue on toward an out-coupling DOE 190L, where it may be coupled out of the eyepiece 70L as light (represented by the light ray 203) and at least partially intercepted by a light sensing assembly 122. Figure 1A, ¶ [0030]);
an optical sensor (Edwin, light sensing assembly 122, Figure 1A) in the housing and configured to generate sensor data in response to the second portion of the light (Edwin, a portion of this light may continue on toward an out-coupling DOE 190L, where it may be coupled out of the eyepiece 70L as light (represented by the light ray 203) and at least partially intercepted by a light sensing assembly 122. Figure 1A, ¶ [0030]) (Edwin, As shown in FIG. 1A, the light sensing assembly 122 is located in the bridge 304 of the wearable frame 102. The light sensing assembly 122 includes a separate camera (imaging sensor) for the left and right eyepieces. The cameras are configured to determine the virtual content displayed on the left and right eyepieces. Figure 1A, ¶ [0032]); and
a position sensor at the nose bridge (Edwin, light sensing assembly 122, Figure 1A) and configured to measure orientation information, wherein the position sensor is selected from the group consisting of: an accelerometer, a compass sensor, a gyroscope, and an inertial measurement unit, the projector being configured to adjust the light based on the sensor data and the orientation information (Edwin, The light sensing assembly 122 can be sufficiently rigid so the camera associated with the left eyepiece has a fixed position and orientation relative to the position and orientation of the camera associated with the right eyepiece. ¶ [0033]) (Edwin, FIGS. 1A and 2A illustrate the light rays 203 being transmitted from the left and right out-coupling DOEs 190L, 190R. In the undeformed state of FIG. 1A, the light rays 203 reach the respective cameras of the light sensing assembly 122 at substantially the same time. This time-of-flight information depends on the relative position of the left eyepiece 70L relative to the right eyepiece 70R. For example, light reaching the display sooner than expected may indicate that the eyepiece is bent away from the user so that the out-coupling DOE is closer to the light sensing assembly 122. Conversely, light reaching the display later than expected may indicate that the eyepiece is bent toward the user so that the out-coupling DOE is farther from the light sensing assembly 122. Figures 1A and 2A, ¶ [0041]) (Edwin, by comparing the left captured image with a left target image representing the undeformed state, the VAR system can determine that the left eyepiece 70L is deformed. The VAR system can also determine a transformation to be applied to subsequent images for correcting for this deformation. The transformation is then applied to subsequent images for the left eyepiece and sent to the left projection subsystem 108L. ¶ [0043]).
Edwin does not expressly disclose
a housing having a frame with a nose bridge and having temples coupled to the frame by hinges;
wherein the position sensor is selected from the group consisting of: an accelerometer, a compass sensor, a gyroscope, and an inertial measurement unit,
Browy teaches an eyewear mixed reality system
wherein the position sensor is selected from the group consisting of: an accelerometer, a compass sensor, a gyroscope, and an inertial measurement unit (Browy, The data may include data a) captured from sensors (which may be, e.g., operatively coupled to the frame 230 or otherwise attached to the user 210), such as image capture devices (e.g., cameras in the inward-facing imaging system or the outward-facing imaging system), audio sensors (e.g., microphones), inertial measurement units (IMUs), accelerometers, compasses, global positioning system (GPS) units, radio devices, or gyroscopes; ¶ [0049]). Browy thus teaches the entire group of position sensors recited in the claim limitation.
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Edwin’s sensors to include Browy’s sensors because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Edwin’s sensors, as modified by Browy’s sensors, are known to yield the predictable result of providing sensors for a head mounted display device, since this permits additional sensor data for determining the position and movement state of the head mounted display. Thus, a person of ordinary skill in the art would have appreciated including Browy’s sensors in Edwin’s device, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Thus, Edwin, as modified by Browy, teaches a position sensor selected from the group consisting of an accelerometer, a compass sensor, a gyroscope, and an inertial measurement unit.
Edwin, as modified by Browy, still does not expressly teach a housing having a frame with a nose bridge and having temples coupled to the frame by hinges.
Sztuk teaches a head mounted display system comprising a housing having a frame with a nose bridge and having temples coupled to the frame by hinges (Sztuk, structural member 234 is fixedly coupled or hingedly coupled with a frame member, another structural member, a nose piece, etc., shown as frame 236. Figure 3, Column 15, Lines 47-49).
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Edwin’s frame and temple arms to include Sztuk’s hingedly coupled temple arms because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Edwin’s frame and temple arms and Sztuk’s hingedly coupled temple arms perform the same general and predictable function, the predictable function being providing a frame and temple arms for a head mounted display device. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself, that is, in the substitution of Sztuk’s hingedly coupled temple arms for Edwin’s frame and temple arms. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Thus, Edwin, as modified by Browy and Sztuk, teaches the frame and temple arms coupled by hinges.
As to claim 2, Edwin, as modified by Browy and Sztuk, teaches the electronic device wherein the optical sensor is at the nose bridge (Edwin, light sensing assembly 122 at the nose piece 306, Figure 1A) and wherein the position sensor is mounted to the optical sensor. The light sensing assembly 122 of Edwin includes cameras for detecting the out-coupled light from each of the waveguides to determine the relative position and orientation between the two sides of the device.
As to claim 3, Edwin, as modified by Browy and Sztuk, teaches the electronic device further comprising:
a camera on the housing and configured to capture an image of world light, wherein the position sensor is at the camera (Edwin, The VAR system 500 includes the sensors for monitoring the real-world environment 510. These sensors 510 are shown to include a user orientation sensor and an angle sensing assembly, but the previously described sensors are also included with the VAR system 500. The VAR system 500 also includes a three-dimension (3D) database 508 for storing 3D scene data. The CPU 502 may control the overall operation of the VAR system 500, while the GPU 504 renders frames (e.g., translating a 3D scene into a 2D image) from the 3D data stored in the 3D database 508 and stores these frames in the frame buffer(s) 506. Figure 5, ¶ [0063]).
As to claim 4, Edwin, as modified by Browy and Sztuk, teaches the electronic device, the projector being further configured to adjust the light to register a virtual object in the light to a real-world object in the world light based on the image of the world light, the orientation information, and the sensor data (Sztuk, Sensors 104 can capture images 112 of an environment around sensors 104. For example, sensors 104 can capture images 112 of an environment in or around a field of view of the user of the HMD. Images 112 can be representations of the environment, such as color or grayscale array or matrix of pixels representing parameters of light captured from the environment (e.g., color, brightness, intensity). The environment can be an indoor or outdoor environment, including both natural and man-made structures, terrain, or other objects, including sky, clouds, roads, buildings, streets, pedestrians, or cyclists. The environment can include one or more objects (e.g., real-world objects), which can be represented by images 112 captured by the sensors. Column 6, Lines 51-63). In addition, the motivation used is the same as in the rejection of claim 1.
As to claim 5, Edwin, as modified by Browy and Sztuk, teaches the electronic device further comprising:
an additional projector in the housing and configured to output additional light (Edwin, projection assembly 108R, Figure 1A);
an additional waveguide in the housing and configured to propagate the additional light (Edwin, eyepiece 70R may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108R is injected, Figure 1A, ¶ [0029]);
an additional optical coupler on the additional waveguide and configured to couple a first portion of the additional light out of the waveguide while passing a second portion of the additional light (Edwin, The right projection subsystem 108R, along with right eyepiece 70R and DOE(s) thereof (e.g., out-coupling element 190R, in-coupling element (ICE), OPE, and EPE), may operate in a similar manner to projection subsystem 108L. For example, the projection subsystem 108R, right eyepiece 70R, and DOE(s) thereof may present virtual content to a user's right eye, and out-couple and direct light representative of virtual content to the light sensing assembly 122 through the out-coupling DOE 190R. Figure 1A, ¶ [0031]); and
an additional optical sensor in the housing and configured to generate additional sensor data in response to the second portion of the additional light, the projector being further configured to adjust the light based on the additional sensor data (Edwin, The light sensing assembly 122 can be sufficiently rigid so the camera associated with the left eyepiece has a fixed position and orientation relative to the position and orientation of the camera associated with the right eyepiece. ¶ [0033]) (Edwin, FIGS. 1A and 2A illustrate the light rays 203 being transmitted from the left and right out-coupling DOEs 190L, 190R. In the undeformed state of FIG. 1A, the light rays 203 reach the respective cameras of the light sensing assembly 122 at substantially the same time. This time-of-flight information depends on the relative position of the left eyepiece 70L relative to the right eyepiece 70R. For example, light reaching the display sooner than expected may indicate that the eyepiece is bent away from the user so that the out-coupling DOE is closer to the light sensing assembly 122. Conversely, light reaching the display later than expected may indicate that the eyepiece is bent toward the user so that the out-coupling DOE is farther from the light sensing assembly 122. Figures 1A and 2A, ¶ [0041]) (Edwin, The left figure of FIG. 3A illustrates the misaligned left and right monocular virtual content 72L, 72R from the pitching of the right eyepiece 70R as shown in FIG. 2B. Once transformed, the right monocular virtual content 72R perfectly overlays the left monocular virtual content 72L even though the right eyepiece 70R is still pitched (as shown in FIG. 3B). FIG. 3B illustrates that transformed images shown on the misaligned frame of FIG. 1B have a proper binocular representation after the transformation process. ¶ [0045]).
As to claim 6, Edwin, as modified by Browy and Sztuk, teaches the electronic device, the projector being further configured to adjust the light to compensate for a binocular misalignment between the projector and the additional projector based on the sensor data and the additional sensor data (Edwin, by comparing the left captured image with a left target image representing the undeformed state, the VAR system can determine that the left eyepiece 70L is deformed. The VAR system can also determine a transformation to be applied to subsequent images for correcting for this deformation. The transformation is then applied to subsequent images for the left eyepiece and sent to the left projection subsystem 108L. ¶ [0043]) (Edwin, The left figure of FIG. 3A illustrates the misaligned left and right monocular virtual content 72L, 72R from the pitching of the right eyepiece 70R as shown in FIG. 2B. Once transformed, the right monocular virtual content 72R perfectly overlays the left monocular virtual content 72L even though the right eyepiece 70R is still pitched (as shown in FIG. 3B). FIG. 3B illustrates that transformed images shown on the misaligned frame of FIG. 1B have a proper binocular representation after the transformation process. ¶ [0045]).
As to claim 7, Edwin, as modified by Browy and Sztuk, teaches the electronic device further comprising:
an optical sensor module that includes the optical sensor and the additional optical sensor and that is mounted to the waveguide and the additional waveguide (Edwin, FIGS. 1A and 2A illustrate the light rays 203 being transmitted from the left and right out-coupling DOEs 190L, 190R. In the undeformed state of FIG. 1A, the light rays 203 reach the respective cameras of the light sensing assembly 122 at substantially the same time. This time-of-flight information depends on the relative position of the left eyepiece 70L relative to the right eyepiece 70R. For example, light reaching the display sooner than expected may indicate that the eyepiece is bent away from the user so that the out-coupling DOE is closer to the light sensing assembly 122. Conversely, light reaching the display later than expected may indicate that the eyepiece is bent toward the user so that the out-coupling DOE is farther from the light sensing assembly 122. Figures 1A and 2A, ¶ [0041]).
As to claim 8, Edwin, as modified by Browy and Sztuk, teaches the electronic device wherein the optical sensor module is disposed in the nose bridge (Edwin, As shown in FIG. 1A, the light sensing assembly 122 is located in the bridge 304 of the wearable frame 102. Figure 1A, ¶ [0032]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0092747 by Edwin et al. (“Edwin”) in view of U.S. Pub. No. 2021/0132889 by Sato (“Sato”).
As to claim 17, Edwin discloses a method of operating a head-mounted device (Edwin, head-mounted display device 100, Figure 1A) comprising:
with a first projector in a first display (Edwin, projection assembly 108L, Figure 1A), producing first light that is coupled into a first waveguide in the first display (Edwin, eyepiece 70L may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108L is injected, Figure 1A, ¶ [0029]);
with a second projector in a second display (Edwin, projection assembly 108R, Figure 1A), producing second light that is coupled into a second waveguide in the second display (Edwin, eyepiece 70R may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108R is injected, Figure 1A, ¶ [0029]);
with a camera (Edwin, camera 103L, Figure 1A), capturing an image of world light (Edwin, The ends of the left and right cantilevered arms 310L, 310R away from the nose of the user includes cameras 103L, 103R, respectively. The left camera 103L and the right camera 103R are configured to obtain images of the user's environment, e.g., the objects in front of the user. Figure 1A, ¶ [0036]);
with an optical sensor (Edwin, light sensing assembly 122, Figure 1A), receiving a portion of the first light from the first waveguide and a portion of the second light from the second waveguide and generating sensor data from the portion of the first light and the portion of the second light (Edwin, a portion of this light may continue on toward an out-coupling DOE 190L, where it may be coupled out of the eyepiece 70L as light (represented by the light ray 203) and at least partially intercepted by a light sensing assembly 122. Figure 1A, ¶ [0030]) (Edwin, As shown in FIG. 1A, the light sensing assembly 122 is located in the bridge 304 of the wearable frame 102. The light sensing assembly 122 includes a separate camera (imaging sensor) for the left and right eyepieces. The cameras are configured to determine the virtual content displayed on the left and right eyepieces. Figure 1A, ¶ [0032]);
with one or more processors (Edwin, a position-sensing diode can measure the position of the light rays 203 and this information can be analyzed by the processor to determine the relative position of the respective eyepieces 70L, 70R. ¶ [0034]), adjusting the first light based on the sensor data and the position measurement (Edwin, The light sensing assembly 122 can be sufficiently rigid so the camera associated with the left eyepiece has a fixed position and orientation relative to the position and orientation of the camera associated with the right eyepiece. ¶ [0033]) (Edwin, FIGS. 1A and 2A illustrate the light rays 203 being transmitted from the left and right out-coupling DOEs 190L, 190R. In the undeformed state of FIG. 1A, the light rays 203 reach the respective cameras of the light sensing assembly 122 at substantially the same time. This time-of-flight information depends on the relative position of the left eyepiece 70L relative to the right eyepiece 70R. For example, light reaching the display sooner than expected may indicate that the eyepiece is bent away from the user so that the out-coupling DOE is closer to the light sensing assembly 122. Conversely, light reaching the display later than expected may indicate that the eyepiece is bent toward the user so that the out-coupling DOE is farther from the light sensing assembly 122. Figures 1A and 2A, ¶ [0041]) (Edwin, by comparing the left captured image with a left target image representing the undeformed state, the VAR system can determine that the left eyepiece 70L is deformed. The VAR system can also determine a transformation to be applied to subsequent images for correcting for this deformation. The transformation is then applied to subsequent images for the left eyepiece and sent to the left projection subsystem 108L. ¶ [0043]).
Edwin does not expressly disclose
with an inertial measurement unit at the camera, gathering a position measurement;
Sato teaches a head mounted display system
with an inertial measurement unit at the camera, gathering a position measurement (Sato, The mounting band 90 includes a mounting base portion 91 made of resin, a belt 92 fabricated from cloth, which is linked to the mounting base portion 91, an inertial sensor 50, and the camera 60. Figure 1, ¶ [0014])(Sato, The camera 60 is configured to capture the outside scene, and is disposed at a central part of the mounting base portion 91. Figure 1, ¶ [0015])(Sato, The inertial sensor 50 (Inertial Measurement Unit 75, hereinafter referred to as “IMU 50”) serves as an inertia measurement device of six degrees of freedom… The IMU 50 is built-in in the mounting base portion 91. Figure 1, ¶ [0016]);
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Edwin’s camera and IMU to include Sato’s camera and IMU location because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Sato’s camera and IMU location is comparable to Edwin’s camera and IMU because both are directed at determining the movement of the user in the real world. Therefore, it is within the capabilities of one of ordinary skill in the art to modify Edwin’s camera and IMU to include Sato’s camera and IMU location with the predictable result of providing the camera and IMU fixedly attached at a specific location on the user’s head mounted device.
Thus, Edwin, as modified by Sato, teaches the inertial sensor at a location with the camera to determine the user’s position.
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0092747 by Edwin et al. (“Edwin”), in view of U.S. Pub. No. 2021/0132889 by Sato (“Sato”), and further in view of U.S. Patent No. 11,269,406 by Sztuk et al. (“Sztuk”).
As to claim 18, Edwin, as modified by Sato, discloses the method further comprising:
with an additional camera (Edwin, camera 103R, Figure 1A), capturing an additional image of world light (Edwin, The ends of the left and right cantilevered arms 310L, 310R away from the nose of the user includes cameras 103L, 103R, respectively. The left camera 103L and the right camera 103R are configured to obtain images of the user's environment, e.g., the objects in front of the user. Figure 1A, ¶ [0036]);
Edwin, as modified by Sato, does not expressly disclose the method further comprising:
with the one or more processors, identifying a first relative orientation between the camera and the additional camera, a second relative orientation between the first display and the camera, and a third relative orientation between the second display and the additional camera, wherein adjusting the first light comprises adjusting the first light based on the sensor data, the image and the additional image of the world light, the first relative orientation, the second relative orientation, and the third relative orientation.
Sztuk teaches a head mounted display system which includes multiple cameras which capture images of the world light (Sztuk, left sensor 104a (e.g., left image capture device), Figure 2, Column 12, Lines 39-40) (Sztuk, right sensor 104b (e.g., right image capture device), Figure 2, Column 12, Line 40) (Sztuk, Sensors 104a, 104b can be mounted to or integrated in the HMD body 202. The left sensor 104a can capture first images corresponding to a first view (e.g., left eye view), and the right sensor 104b can capture images corresponding to a second view (e.g., right eye view). Figure 2, Column 12, Lines 42-47);
with the one or more processors, identifying a first relative orientation between the camera and the additional camera, a second relative orientation between the first display and the camera, and a third relative orientation between the second display and the additional camera, wherein adjusting the first light comprises adjusting the first light based on the sensor data, the image and the additional image of the world light, the first relative orientation, the second relative orientation, and the third relative orientation (Sztuk, Sensors 104 can include eye tracking sensors 104 or head tracking sensors 104 that can provide information such as positions, orientations, or gaze directions of the eyes or head of the user (e.g., wearer) of an HMD. In some embodiments, sensors 104 are inside out tracking cameras configured to provide images for head tracking operations. Sensors 104 can be eye tracking sensors 104 that provide eye tracking data 148, such as data corresponding to at least one of a position or an orientation of one or both eyes of the user. Sensors 104 can be oriented in a direction towards the eyes of the user (e.g., as compared to sensors 104 that capture images of an environment outside of the HMD). Column 5, Lines 10-21).
The combination of Edwin, Sato, and Sztuk teaches determining the orientation of the head mounted display system in the environment and correcting for the misalignment of the waveguides.
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Edwin’s head mounted display to include Sztuk’s head mounted display sensors because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Sztuk’s head mounted display sensors are comparable to Edwin’s head mounted display because both are directed to head mounted display systems. Therefore, it is within the capabilities of one of ordinary skill in the art to modify Edwin’s head mounted display to include Sztuk’s head mounted display sensors with the predictable result of providing additional sensing systems to detect the environment and the position of the device within the environment.
Thus, Edwin, as modified by Sato and Sztuk, teaches the sensors which detect the orientation of the head mounted display system.
As to claim 19, Edwin, as modified by Sato and Sztuk, teaches the method wherein adjusting the first light comprises:
adjusting the first light to correct for a binocular misalignment between the first display and the second display based on the sensor data (Edwin, FIGS. 1A and 2A illustrate the light rays 203 being transmitted from the left and right out-coupling DOEs 190L, 190R. In the undeformed state of FIG. 1A, the light rays 203 reach the respective cameras of the light sensing assembly 122 at substantially the same time. This time-of-flight information depends on the relative position of the left eyepiece 70L relative to the right eyepiece 70R. For example, light reaching the display sooner than expected may indicate that the eyepiece is bent away from the user so that the out-coupling DOE is closer to the light sensing assembly 122. Conversely, light reaching the display later than expected may indicate that the eyepiece is bent toward the user so that the out-coupling DOE is farther from the light sensing assembly 122. Figures 1A and 2A, ¶ [0041]) (Edwin, by comparing the left captured image with a left target image representing the undeformed state, the VAR system can determine that the left eyepiece 70L is deformed. The VAR system can also determine a transformation to be applied to subsequent images for correcting for this deformation. The transformation is then applied to subsequent images for the left eyepiece and sent to the left projection subsystem 108L. ¶ [0043]).
As to claim 20, Edwin, as modified by Sato and Sztuk, teaches the method wherein adjusting the first light comprises:
registering a virtual object in the first light to a real-world object in the world light based on the first relative orientation, the second relative orientation, and the third relative orientation (Sztuk, Sensors 104 can capture images 112 of an environment around sensors 104. For example, sensors 104 can capture images 112 of an environment in or around a field of view of the user of the HMD. Images 112 can be representations of the environment, such as color or grayscale array or matrix of pixels representing parameters of light captured from the environment (e.g., color, brightness, intensity). The environment can be an indoor or outdoor environment, including both natural and man-made structures, terrain, or other objects, including sky, clouds, roads, buildings, streets, pedestrians, or cyclists. The environment can include one or more objects (e.g., real-world objects), which can be represented by images 112 captured by the sensors. Column 6, Lines 51-63). In addition, the motivation used is the same as in the rejection of claim 18.
Allowable Subject Matter
Claims 9-16 are allowed.
The following is an examiner’s statement of reasons for allowance:
As to claim 9, Edwin (U.S. Pub. No. 2022/0092747) discloses a head-mounted display device (Edwin, head-mounted display device 100, Figure 1A) comprising:
a housing (Edwin, wearable frame 102, Figure 1A) having a first portion (Edwin, left side of display subsystem 104, Figure 1), a second portion (Edwin, right side of display subsystem 104, Figure 1), and a nose bridge (Edwin, nose piece 306, Figure 1A) that couples the first portion to the second portion;
a first projector (Edwin, left projection subassembly 108L, Figure 1A) in the first portion of the housing and configured to produce first light (Edwin, Projection assemblies (also referred to as projectors) 108L, 108R of the head-mounted display device 100 project the virtual objects onto the eyepieces 70L, 70R for display. Figure 1A, ¶ [0019]);
a first waveguide (Edwin, eyepiece 70L, Figure 1A) in the first portion of the housing and configured to propagate the first light (Edwin, the eyepieces 70L, 70R may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108L, 108R, is injected. Figure 1A, ¶ [0029]);
a second projector (Edwin, right projection subassembly 108R, Figure 1A) in the second portion of the housing and configured to produce second light (Edwin, Projection assemblies (also referred to as projectors) 108L, 108R of the head-mounted display device 100 project the virtual objects onto the eyepieces 70L, 70R for display. Figure 1A, ¶ [0019]);
a second waveguide (Edwin, eyepiece 70R, Figure 1A) in the second portion of the housing and configured to propagate the second light (Edwin, the eyepieces 70L, 70R may be implemented as a waveguide-based display into which the scanned light from the respective projection subsystems 108L, 108R, is injected. Figure 1A, ¶ [0029]);
an optical sensor (Edwin, light sensing assembly 122, Figure 1A) in the nose bridge and coupled to the first and second waveguides. As shown in Figure 1A of Edwin, the nose piece 306 is coupled to the eyepieces 70L and 70R.
Edwin does not expressly teach
a first outward-facing camera (OFC) on the first portion of the housing;
a second OFC on the second portion of the housing, the first OFC and the second OFC being configured to capture images of world light;
a first inertial measurement unit at the first OFC;
a second inertial measurement unit at the second OFC;
a third inertial measurement unit at the nose bridge.
Sztuk (U.S. Patent No. 11,269,406) teaches a head mounted display system which includes
a first outward-facing camera (OFC) on the first portion of the housing (Sztuk, left sensor 104a (e.g., left image capture device), Figure 2, Column 12, Lines 39-40);
a second OFC on the second portion of the housing (Sztuk, right sensor 104b (e.g., right image capture device), Figure 2, Column 12, Line 40), the first OFC and the second OFC being configured to capture images of world light (Sztuk, Sensors 104a, 104b can be mounted to or integrated in the HMD body 202. The left sensor 104a can capture first images corresponding to a first view (e.g., left eye view), and the right sensor 104b can capture images corresponding to a second view (e.g., right eye view). Figure 2, Column 12, Lines 42-47);
Sztuk continues to teach sensors for position and orientation (Sztuk, Sensors 104 can include eye tracking sensors 104 or head tracking sensors 104 that can provide information such as positions, orientations, or gaze directions of the eyes or head of the user (e.g., wearer) of an HMD. In some embodiments, sensors 104 are inside out tracking cameras configured to provide images for head tracking operations. Sensors 104 can be eye tracking sensors 104 that provide eye tracking data 148, such as data corresponding to at least one of a position or an orientation of one or both eyes of the user. Sensors 104 can be oriented in a direction towards the eyes of the user (e.g., as compared to sensors 104 that capture images of an environment outside of the HMD). Column 5, Lines 10-21).
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Edwin’s head mounted display to include Sztuk’s head mounted display sensors because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Sztuk’s head mounted display sensors are comparable to Edwin’s head mounted display because both are directed to head mounted display systems. Therefore, it is within the capabilities of one of ordinary skill in the art to modify Edwin’s head mounted display to include Sztuk’s head mounted display sensors with the predictable result of providing additional sensing systems to detect the environment and the position of the device within the environment.
Thus, Edwin, as modified by Sztuk, teaches the sensors which detect the environment around the head mounted display system.
Edwin, as modified by Sztuk, still does not expressly teach
a first inertial measurement unit at the first OFC;
a second inertial measurement unit at the second OFC;
a third inertial measurement unit at the nose bridge.
In addition, no other prior art was found that, alone or in combination, teaches the cited limitations.
As to dependent claims 10-16, these claims are allowable as they depend upon allowable independent claim 9.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Response to Arguments
Applicant’s arguments with respect to claims 1-8 and 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent No. 11,202,043 by Elazhary et al. teaches a self-testing display device which compensates for misalignment of the waveguides for the head mounted display.
U.S. Pub. No. 2023/0239455 by Churin et al. teaches a head mounted display which calibrates the stereoscopic display.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRENT D CASTIAUX whose telephone number is (571)272-5143. The examiner can normally be reached Mon-Fri, 7:30 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached at (571)272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRENT D CASTIAUX/Primary Examiner, Art Unit 2623