DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 31, 2025 has been entered. Claims 1, 8, and 15 have been amended; the rejections of claims 1-21 have been traversed, and claims 1-21 remain pending.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Samples et al. (US 2018/0330521 A1, hereinafter referred to as “Samples”) in view of Foxlin et al. (US 2002/0024675 A1, hereinafter referred to as “Foxlin”), in further view of Baic et al. (US 2016/0004300 A1, hereinafter referred to as “Baic”), in further view of Balan et al. (US 2017/0358139 A1, hereinafter referred to as “Balan”), and further in view of Pryor et al. (US 2004/0046736 A1, hereinafter referred to as “Pryor”).
Regarding claim 1, Samples discloses a system comprising:
a wearable head device (Fig. 1, abstract and ¶0033 discloses a head-mounted device 102 worn by the user 104);
a camera system configured to capture images of a surrounding environment of a user of the wearable head device (Fig. 2 and ¶0033 discloses one or more outward-facing cameras on the HMD 102 may acquire image data (e.g., visible light image data) of the surrounding environment and of a handheld object 106 held by the user 104);
a see-through display of the wearable head device (Figs. 1-2 and ¶0033 discloses an HMD 102, worn by a user 104, that displays virtual and/or augmented reality imagery); and
one or more processors (¶0039 discloses processing system 500) configured to perform a method comprising:
activating one or more of a plurality of light emitting elements of an object (¶0071 discloses the handheld object may be controlled to turn one or more of its light sources off, to turn one or more light sources on at specified time intervals),
activating the camera system to capture an image (¶0036 discloses image data acquired by the camera or cameras of the HMD), the image comprising a view of the activated one or more of the plurality of light emitting elements (abstract, and ¶0036 discloses the light sources may take any suitable form, such as light-emitting diodes (LEDs) that emit visible light for detection via a visible light camera or cameras on the HMD),
deactivating the one or more of the plurality of light emitting elements (¶0071 discloses the handheld object may be controlled to turn one or more of its light sources off),
determining, based on the identified relationship, a position of the object in the surrounding environment at a first time (¶0038 discloses tracking the positions of light from the light sources on the handheld object 106 using the one or more cameras on the HMD 102),
determining a predicted position of the object at a second time later than the first time (¶0050 discloses predict an expected pose of the controller in a next frame based upon HMD motion and the handheld object pose), wherein said determining the predicted position is based on an input from a first orientation sensor of the object (¶0084 discloses IMU data may also be used to inform motion prediction)…
Samples does not explicitly disclose one or more microphones; the view comprising a visual distortion of the view of the activated one or more of the plurality of light emitting elements, the visual distortion introduced by the camera system, detecting, by the one or more microphones, an acoustic signal emitted by one or more audio sources of the object, the acoustic signal comprising an audible or ultrasonic signal, identifying in the captured image, a relationship between a movement in the surrounding environment and a movement of the light emitting elements, the movement in the surrounding environment smaller than the movement of the light emitting elements, based on the visual distortion; determining a predicted position of the object… based further on the identified relationship between the movement in the environment and the movement of the light emitting elements, determining, based on the predicted position of the object, a display position of virtual content in the see-through display of the wearable head device, and presenting the virtual content at the display position in the see-through display of the wearable head device, wherein the plurality of light emitting elements comprises a first light emitting element of a first wavelength and a second light emitting element of a second wavelength, different from the first wavelength, and wherein the camera system is configured such that the visual distortion increases a sensitivity of the camera system by causing the camera system to be more sensitive to the movement of the light emitting elements.
However, in a similar field of endeavor, Foxlin discloses one or more microphones (¶0074 discloses as shown in FIG. 1, three microphones 80, 82, 84 and their ultrasonic pulse detection circuits together with the InterTrax 2 board are embedded in a rigid plastic assembly designed to fit elegantly over the brow of an HMD); and detecting, by the one or more microphones (80, 82, 84), an acoustic signal emitted by one or more audio sources of the object (14) (Fig. 1 and ¶0057 discloses sourceless head orientation tracker 30 with a head-worn tracking device 12 that tracks a hand-mounted 3D beacon 14 relative to the head 16), the acoustic signal comprising an audible or ultrasonic signal (¶0074 discloses ultrasonic pulse detection circuits).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samples with the teachings of Foxlin, because such acoustic tracking offers the potential for low cost, light weight, low power, good resolution, and high update rates when tracking at the relatively close ranges typical of head-hand displacements (Foxlin, ¶0057).
However, in a similar field of endeavor, Baic discloses the view comprising a visual distortion (¶0319 discloses distortion that arises due to the fish-eye lens effect) of the view of the activated one or more of the plurality of light emitting elements (Figs. 4-5 and ¶0139 discloses the gesture controllers 150a,b,c,d can additionally or alternatively… include coloured light emitting elements 152 such as LEDs), the visual distortion introduced by the camera system (¶0319 discloses distortion that arises due to fish-eye lenses); identifying, based on the visual distortion, a relationship between a movement in the surrounding environment and a movement of the light emitting elements (¶0319-¶0333 discloses code designed to reduce the exaggerated movement of the lighting elements 152 caused by the fish-eye lens effect); and determining a predicted position of the object (¶0314 discloses the average pixel-per-frame velocity vector (i.e., the predictive offset) can then preferably be added to the current cursor position to give a prediction of the next position)… based further on the identified relationship between the movement in the environment and the movement of the light emitting elements (¶0319-¶0333 discloses code designed to reduce the exaggerated movement of the lighting elements 152 caused by the fish-eye lens effect), and wherein the camera system is configured such that the visual distortion increases a sensitivity of the camera system by causing the camera system to be more sensitive to the movement of the light emitting elements (¶0319 discloses distortion that arises due to the fish-eye lens effect, which in effect increases a sensitivity of the camera system).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samples with the teachings of Baic for the purpose of incorporating a fish-eye lens that captures a wider field of view.
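For illustration only, the kind of compensation Baic's ¶0319-¶0333 describes can be sketched as follows; the equidistant lens model, focal length, and blob coordinates are assumptions, not Baic's actual code. Normalizing pixel positions to viewing angles removes the lens's position-dependent scaling, so movement of the LED blobs can be compared to movement of background features on a common, distortion-free scale:

```python
import numpy as np

def pixel_to_angle(px, py, f, cx, cy):
    """Under an equidistant fish-eye model (r = f * theta), convert a blob's
    pixel position to its viewing angle off the optical axis, removing the
    lens's position-dependent scaling of apparent motion."""
    r = np.hypot(px - cx, py - cy)
    return r / f   # radians off-axis

# Hypothetical blob positions for a near-field LED and a distant background
# feature across two frames.
f, cx, cy = 300.0, 320.0, 240.0
led_motion = pixel_to_angle(505.0, 240.0, f, cx, cy) - pixel_to_angle(500.0, 240.0, f, cx, cy)
bg_motion = pixel_to_angle(341.0, 240.0, f, cx, cy) - pixel_to_angle(340.0, 240.0, f, cx, cy)
print(led_motion, bg_motion)   # the LED's angular motion exceeds the background's
```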
Samples as modified does not explicitly disclose determining, based on the predicted position of the object, a display position of virtual content in the see-through display of the wearable head device, and presenting the virtual content at the display position in the see-through display of the wearable head device, wherein the plurality of light emitting elements comprises a first light emitting element of a first wavelength and a second light emitting element of a second wavelength, different from the first wavelength.
However, in a similar field of endeavor, Balan discloses determining, based on the predicted position of the object (¶0071 discloses predict the position of controller 40 based on a forward prediction algorithm, such as a Kalman filter using double integration operating on the accelerometer data from IMU 44), a display position of virtual content in the see-through display of the wearable head device (¶0002, ¶0060 and ¶0065 discloses one or more wireless hand-held inertial controllers that the user of the system can manipulate to interact with the HMD and provide user input to the HMD, including, but not limited to, controlling and moving a virtual cursor, selection, movement and rotation of objects, scrolling, etc.), and presenting the virtual content at the display position in the see-through display of the wearable head device (¶0002, ¶0060 and ¶0065 discloses one or more wireless hand-held inertial controllers that the user of the system can manipulate to interact with the HMD and provide user input to the HMD, including, but not limited to, controlling and moving a virtual cursor, selection, movement and rotation of objects, scrolling, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samples with the teachings of Balan for the purpose of using a Kalman filter for forward prediction, in order to combine predicted states and noisy measurements to produce unbiased estimates that minimize variance.
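For illustration only, a minimal sketch of the forward prediction Balan describes (a Kalman filter using double integration on accelerometer data, ¶0071) might look like the following; the one-axis state, noise values, and frame interval are hypothetical:

```python
import numpy as np

def kalman_predict(x, P, a, dt, sigma_a=0.5):
    """Kalman predict step for a one-axis state x = [position, velocity],
    using the accelerometer reading 'a' as the control input. The B-term
    is the 'double integration': p += v*dt + 0.5*a*dt^2, v += a*dt."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration enters here
    G = B.reshape(2, 1)                     # process noise via acceleration
    Q = (sigma_a**2) * (G @ G.T)
    return F @ x + B * a, F @ P @ F.T + Q

# Predict the controller's state one 90 Hz camera frame ahead from the
# last tracked pose (position, velocity) and the latest IMU sample.
x = np.array([0.10, 0.25])        # meters, meters/second
P = np.eye(2) * 1e-4              # state covariance
x_pred, P_pred = kalman_predict(x, P, a=0.8, dt=1/90)
print(x_pred)                     # predicted [position, velocity]
```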
Samples as modified still does not explicitly disclose wherein the plurality of light emitting elements comprises a first light emitting element of a first wavelength and a second light emitting element of a second wavelength, different from the first wavelength.
However, in a similar field of endeavor, Pryor discloses wherein the plurality of light emitting elements comprises a first light emitting element of a first wavelength and a second light emitting element of a second wavelength, different from the first wavelength (¶0420-¶0421 discloses calibration datums 1221-1224 [in the form of LEDs] are shown projected on the screen, either in a calibration mode or continuously, for use by the stereo camera system, which can for example search for their particular color; and ¶0473 discloses the targets should desirably be individually identifiable due to their color).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Samples with the teachings of Pryor so that the targets are individually identifiable due to their color (Pryor, ¶0473).
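For illustration only, distinguishing a first and second light emitting element by emission color, as Pryor's color-identifiable targets suggest, might be sketched as follows; the reference colors and blob value are hypothetical:

```python
import math

# Assumed reference colors for the two emitters (not from Pryor).
KNOWN_LEDS = {
    "led_1_red":   (255, 40, 40),    # first wavelength
    "led_2_green": (40, 255, 40),    # second, different wavelength
}

def classify_blob(mean_rgb):
    """Return the known LED whose reference color is nearest in RGB space."""
    return min(KNOWN_LEDS, key=lambda name: math.dist(mean_rgb, KNOWN_LEDS[name]))

print(classify_blob((230, 60, 55)))   # -> "led_1_red"
```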
Regarding claim 2, Samples discloses the system of claim 1, wherein:
said activating the one or more of the plurality of light emitting elements comprises activating according to a pulse width modulation scheme (¶0065 and ¶0066 discloses light pulses of sufficient width output by the handheld object light sources), and
said activating the camera system comprises activating according to a capture pattern, the capture pattern corresponding to the pulse width modulation scheme (¶0066 and ¶0114 discloses where camera(s) of the HMD and the light pulse cycles of the handheld object are synchronized).
Regarding claim 3, Samples discloses the system of claim 2, wherein:
the one or more of the plurality of light emitting elements is configured to emit visible light (¶0033 discloses one or more outward-facing cameras on the HMD 102 may acquire image data (e.g. visible light image data) of the surrounding environment and of a handheld object 106 held by the user 102), and
the pulse width modulation scheme comprises a pulse width modulation at a frequency imperceptible to a human eye (¶0063 discloses light source pulsing may be perceptible by the human eye when the pulse frequency is lower than the refresh speed of the eye. Thus, using a light pulse frequency of 90 Hz or higher, for example, may help to reduce perceptibility of the light source modulation).
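For illustration only, the synchronization Samples describes (LED pulses at 90 Hz or higher so the modulation is imperceptible, with camera exposures aligned to the pulse cycle, ¶0063 and ¶0066) might be sketched as follows; the duty cycle and exposure duration are hypothetical:

```python
# Schedule camera exposures to land inside the "on" intervals of a
# 90 Hz LED pulse train (all timing values are assumptions).
PULSE_HZ = 90.0          # at or above the eye-perceptibility threshold
DUTY = 0.30              # fraction of each period the LEDs are on
EXPOSURE_S = 0.002       # camera exposure duration in seconds

period = 1.0 / PULSE_HZ

def led_on_window(n):
    """Start and end time of the n-th LED 'on' interval."""
    start = n * period
    return start, start + DUTY * period

def exposure_start(n):
    """Center the camera exposure inside the n-th 'on' interval."""
    on_start, on_end = led_on_window(n)
    return on_start + ((on_end - on_start) - EXPOSURE_S) / 2.0

for frame in range(3):
    t0 = exposure_start(frame)
    print(f"frame {frame}: expose [{t0:.5f}, {t0 + EXPOSURE_S:.5f}] s")
```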
Regarding claim 4, Samples discloses the system of claim 1, wherein the camera system comprises two or more cameras mounted separately on the wearable head device (Fig. 2 and ¶0033 discloses HMD 200 imaging a handheld object 202 using a stereo camera imaging system (indicated by first camera 204 and second camera 206)) and wherein each of the two or more cameras views the plurality of light emitting elements from a different perspective (¶0134 discloses receiving, from a first camera of the stereo camera arrangement, first image data from a perspective of the first camera, and at 2604, receiving, from a second camera of the stereo camera arrangement, second image data from a perspective of the second camera; and ¶0135 discloses detecting, in the first image data and the second image data, a plurality of light sources of the handheld object).
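For illustration only, recovering an LED's 3D position from the two camera perspectives Samples describes (¶0134-¶0135) reduces, for a rectified stereo pair, to depth from disparity; the intrinsics, baseline, and blob coordinates below are hypothetical:

```python
def triangulate_rectified(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Depth from disparity for a rectified stereo pair:
    Z = f * B / d, then back-project to X and Y."""
    d = u_left - u_right                  # disparity in pixels
    if d <= 0:
        raise ValueError("point must have positive disparity")
    Z = f_px * baseline_m / d
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return X, Y, Z

# Hypothetical LED blob seen by both HMD cameras.
print(triangulate_rectified(372.0, 348.0, 251.0, f_px=600.0,
                            baseline_m=0.10, cx=320.0, cy=240.0))
```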
Regarding claim 5, Samples discloses the system of claim 1, further comprising an inertial measurement unit (IMU) (¶0035 discloses the handheld object 106 comprises an inertial measurement unit (IMU)), wherein said determining the predicted position of the object is further based on an output from the IMU (¶0084 discloses IMU data may also be used to inform motion prediction).
Regarding claim 6, Samples discloses the system of claim 1, wherein the method further comprises:
providing the predicted position as a control input (¶0026 discloses a handheld controller may be tracked as is the devices are moved through space by a user to provide inputs to control a user interface of the HMD) for a software application operating via the wearable head device (Fig. 16 and ¶0101-¶0103 discloses logic subsystem 1602 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs).
Regarding claim 7, Samples does not explicitly disclose the system of claim 1, wherein the method further comprises: using the position of the plurality of light emitting elements as an anchor for virtual content presented via the see-through display of the wearable head device.
However, in a similar field of endeavor, Balan discloses using the position of the plurality of light emitting elements as an anchor for the virtual content presented via the see-through display of the wearable head device (¶0002, ¶0060 and ¶0065 discloses one or more wireless hand-held inertial controllers that the user of the system can manipulate to interact with the HMD and provide user input to the HMD, including, but not limited to, controlling and moving a virtual cursor, selection, movement and rotation of objects, scrolling, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samples so that the display device interfaces with wireless hand-held inertial controllers for providing user input to the display device (¶0004).
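For illustration only, anchoring virtual content at the tracked position, as in the Samples-Balan combination, amounts to projecting the predicted 3D position into display coordinates; the pinhole display model and values below are assumptions:

```python
import numpy as np

def project_to_display(p_device, f_px=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a point in the head-device frame to 2D display
    coordinates; assumes p_device = (x, y, z) in meters with z > 0 in front
    of the display."""
    x, y, z = p_device
    if z <= 0:
        raise ValueError("object must be in front of the display")
    return (cx + f_px * x / z, cy + f_px * y / z)

predicted_pos = np.array([0.05, -0.02, 0.60])   # from the forward predictor
print(project_to_display(predicted_pos))        # display-space anchor (pixels)
```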
Regarding claims 8-14, these method claims have similar limitations as apparatus claims 1-7, respectively, and are therefore rejected on similar grounds.
Regarding claim 15, this claim has similar limitations as apparatus claim 1 and is therefore rejected on similar grounds.
Moreover, Samples discloses a non-transitory computer readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising (abstract discloses a storage device storing instructions executable by the logic device):…
Regarding claims 16-20, these claims have similar limitations as apparatus claims 2-6, respectively, and are therefore rejected on similar grounds.
Regarding claim 21, Samples does not explicitly disclose the system of claim 1, wherein: the identifying the relationship based on the detected acoustic signal comprises triangulating a plurality of detected acoustic signals using a microphone array comprising the one or more microphones, the plurality of detected acoustic signals comprising the detected acoustic signal emitted by one or more audio sources of the object.
However, in a similar field of endeavor, Foxlin discloses the identifying the relationship based on the detected acoustic signal comprises triangulating a plurality of detected acoustic signals using a microphone array comprising the one or more microphones (Fig. 1 and ¶0075 discloses three microphones 80, 82 and 84 arranged in a known geometry receiving ultrasonic pulses emitted by a hand-mounted beacon 14, where the position is computed using range measurements from multiple microphones, which is characteristic of triangulation using a microphone array), the plurality of detected acoustic signals comprising the detected acoustic signal emitted by one or more audio sources of the object (14) (Fig. 1 and ¶0057 discloses head orientation tracker 30 with a head-worn tracking device 12 that tracks a hand-mounted 3D beacon 14 relative to the head 16).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samples with the teachings of Foxlin, because such acoustic tracking offers the potential for low cost, light weight, low power, good resolution, and high update rates when tracking at the relatively close ranges typical of head-hand displacements (Foxlin, ¶0057).
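For illustration only, the range-based position computation Foxlin describes (¶0075) is trilateration; a minimal sketch with a hypothetical microphone geometry follows (ranges would come from ultrasonic time of flight, r = speed_of_sound * t):

```python
import numpy as np

# Three microphones in the z = 0 plane at assumed (not Foxlin's) positions.
mics = np.array([[0.00, 0.00, 0.0],
                 [0.12, 0.00, 0.0],
                 [0.06, 0.10, 0.0]])

def trilaterate(mics, ranges):
    """Solve |p - m_i|^2 = r_i^2 by subtracting the first equation to
    linearize in (x, y), then recover z up to sign (beacon assumed in
    front of the array, z > 0)."""
    m0, r0 = mics[0], ranges[0]
    A = 2.0 * (mics[1:, :2] - m0[:2])
    b = (r0**2 - ranges[1:]**2
         + np.sum(mics[1:, :2]**2, axis=1) - np.sum(m0[:2]**2))
    xy = np.linalg.solve(A, b)
    z = np.sqrt(max(r0**2 - np.sum((xy - m0[:2])**2), 0.0))
    return np.array([xy[0], xy[1], z])

true_p = np.array([0.05, 0.03, 0.40])             # hypothetical beacon position
ranges = np.linalg.norm(mics - true_p, axis=1)    # noiseless ranges for the demo
print(trilaterate(mics, ranges))                  # recovers true_p
```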
Response to Arguments
5. Applicant's arguments with respect to claims 1-21 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRIYANK J SHAH, whose telephone number is (571) 270-3732. The examiner can normally be reached Monday through Friday, 10:00 a.m. to 6:00 p.m.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LunYi Lao, can be reached at 571-272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PRIYANK J SHAH/Primary Examiner, Art Unit 2621