DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/11/2026 has been entered.
Response to Arguments
Applicant's arguments filed 02/11/2026 have been fully considered but they are not persuasive. With regard to independent claims 1, 10, and 19, Applicant argues that the previously cited references fail to teach or render obvious the new claim limitation “the at least two object-tracking assemblies comprising a first object-tracking assembly disposed adjacent to a second object-tracking assembly, the first object-tracking assembly aligned along a third axis parallel to the first axis”. Examiner respectfully disagrees, as Fortin-Deschenes et al. in at least Fig. 4A and paragraph 0077 discloses an embodiment with two object-tracking assemblies that are adjacent to each other along a third axis parallel to the first axis. Fig. 4A of Fortin-Deschenes et al. is reproduced below for reference.
[Fig. 4A of Fortin-Deschenes et al. (media_image1.png), reproduced in greyscale]
As can be seen from Fig. 4A above, the two circled assemblies comprising camera sensors 62 and 64 each comprise a plurality of imaging devices (62, 66) and (64, 68) aligned along a first axis. The two assemblies also comprise a plurality of illumination devices (76, 77) and (74, 75) that are aligned along a second axis perpendicular to the first axis. Also, as can be seen from Fig. 4A, the two assemblies are aligned along a third axis that is parallel to the first axis. The rejections are therefore maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 9-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fortin-Deschenes et al. (US 2019/0258058) in view of Bosworth (U.S. Patent No. 11,507,203).
Consider claim 1. Fortin-Deschenes et al. teaches A housing of an MR head-wearable device (Fig. 1 and paragraph 0072, HMD 7), comprising:
one or more displays within an interior surface of the housing configured to cause presentation of an extended reality environment while a user is wearing the MR head-wearable device; (paragraph 0068, The proposed HMD system implements virtual reality by having a user look at a display through a wide angle eyepiece. Paragraph 0073, The user (1) wearing the HMD (7) looks at a display (27) through wide angle eyepieces (26, 35)).
and at least two object-tracking assemblies disposed on an exterior surface of the housing, (Fig. 4A and paragraph 0077, left object-tracking assembly comprising cameras 64 and 68 and right object-tracking assembly comprising cameras 62 and 66).
each of the at least two object-tracking assemblies including: a plurality of imaging devices aligned along a first axis, (Fig. 4A and paragraph 0077, cameras 62, 64, 66, and 68 aligned along a first axis).
and an illumination device aligned along a second axis, perpendicular to the first axis, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 aligned along a second axis perpendicular to the first axis).
the illumination device disposed at a predetermined intermediate distance between at least two respective imaging devices of the plurality of imaging devices, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 disposed at intermediate distances between cameras 64 and 68 as well as 62 and 66).
wherein, while the MR head-wearable device is performing operations, the object-tracking assemblies are configured to determine, based on imaging data obtained by the plurality of imaging devices while the illuminating device is generating ambient lighting conditions, that the obtained imaging data satisfies an object-tracking threshold for causing presentation of a tracked object via the one or more displays. (See at least paragraph 0081, stereo cameras (66, 68) supported with the LED flood lights (74, 75, 76, 77) provide better resolution at the cost of more image processing computation time. In the exemplary process, the body mesh (304) is extracted from the depth (156) and body segmentation (302) information by detecting close 3D data, or rather, by applying a threshold on the intensity when using the LED flood lights (74, 75, 76, 77). Next, a skeletal model (306) is extracted from the mesh. Finally, predefined gestures are recognized (308) by tracking the body motion and matching the skeleton shape and motion to the gesture models. The recognized gesture type, position, and body stereo mask (310) are provided for graphics rendering (124)). For clarity of the record, an illustrative sketch of this paragraph 0081 pipeline is set forth following the analysis of claim 1 below.
the at least two object-tracking assemblies comprising a first object-tracking assembly disposed adjacent to a second object-tracking assembly, the first object-tracking assembly aligned along a third axis parallel to the first axis. (Fig. 4A and paragraph 0077, left and right object-tracking assemblies are aligned along a third axis parallel to the first axis).
Fortin-Deschenes et al. does not specifically disclose wherein a first imaging device is disposed above a second imaging device such that the first imaging device is closer to a field of view of the user. However, Bosworth in at least Fig. 1A discloses a virtual reality headset 104 comprising a plurality of imaging devices, cameras 105A and 105B, wherein camera 105A is disposed above camera 105B and is closer to the field of view of the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the imaging devices of Fortin-Deschenes et al. to be configured like the imaging devices of Bosworth in order to capture a more accurate image of the user’s point of view.
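For clarity of the record regarding the paragraph 0081 pipeline quoted above, the following is a minimal illustrative sketch (examiner's illustration only, not code from Fortin-Deschenes et al.; the array shapes, helper names, and the mean-squared-error gesture matcher are assumptions made solely for illustration):

    import numpy as np

    # Illustrative sketch of the paragraph 0081 pipeline of
    # Fortin-Deschenes et al.; all names, shapes, and the matching
    # criterion below are hypothetical.

    def segment_body(intensity: np.ndarray, threshold: float) -> np.ndarray:
        # Nearby surfaces reflect more of the LED flood light (74-77),
        # so thresholding image intensity approximates detecting
        # "close 3D data" (body segmentation 302).
        return intensity > threshold

    def extract_body_mesh(depth: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
        # Keep the depth samples (156) that fall inside the segmented
        # body region to form the body mesh (304).
        return np.where(body_mask, depth, np.nan)

    def recognize_gesture(skeleton_track: np.ndarray, gesture_models: dict) -> str:
        # Match the tracked skeletal motion (306) against predefined
        # gesture templates (308); nearest template by mean-squared
        # error stands in here for the reference's unspecified matcher.
        errors = {name: float(np.nanmean((skeleton_track - model) ** 2))
                  for name, model in gesture_models.items()}
        return min(errors, key=errors.get)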
Consider claim 10. Fortin-Deschenes et al. teaches A mixed-reality (MR) head-wearable device (Fig. 1 and paragraph 0072, HMD 7), comprising:
a housing; (Fig. 1 and paragraph 0072, HMD 7).
one or more displays within an interior surface of the housing configured to cause presentation of an extended reality environment while a user is wearing the MR head-wearable device; (paragraph 0068, The proposed HMD system implements virtual reality by having a user look at a display through a wide angle eyepiece. Paragraph 0073, The user (1) wearing the HMD (7) looks at a display (27) through wide angle eyepieces (26, 35)).
and at least two object-tracking assemblies disposed on an exterior surface of the housing, (Fig. 4A and paragraph 0077, left object-tracking assembly comprising cameras 64 and 68 and right object-tracking assembly comprising cameras 62 and 66).
each of the at least two object-tracking assemblies including: a plurality of imaging devices aligned along a first axis, (Fig. 4A and paragraph 0077, cameras 62, 64, 66, and 68 aligned along a first axis).
and an illumination device aligned along a second axis, perpendicular to the first axis, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 aligned along a second axis perpendicular to the first axis).
the illumination device disposed at a predetermined intermediate distance between at least two respective imaging devices of the plurality of imaging devices, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 disposed at intermediate distances between cameras 64 and 68 as well as 62 and 66).
wherein, while the MR head-wearable device is performing operations, the object-tracking assembly is configured to determine, based on imaging data obtained by the plurality of imaging devices while the illuminating device is generating ambient lighting conditions, that the obtained imaging data satisfies an object-tracking threshold for causing presentation of a tracked object via the one or more displays. (See at least paragraph 0081, stereo cameras (66, 68) supported with the LED flood lights (74, 75, 76, 77) provide better resolution at the cost of more image processing computation time. In the exemplary process, the body mesh (304) is extracted from the depth (156) and body segmentation (302) information by detecting close 3D data, or rather, by applying a threshold on the intensity when using the LED flood lights (74, 75, 76, 77). Next, a skeletal model (306) is extracted from the mesh. Finally, predefined gestures are recognized (308) by tracking the body motion and matching the skeleton shape and motion to the gesture models. The recognized gesture type, position, and body stereo mask (310) are provided for graphics rendering (124)).
the at least two object-tracking assemblies comprising a first object-tracking assembly disposed adjacent to a second object-tracking assembly, the first object-tracking assembly aligned along a third axis parallel to the first axis. (Fig. 4A and paragraph 0077, left and right object-tracking assemblies are aligned along a third axis parallel to the first axis).
Fortin-Deschenes et al. does not specifically disclose wherein a first imaging device is disposed above a second imaging device such that the first imaging device is closer to a field of view of the user. However, Bosworth in at least Fig. 1A discloses a virtual reality headset 104 comprising a plurality of imaging devices, cameras 105A and 105B, wherein camera 105A is disposed above camera 105B and is closer to the field of view of the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the imaging devices of Fortin-Deschenes et al. to be configured like the imaging devices of Bosworth in order to capture a more accurate image of the user’s point of view.
Consider claim 19. Fortin-Deschenes et al. teaches A method, comprising:
performing MR operations at an MR head-wearable device that includes a housing (Fig. 1 and paragraph 0072, HMD 7), the housing including:
one or more displays within an interior surface of the housing configured to cause presentation of an extended reality environment while a user is wearing the MR head-wearable device; (paragraph 0068, The proposed HMD system implements virtual reality by having a user look at a display through a wide angle eyepiece. Paragraph 0073, The user (1) wearing the HMD (7) looks at a display (27) through wide angle eyepieces (26, 35)).
and at least two object-tracking assemblies disposed on an exterior surface of the housing, (Fig. 4A and paragraph 0077, left object-tracking assembly comprising cameras 64 and 68 and right object-tracking assembly comprising cameras 62 and 66).
each of the at least two object-tracking assemblies including: a plurality of imaging devices aligned along a first axis, (Fig. 4A and paragraph 0077, cameras 62, 64, 66, and 68 aligned along a first axis).
and an illumination device aligned along a second axis, perpendicular to the first axis, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 aligned along a second axis perpendicular to the first axis).
the illumination device disposed at a predetermined intermediate distance between at least two respective imaging devices of the plurality of imaging devices, (Fig. 4A and paragraph 0077, LED flood lights 74, 75, 76, and 77 disposed at intermediate distances between cameras 64 and 68 as well as 62 and 66).
determining, based on imaging data obtained by the plurality of imaging devices while the illuminating device is generating ambient lighting conditions, that the obtained imaging data satisfies an object-tracking threshold; and in accordance with the determining that the obtained imaging data satisfies the object-tracking threshold, causing presentation of a tracked object via the one or more displays. (See at least paragraph 0081, stereo cameras (66, 68) supported with the LED flood lights (74, 75, 76, 77) provide better resolution at the cost of more image processing computation time. In the exemplary process, the body mesh (304) is extracted from the depth (156) and body segmentation (302) information by detecting close 3D data, or rather, by applying a threshold on the intensity when using the LED flood lights (74, 75, 76, 77). Next, a skeletal model (306) is extracted from the mesh. Finally, predefined gestures are recognized (308) by tracking the body motion and matching the skeleton shape and motion to the gesture models. The recognized gesture type, position, and body stereo mask (310) are provided for graphics rendering (124)).
the at least two object-tracking assemblies comprising a first object-tracking assembly disposed adjacent to a second object-tracking assembly, the first object-tracking assembly aligned along a third axis parallel to the first axis. (Fig. 4A and paragraph 0077, left and right object-tracking assemblies are aligned along a third axis parallel to the first axis).
Fortin-Deschenes et al. does not specifically disclose wherein a first imaging device is disposed above a second imaging device such that the first imaging device is closer to a field of view of the user. However, Bosworth in at least Fig. 1A discloses a virtual reality headset 104 comprising a plurality of imaging devices, cameras 105A and 105B, wherein camera 105A is disposed above camera 105B and is closer to the field of view of the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the imaging devices of Fortin-Deschenes et al. to be configured like the imaging devices of Bosworth in order to capture a more accurate image of the user’s point of view.
Consider claims 2, 11, and 20. Fortin-Deschenes et al. further teaches The housing of claim 1, wherein the at least two respective imaging devices and the illumination device of the object-tracking assemblies are arranged in a triangular configuration. (Fig. 1, imaging devices 12 and 13 and illumination device 14 are arranged in a triangular configuration).
Consider claims 3 and 12. Fortin-Deschenes et al. further teaches The housing of claim 1, wherein the plurality of imaging devices includes (i) at least one visible color imaging sensor, (paragraph 0072, RGB cameras 11 and 12).
and (ii) at least one SLAM camera. (paragraph 0080, SLAM).
Consider claims 4 and 13. Fortin-Deschenes et al. further teaches The housing of claim 3, wherein each one of the at least one visible color imaging sensor, the at least one SLAM camera, and the illumination device are covered by respective cover windows made of distinct materials. (paragraph 0077, lenses 63, 65, 67, and 69).
Consider claims 9 and 18. Fortin-Deschenes et al. further teaches The housing of claim 1, wherein the housing occludes a field of view of the user while the user is wearing the MR head-wearable device. (As can be seen from Fig. 2A, the housing of the HMD occludes the user’s field of view from the sides).
Claims 5-7 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Fortin-Deschenes et al. (US 2019/0258058) in view of Bosworth (U.S. Patent No. 11,507,203) and further in view of Sztuk et al. (U.S. Patent No. 11,435,593).
Consider claims 5 and 14. Fortin-Deschenes et al. in view of Bosworth does not specifically disclose the plurality of imaging devices includes a third imaging sensor, different than the at least two respective imaging devices, and the third imaging sensor is configured to increase a field of view of the user. However, Sztuk et al. in at least Fig. 9 and col. 10, lines 1-16 discloses HMD 900 comprising cameras 902 and 908 disposed on the right and left sides of the HMD, respectively, thus increasing the field of view of the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Sztuk et al. with the system of Fortin-Deschenes et al. in view of Bosworth in order to improve the system by giving the user a wide field of view and enhance the user experience.
Consider claims 6 and 15. Sztuk et al. further teaches The housing of claim 5, wherein the third imaging sensor is located on a side-facing portion of the exterior surface of the housing. (Fig. 9, cameras 902 and 908 located on side-facing portions of the HMD).
Consider claims 7 and 16. Fortin-Deschenes et al. in view of Bosworth does not specifically disclose wherein the illumination device and a respective imaging device of the plurality of imaging devices are angled downward. However, Sztuk et al. in at least Fig. 9 and col. 10, lines 17-26 discloses an HMD 900 comprising cameras 902 and 908 that may be angled downward, such as at 30 degrees, 60 degrees, or any other appropriate angle. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Sztuk et al. with the system of Fortin-Deschenes et al. in view of Bosworth in order to improve the system by giving the user a wide field of view and enhance the user experience.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fortin-Deschenes et al. (US 2019/0258058) in view of Bosworth (U.S. Patent No. 11,507,203) and further in view of Katz et al. (US 2017/0287194).
Consider claims 8 and 17. Fortin-Deschenes et al. in view of Bosworth does not specifically disclose wherein the illumination device is configured to extend beyond the exterior surface of the housing. However, Katz et al. in at least Figs. 3 and 4 discloses an HMD comprising an illumination source 200 that extends beyond the exterior surface of the housing of the HMD. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the illumination devices of Fortin-Deschenes et al. to extend beyond the exterior surface of the housing, as disclosed by Katz et al., in order to illuminate an area exterior to the housing of the HMD and improve the user experience.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAYCE R BIBBEE whose telephone number is (571)270-7222. The examiner can normally be reached Mon-Thurs 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAYCE R BIBBEE/Examiner, Art Unit 2624