Prosecution Insights
Last updated: April 19, 2026
Application No. 18/422,026

SYSTEM AND METHOD OF COORDINATE SYSTEM ALIGNMENT FOR MULTIPLE HEAD MOUNTED DISPLAYS

Status: Non-Final Office Action (§103)

Filed: Jan 25, 2024
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: HTC Corporation
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (247 granted / 334 resolved), +16.0% vs TC avg (above average)
Interview Lift: +13.3% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 334 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/13/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6-12, 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Micusik et al. (US 20230186521 A1) in view of Yang (US 12217458 B1).

Regarding claim 1.
Micusik discloses A system of coordinate system alignment for multiple head mounted displays (abstract, an interactive augmented reality experience between two eyewear devices by using alignment between respective 6DOF trajectories, also referred to herein as ego motion alignment), comprising:

a first head mounted display, comprising a first camera ([0032] the eyewear device 100 includes two cameras 114A, 114B); and a second head mounted display (figure 8, [0101] a first eyewear device 100A operated by a user A, and a second eyewear device 100B operated by a user B), wherein the first head mounted display is configured to:

capture a first image of the second head mounted display by the first camera ([0102] identify the physical attribute in the plurality of frames generated by cameras 114A and 114B, determine the face or mouth of the other user);

perform a first face detection on the first image ([0102] identify the physical attribute in the plurality of frames generated by cameras 114A and 114B, determine the face or mouth of the other user) to obtain a first bounding box corresponding to a first coordinate system ([0106] Eyewear device 100A of user A and eyewear device 100B of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience; [0114] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 tracker 906; [0125] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 detector 904);

align the first coordinate system corresponding to the first head mounted display with a second coordinate system corresponding to the second head mounted display according to a first position of the first bounding box, so as to update the first coordinate system ([0106] establishing a collaborative AR experience between users of eyewear devices 100 by using alignment between respective 6DOF trajectories generated by 6DOF pose trackers 900, also referred to herein as ego motion alignment. Eyewear device 100A of user A and eyewear device 100B of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience. This enables sharing common 3D content between multiple eyewear users; [0114] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 tracker 906; [0125] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 detector 904); and

display an output image according to the updated first coordinate system ([0106] sharing common 3D content between multiple eyewear users; users of eyewear devices 100 to add virtual 3D content and see the 3D content properly positioned through their eyewear device 100; Each user can simultaneously modify the virtual 3D content),

wherein the first head mounted display is further configured to: obtain a coordinate of the first bounding box ([0106] Eyewear device 100A of user A and eyewear device 100B of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience; [0114] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 tracker 906, runs a more sophisticated and accurate tracking algorithm to track a point P2 on the eyewear device 100 instead of on the face, knowing the x,y coordinates of point P2 in some camera images 302A and 302B; [0125] uses the face detection of the first option, e.g. a bounding box of the face in the image 302A and 302B to initialize a more accurate eyewear device 100 detector 904, runs a more sophisticated and accurate tracking algorithm to track a point P2 on the eyewear device 100 instead of on the face, knowing the x,y coordinates of point P2 in some camera images 302A and 302B (inherently, the initialization of tracking/detecting is to obtain the x,y coordinates of the bounding box or point P2)); and

calculate the first position according to the coordinate, intrinsic parameters of the first camera, and extrinsic parameters of the first camera ([0106] Eyewear device 100A of user A and eyewear device 100B of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience; [0070] the eyewear device 100 includes a collection of motion-sensing components, provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw); [0088] The processor 432 of the eyewear device 100 determines its position with respect to one or more objects 604 within the environment 600 using captured images, constructs a map of the environment 600 using a coordinate system (x, y, z) for the environment 600, and determines its position within the coordinate system (inherently, determine its position using captured images (captured images include the coordinate) according to camera intrinsic and extrinsic parameters); [0086] construct the map and determine location and position information using a simultaneous localization and mapping (SLAM) algorithm applied to data received from one or more sensors, a SLAM algorithm is used to construct and update a map of an environment, while simultaneously tracking and updating the location of a device (or a user) within the mapped environment; [0101] (Inherently, determine location and position information using a SLAM algorithm applied to data received from sensors (data received from sensors includes the coordinate) according to both camera intrinsic and extrinsic parameters)).

However, Micusik does not explicitly disclose calculate the first position according to the coordinate, an inverse matrix of an intrinsic matrix of the first camera, and an inverse matrix of an extrinsic matrix of the first camera. Yang discloses calculate a position according to a coordinate (column 8 equation (2)), an inverse matrix of an intrinsic matrix of a camera (column 9 equations (3) and (6)), and an inverse matrix of an extrinsic matrix of the camera (column 9 equations (7) and (8)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Micusik and Yang, to calculate the first position according to the coordinate, an inverse matrix of an intrinsic matrix of the first camera, and an inverse matrix of an extrinsic matrix of the first camera, in order to align the coordinate systems.
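For illustration only (this is not part of the Office action, Yang's equations, or the claimed method), the disputed limitation describes a standard pinhole-camera back-projection: a pixel coordinate, such as the center of the face-detection bounding box, is mapped through the inverse intrinsic matrix, scaled by a depth value, and mapped through the inverse extrinsic matrix to obtain a 3D position. A minimal sketch, with all matrix values, the pixel coordinate, and the helper name assumed for the example:

```python
import numpy as np

# Assumed intrinsic matrix (focal lengths and principal point are illustrative values).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsic matrix [R | t] mapping world coordinates to camera coordinates
# (identity here, i.e. the camera frame coincides with the world frame).
E = np.eye(4)

def pixel_to_world(u, v, depth, K, E):
    """Back-project pixel (u, v) at the given depth into the world frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # inverse intrinsic: pixel -> camera ray
    p_cam = np.append(depth * ray_cam, 1.0)             # scale by depth, make homogeneous
    p_world = np.linalg.inv(E) @ p_cam                  # inverse extrinsic: camera -> world
    return p_world[:3]

# e.g. the center of the face-detection bounding box, observed 1.5 m away
print(pixel_to_world(652.0, 340.0, 1.5, K, E))
```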
Regarding claim 2.

Micusik discloses The system of claim 1, wherein the second head mounted display comprises a second camera ([0032] the eyewear device 100 includes two cameras 114A, 114B; figure 8, [0101] a first eyewear device 100A operated by a user A, and a second eyewear device 100B operated by a user B), wherein the second head mounted display is communicatively connected to the first head mounted display ([0103] the eyewear devices 100A and 100B are in a session, and communicating with each other) and is configured to: capture a second image of the first head mounted display by the second camera ([0102] identify the physical attribute in the plurality of frames generated by cameras 114A and 114B, determine the face or mouth of the other user); perform a second face detection on the second image to obtain a second bounding box corresponding to the second coordinate system ([0102] identify the physical attribute in the plurality of frames generated by cameras 114A and 114B, determine the face or mouth of the other user; [0106] Eyewear device 100A of user A and eyewear device 100B of user B track the eyewear device of the other user, or an object of the other user, such as on the user's face, to provide the collaborative AR experience); and transmit information to the first head mounted display, wherein the information is associated with a second position of the second bounding box ([0103] the respective (x, y, z) coordinate positions of each eyewear device 100 are shared with the other eyewear devices(s) automatically).

Regarding claim 3.

Micusik discloses The system of claim 2, wherein the first head mounted display aligns the first coordinate system with the second coordinate system according to the information ([0106] establishing a collaborative AR experience between users of eyewear devices 100 by using alignment between respective 6DOF trajectories generated by 6DOF pose trackers 900, also referred to herein as ego motion alignment; [0120] Once the ego motion alignment transformation 910 is estimated by processor 432, it allows processor 432 to transform a virtual 3D content 912 from a local coordinate system of one user/eyewear device to another user/eyewear device at 914. The same virtual 3D content 912 can then be projected and properly rendered into the eyewear device 100 of both users such that the projected 3D content is synchronized and properly displayed i.e. rendered from the correct viewpoint per user).

Regarding claim 4.

Micusik discloses The system of claim 2, wherein the information comprises a difference between the second position and a third position of the second head mounted display ([0070] the eyewear device 100 includes a collection of motion-sensing components, provide position, orientation, and motion data about the device relative to six axes (x, y, z, pitch, roll, yaw)).
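For illustration only (not Micusik's ego-motion alignment pipeline), the alignment discussed for claim 3 amounts to applying an estimated rigid transform between the two headsets' map frames so that shared virtual content renders consistently on both devices. A minimal sketch, with the rotation, translation, and content point all assumed values:

```python
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed alignment: headset B's map frame is rotated 90 degrees about Y and offset
# 2 m along X relative to headset A's map frame (illustrative values only).
theta = np.pi / 2
R_b_to_a = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                     [ 0.0,           1.0, 0.0          ],
                     [-np.sin(theta), 0.0, np.cos(theta)]])
T_b_to_a = rigid_transform(R_b_to_a, np.array([2.0, 0.0, 0.0]))

# A piece of shared virtual content expressed in B's coordinate system (homogeneous)...
p_b = np.array([0.5, 1.2, -1.0, 1.0])
# ...re-expressed in A's coordinate system for rendering on headset A.
p_a = T_b_to_a @ p_b
# The inverse transform maps content authored in A's frame back into B's frame.
p_b_again = np.linalg.inv(T_b_to_a) @ p_a
print(p_a[:3], p_b_again[:3])
```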
Regarding claim 6.

Micusik discloses The system of claim 1, wherein the first head mounted display is further configured to: capture a plurality of images by the first camera ([0087] Sensor data includes images received from one or both of the cameras 114A, 114B); and perform a simultaneous localization and mapping algorithm according to the plurality of images to obtain the extrinsic matrix ([0086] construct the map and determine location and position information using a simultaneous localization and mapping (SLAM) algorithm applied to data received from one or more sensors, a SLAM algorithm is used to construct and update a map of an environment, while simultaneously tracking and updating the location of a device (or a user) within the mapped environment; [0101]).

Regarding claim 7.

Micusik discloses The system of claim 1, wherein the first head mounted display is further configured to: obtain depth information of the first head mounted display by the first camera ([0034] Each of the visible-light cameras 114A, 114B have a different frontward facing field of view which are overlapping to enable generation of three-dimensional depth images); and calculate the first position according to the depth information ([0055] The generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance)).

Regarding claim 8.

Micusik discloses The system of claim 1, wherein the first head mounted display further comprises a distance sensor ([0058] The device 100 may also include a depth sensor 213) and the first head mounted display is further configured to: obtain depth information of the first head mounted display by the distance sensor ([0058] The device 100 may also include a depth sensor 213); and calculate the first position according to the depth information ([0055] The generated depth images are in the three-dimensional space domain and can comprise a matrix of vertices on a three-dimensional location coordinate system that includes an X axis for horizontal position (e.g., length), a Y axis for vertical position (e.g., height), and a Z axis for depth (e.g., distance)).

Regarding claim 9. The same analysis has been stated in claim 1. Regarding claim 10. The same analysis has been stated in claim 2. Regarding claim 11. The same analysis has been stated in claim 3. Regarding claim 12. The same analysis has been stated in claim 4. Regarding claim 14. The same analysis has been stated in claim 6. Regarding claim 15. The same analysis has been stated in claim 7. Regarding claim 16. The same analysis has been stated in claim 8.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571)270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /XIAOLAN XU/Primary Examiner, Art Unit 2488

Prosecution Timeline

Jan 25, 2024
Application Filed
Aug 01, 2025
Non-Final Rejection — §103
Sep 22, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103
Jan 13, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315: IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586255: CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587652: IMAGE CODING DEVICE AND METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581120: Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12581092: TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
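A minimal sketch of how these headline figures appear to be derived (an assumption about the dashboard's methodology, not a documented formula): the 74% grant probability matches the career allow rate of 247/334, and the 87% with-interview figure matches that rate plus the 13.3 percentage-point interview lift.

```python
# Figures taken from the examiner statistics above; the combination rule is an assumption.
granted, resolved = 247, 334
allow_rate = granted / resolved                         # 0.7395... -> shown as 74%
interview_lift_pp = 13.3                                # percentage points
with_interview = allow_rate * 100 + interview_lift_pp   # 87.25 -> shown as 87%
print(round(allow_rate * 100), round(with_interview))   # 74 87
```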
