Prosecution Insights
Last updated: April 19, 2026
Application No. 18/787,834

REGISTRATION AND PARALLAX ERROR CORRECTION FOR VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

Non-Final OA (§103)
Filed: Jul 29, 2024
Examiner: IMPERIAL, JED-JUSTIN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 73% (above average), 289 granted / 397 resolved, +10.8% vs TC avg
Interview Lift: +12.1% (moderate lift), based on resolved cases with interview
Typical Timeline: 2y 6m avg prosecution, 13 currently pending
Career History: 410 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Based on career data from 397 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 6-8, 13-15, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bleyer et al. (US 2022/0028095 A1) in view of Yang et al. (“A Global Correction Framework for Camera Registration in Video See-Through Augmented Reality”).

In regards to claim 1, Bleyer teaches a method, comprising:

identifying a transformation associated with a video see-through (VST) extended reality (XR) device using a parallax error, the parallax error based on one or more differences between (i) one or more actual positions of contents of a scene as imaged using a see-through camera of the VST XR device and (ii) one or more perceived positions of the contents of the scene at a virtual camera associated with a viewpoint of a user when viewing a display panel of the VST device (e.g. [0004]-[0005]: an MR system may modify and/or reproject captured image data to correspond to the perspective of a user's eye to generate pass-through views; an MR system may modify and/or reproject captured image data to generate a pass-through view using depth information for the captured environment obtained by the MR system (e.g. using a depth system of the MR system, such as a time of flight camera, a rangefinder, stereoscopic depth cameras, etc.); pass-through views generated by modifying and/or reprojecting captured image data may at least partially correct for differences in perspective brought about by the physical separation between a user's eyes and the camera(s) of the MR system (known as the “parallax problem,” “parallax error,” or, simply “parallax”); parallax-corrected pass-through images may appear to a user as though they were captured by cameras that are co-located with the user's eyes; Examiner’s note: where the reprojection is the transformation; the difference in perspective shows the difference between the actual position and the virtual camera/viewpoint of the user);

obtaining an image that captures an object using the see-through camera (e.g. [0073]: to generate a parallax-corrected passthrough image, the scanning sensor(s) 105 may rely on its cameras (e.g. visible light camera(s) 110, low light camera(s) 115, thermal imaging camera(s) 120, UV camera(s) 125, or any other type of camera) to obtain one or more raw images of the environment (e.g. environment 175));

applying the transformation to the image in order to generate a modified image (e.g. further in [0073]: these raw images may also be used to determine depth data detailing the distance from the sensor to any objects captured by the raw images (e.g. a z-axis range or measurement); once these raw images are obtained, then a depth map can be computed from the depth data embedded or included within the raw images, and passthrough images can be generated (e.g. one for each pupil) using the depth map for any reprojections);

and rendering the modified image for presentation on the display panel (e.g. as above, [0004]-[0005]: modify and/or reproject captured image data to correspond to the perspective of a user's eye to generate pass-through views; parallax-corrected pass-through images may appear to a user as though they were captured by cameras that are co-located with the user's eyes; Examiner’s note: shows generated pass-through views would be rendered and displayed for viewing);

wherein applying the transformation modifies the image such that a perceived position of the object substantially matches an actual position of the object when the display panel is viewed by the user (e.g. as above, [0004]-[0005]: parallax-corrected pass-through images may appear to a user as though they were captured by cameras that are co-located with the user's eyes),

but does not explicitly teach the method, wherein the transformation uses a registration error, the registration error based on one or more differences between (i) one or more actual positions of contents of a scene as imaged using a see-through camera of the VST XR device and (ii) one or more perceived positions of the contents of the scene at a virtual camera associated with a viewpoint of a user when viewing a display panel of the VST device.

However, Yang teaches a method, wherein the transformation uses a registration error, the registration error based on one or more differences between (i) one or more actual positions of contents of a scene as imaged using a see-through camera of the VST XR device and (ii) one or more perceived positions of the contents of the scene at a virtual camera associated with a viewpoint of a user when viewing a display panel of the VST device (e.g. Section 5, page 11: to achieve global misregistration reduction in video-based augmented reality systems; accurate visualization of virtual content relies on relative transformation between the camera and virtual objects, which are both registered in the same reference coordinate system; Section 5.2, page 12: error correction is performed on the HMD-to-camera transformation matrix; six parameters, encompassing translations and orientations, can be adjusted to minimize the initially estimated result obtained from the uncorrected input propagated from the world-to-HMD transformation; see also Section 3, 2nd paragraph on page 5, Fig. 6: transformation model of the virtual cameras; the base origin represents the virtual world origin, and the virtual objects are positioned in this world coordinate system through world-to-object transformation; the HMD tracking center is also tracked in the virtual world coordinates, enabling the acquisition of the world-to-HMD transformation via the tracking system; this transformation provides the relative pose of the HMD, including its position and orientation, in the virtual world system; the HMD-to-camera transformation remains fixed, as the real camera is rigidly mounted on the HMD; the pose of the virtual camera with respect to the virtual world coordinates is determined by two transformations: world-to-HMD and HMD-to-camera; each virtual camera then generates a perspective projection of the objects onto an image plane to produce a 2D virtual image).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings/combination of Bleyer to correct registration errors, in the same conventional manner as taught by Yang, as both deal with viewing images in a video see-through device. The motivation to combine the two would be that it would allow the correction of registration errors.

In regards to device claim 8 and medium claim 15, claim(s) 8, 15 recite(s) limitations that is/are similar in scope to the limitations recited in claim 1. Therefore, claim(s) 8, 15 is/are subject to rejections under the same rationale as applied hereinabove for claim 1.

In regards to claim 6, the combination of Bleyer and Yang teaches a method, wherein the transformation comprises a static transformation (e.g. Yang as above, Section 5.2, page 12: error correction is performed on the HMD-to-camera transformation matrix; six parameters, encompassing translations and orientations, can be adjusted to minimize the initially estimated result obtained from the uncorrected input propagated from the world-to-HMD transformation; further in Section 5.2, page 12: certain individual adjustments may yield similar corrections; for instance, translating along the X-axis and rotating along the Y-axis can produce comparable effects; Examiner’s note: shows use of static transformations). In addition, the same rationale/motivation of claim 1 is used for claim 6.

In regards to device claim 13, claim(s) 13 recite(s) limitations that is/are similar in scope to the limitations recited in claim 6. Therefore, claim(s) 13 is/are subject to rejections under the same rationale as applied hereinabove for claim 6.

In regards to claim 7, the combination of Bleyer and Yang teaches a method, wherein the transformation is based on one or more extrinsic parameters of the see-through camera, one or more extrinsic parameters of the display panel, and one or more extrinsic parameters of the virtual camera (e.g. Yang as above, Section 3, 2nd paragraph on page 5, Fig. 6: transformation model of the virtual cameras; the base origin represents the virtual world origin, and the virtual objects are positioned in this world coordinate system through world-to-object transformation; the HMD tracking center is also tracked in the virtual world coordinates, enabling the acquisition of the world-to-HMD transformation via the tracking system; this transformation provides the relative pose of the HMD, including its position and orientation, in the virtual world system; the HMD-to-camera transformation remains fixed, as the real camera is rigidly mounted on the HMD; the pose of the virtual camera with respect to the virtual world coordinates is determined by two transformations: world-to-HMD and HMD-to-camera; each virtual camera then generates a perspective projection of the objects onto an image plane to produce a 2D virtual image; Examiner’s note: shows transformations are based on extrinsic parameters (coordinate / orientation) of camera, display (HMD), virtual camera). In addition, the same rationale/motivation of claim 1 is used for claim 7.

In regards to device claim 14 and medium claim 20, claim(s) 14, 20 recite(s) limitations that is/are similar in scope to the limitations recited in claim 7. Therefore, claim(s) 14, 20 is/are subject to rejections under the same rationale as applied hereinabove for claim 7.

Allowable Subject Matter

Claim(s) 2-5, 9-12, 16-19 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. To note, claim(s) 3-4, 10-11, 17-18 is/are included as they depend respectively on claim(s) 2, 9, 16.

The following is a statement of reasons for the indication of allowable subject matter: Claim(s) 2-5, 9-12, 16-19 was/were carefully reviewed and a search with regards to independent claim(s) 1, 8, 15 has been made. Accordingly, those claim(s) are believed to be distinct from the prior art searched.

Regarding claim(s) 2-4, 9-11, 16-18 and specifically independent claim(s) 1, 8, 15, the prior art search was found to neither anticipate nor suggest the method of claim 1/device of claim 8/medium of claim 15, further comprising: self-registering the see-through camera and the virtual camera by integrating (i) a self-calibration of the virtual camera based on the registration error and (ii) a self-calibration of the virtual camera based on the parallax error (emphasis added).

Regarding claim(s) 5, 12, 19 (and specifically independent claim(s) 1, 8, 15), the prior art search was found to neither anticipate nor suggest the method of claim 1/device of claim 8/medium of claim 15, wherein identifying the transformation comprises: using a registration model to identify the registration error; and using a parallax model to identify the parallax error, the parallax model separate from the registration model (emphasis added).

It is viewed that any of the previously cited references or any of the prior art searched, in part or in whole, cannot be combined in such a way to render the claimed invention obvious.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JED-JUSTIN IMPERIAL whose telephone number is (571) 270-5807. The examiner can normally be reached Monday to Friday, 9am - 6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JED-JUSTIN IMPERIAL/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616
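For technical context on the cited art: the Bleyer passages quoted above describe generating parallax-corrected passthrough by computing a depth map from the raw see-through images and reprojecting each pixel to the user's eye (virtual camera) viewpoint. The following is a minimal sketch of that kind of depth-based reprojection, assuming pinhole intrinsics, a known camera-to-eye extrinsic, and simple forward splatting; the function and variable names are illustrative and are not taken from Bleyer or from the application.

```python
# Hedged sketch of depth-based reprojection for parallax-corrected passthrough.
# All names, intrinsics, and the forward-warping scheme are illustrative
# assumptions, not the applicant's or the reference's actual implementation.
import numpy as np

def reproject_to_eye(image, depth, K_cam, K_eye, T_eye_from_cam):
    """Warp a see-through camera image to a virtual (eye) camera viewpoint.

    image:  (H, W, 3) color image from the see-through camera
    depth:  (H, W) per-pixel depth in meters
    K_cam:  (3, 3) see-through camera intrinsics
    K_eye:  (3, 3) virtual/eye camera intrinsics
    T_eye_from_cam: (4, 4) extrinsic transform from camera frame to eye frame
    """
    H, W = depth.shape
    # Pixel grid in homogeneous image coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)

    # Unproject each pixel to a 3D point in the camera frame using its depth.
    rays = np.linalg.inv(K_cam) @ pix
    pts_cam = rays * depth.reshape(1, -1)
    pts_cam_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

    # Move the points into the virtual (eye) camera frame and project.
    pts_eye = (T_eye_from_cam @ pts_cam_h)[:3]
    proj = K_eye @ pts_eye
    z = proj[2]
    uv_eye = np.round(proj[:2] / np.clip(z, 1e-6, None)).astype(int)

    # Forward-splat colors into the output image (nearest pixel; no occlusion
    # handling or hole filling, which a real pipeline would need).
    out = np.zeros_like(image)
    ok = (z > 1e-6) & (uv_eye[0] >= 0) & (uv_eye[0] < W) & (uv_eye[1] >= 0) & (uv_eye[1] < H)
    out[uv_eye[1, ok], uv_eye[0, ok]] = image.reshape(-1, 3)[ok]
    return out
```

A production pipeline would additionally handle occlusions and holes and would run the reprojection once per pupil, as the cited paragraph [0073] notes.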
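Similarly, the Yang citation describes obtaining the virtual camera pose by composing the world-to-HMD and HMD-to-camera transformations, and reducing registration error by adjusting six parameters (translations and orientations) of the HMD-to-camera matrix. Below is a minimal sketch of that transform chain, assuming an XYZ Euler parameterization and a simple reprojection-error metric; both choices are illustrative assumptions rather than details from the reference.

```python
# Hedged sketch of the world->HMD->camera transform chain with a six-parameter
# correction, in the spirit of the Yang citation. Names and conventions are
# illustrative assumptions.
import numpy as np

def pose_from_params(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 rigid transform from 3 translations and 3 rotations (radians, XYZ Euler)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def virtual_camera_from_world(T_hmd_from_world, T_cam_from_hmd, correction_params):
    """Compose world->HMD and (corrected) HMD->camera into a world->virtual-camera transform."""
    correction = pose_from_params(*correction_params)      # small 6-DoF adjustment
    return (correction @ T_cam_from_hmd) @ T_hmd_from_world

def registration_error(T_cam_from_world, K, observed_px, world_points):
    """Mean reprojection error (pixels) of known world points vs. observed image positions."""
    pts_h = np.hstack([world_points, np.ones((len(world_points), 1))]).T   # (4, N)
    cam = (T_cam_from_world @ pts_h)[:3]
    proj = K @ cam
    uv = (proj[:2] / proj[2]).T                                            # (N, 2)
    return float(np.mean(np.linalg.norm(uv - observed_px, axis=1)))
```

In practice the six correction parameters would be searched or optimized to minimize such a reprojection error over tracked reference points, which is the spirit of the adjustment Yang describes for the HMD-to-camera matrix.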

Prosecution Timeline

Jul 29, 2024: Application Filed
Mar 20, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602890
NEURAL VECTOR FIELDS FOR 3D SHAPE GENERATION
2y 5m to grant; granted Apr 14, 2026
Patent 12597225
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, PROGRAM, AND READABLE STORAGE MEDIUM
2y 5m to grant; granted Apr 07, 2026
Patent 12586332
METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR MANIPULATING VIRTUAL OBJECT
2y 5m to grant; granted Mar 24, 2026
Patent 12579750
RENDERING VIEWS OF A SCENE IN A GRAPHICS PROCESSING UNIT
2y 5m to grant; granted Mar 17, 2026
Patent 12541934
SYSTEM AND METHOD FOR RAPID SENSOR COMMISSIONING
2y 5m to grant; granted Feb 03, 2026
Based on this examiner's 5 most recent grants in similar technology; review what changed in each case to get past this examiner.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview (+12.1%): 85%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
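On how these projection figures appear to relate: assuming the interview lift is applied as additive percentage points on top of the career allow rate (an assumption about this dashboard, not a stated formula), the with-interview figure follows directly, as in the sketch below.

```python
# Hedged sketch: assumes the dashboard adds the interview lift (in percentage
# points) to the career allow rate; 73% + 12.1 points rounds to 85%.
career_allow_rate = 73.0   # % (examiner's career allow rate)
interview_lift = 12.1      # percentage points, from cases resolved with an interview
print(f"With interview: {career_allow_rate + interview_lift:.0f}%")  # -> 85%
```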
