Prosecution Insights
Last updated: April 19, 2026
Application No. 18/313,121

IMAGE STITCHING WITH EGO-MOTION COMPENSATED CAMERA CALIBRATION FOR SURROUND VIEW VISUALIZATION

Final Rejection — §102, §103
Filed: May 05, 2023
Examiner: BEZUAYEHU, SOLOMON G
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (above average; 464 granted / 618 resolved; +13.1% vs TC avg)
Interview Lift: +30.9% on resolved cases with interview (strong)
Typical Timeline: 3y 4m average prosecution
Currently Pending: 30
Career History: 648 total applications across all art units
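As a quick sanity check, the headline examiner numbers above are mutually consistent. A short sketch, using only the figures quoted in this section:

```python
# Recompute the headline examiner statistics from the raw counts above.
granted = 464
resolved = 618
total_applications = 648

allow_rate = granted / resolved          # career allow rate
pending = total_applications - resolved  # applications still open

print(f"allow rate: {allow_rate:.1%}")          # ~75.1%, reported as 75%
print(f"currently pending: {pending}")          # 30, matching the panel
```

The 30 pending applications fall out directly from the 648 total minus the 618 resolved, so the three career figures agree with each other.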

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)

Tech Center averages are estimates; based on career data from 618 resolved cases.
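The per-statute deltas above are all quoted against the same Tech Center baseline, so subtracting each delta from its allow rate should recover one common figure. A small consistency check, using the numbers from the panel:

```python
# Each statute's allow rate minus its "vs TC avg" delta should give the
# same Tech Center baseline estimate.
rates  = {"101": 16.0, "103": 49.7, "102": 13.4, "112": 11.7}    # % allow rate
deltas = {"101": -24.0, "103": 9.7, "102": -26.6, "112": -28.3}  # % vs TC avg

baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every statute recovers a 40.0% TC baseline
```

All four statutes recover a 40.0% baseline, which suggests the deltas were computed against a single Tech Center estimate rather than per-statute averages.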

Office Action

Grounds: §102, §103
DETAILED ACTION

Response to Arguments

Applicant's arguments filed with respect to claims 1-10 have been fully considered but are moot in view of the new ground(s) of rejection. The rejections are necessitated by claim amendments.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by WANG et al. (Pub. No. US 2015/0332446). WANG teaches generating, based at least on ego-motion (vehicle dynamics) of an ego-object (vehicle) in an environment (world coordinates), a representation of at least one of an estimated rotation (rotation dynamics/matrix) or an estimated translation (translation dynamics/vector) of a body (vehicle body) of the ego-object relative to a calibration state (stationary orientation) of the body during calibration of one or more extrinsic sensor parameters (extrinsic parameters) [Para. 8-11, 29, 30, and 32]; and generating, using a transformation (rotation matrices R and translation vectors t) based at least on the representation of the estimated rotation or the estimated translation, an ego-motion compensated (corrected) projection (top-down view image) of frames of image data representing two or more overlapping views of the environment (areas around the vehicle) [Para. 24, 25, and 27].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677).

Regarding claim 1, Zhou teaches a method comprising: generating, based at least on ego-motion of an ego-object (capturing device) in an environment, a representation of at least one of an estimated rotation or an estimated translation of a body of the ego-object relative to a calibration state (first image) of the body [Abstract; Para. 5: "A rotation metric between the two images (e.g., due to motion of the image capture device) based on sensor output data may be determined. A translation measure for the second image, relative to the first image, may be determined based on the rotation metric and the focal length"; Para. 19: "More particularly, motion data derived from sensor output may be used to identify the rotational change between two images."; Para. 30, Fig. 4 and related description]; and generating, using a transformation based at least on the representation of the estimated rotation or the estimated translation, a projection of image data representing two or more overlapping views of the environment [Para. 3: "There are many challenges associated with taking visually appealing panoramic images," including "blending between the overlapping regions of various images used to construct the overall panoramic image"; Para. 19: "The identified rotational change may be used to directly align/stitch the images"; Para. 5: "It is significant that the two images may be aligned without performing standard image processing or analysis" — meaning that, whether or not the frames overlap, the system stitches two consecutive frames based on estimated rotation/translation. Therefore, it is clear Zhou teaches that when the frames overlap, they will be stitched together according to the rotation].

However, Zhou does not explicitly teach generating a representation during calibration of one or more extrinsic sensor parameters. Walters teaches generating (recorded), based at least on ego-motion (pose measurement) of an ego-object (mobile structure) in an environment, a representation of at least one of an estimated rotation (orientation) or an estimated translation (position) of a body of the ego-object relative to a calibration state (an arbitrary initial position and orientation) of the body during calibration (for calibration) of one or more extrinsic sensor parameters (coordinate frame transformations) [Para. 41, 289, 272, 265, and 109]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou with the feature as taught by Walters, because the modification makes conventional automated directional control systems easier to calibrate and more accurate by reducing the number of sensors needed.

Regarding claim 2, Zhou further teaches generating a surround view visualization based at least on the ego-motion compensated projection [Para. 30-31].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Ermilios et al. (Pub. No. US 2019/0080476).
Regarding claim 3, Zhou in view of Walters does not explicitly teach the claim limitation. However, Ermilios teaches wherein the generating of the ego-motion compensated projection comprises applying the transformation to one or more calibration parameters associated with a camera that captured a corresponding at least one frame of the frames of image data [Para. 29]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Ermilios, because the modification enables generating a virtual plan view with the help of a look-up table and may incorporate an anti-aliasing filter during rendering to improve image quality and thus tracking performance.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Ryu et al. (Patent No. US 7,571,030).

Regarding claim 4, Zhou in view of Walters does not explicitly teach the claim limitation. However, Ryu teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object is based at least on estimated stiffness of the body in one or more rotational directions and detected acceleration of the body in the one or more rotational directions [Col. 4, lines 38-53, Equation 8 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Ryu, because the modification enables the system to reduce the roll rate of a vehicle.

Regarding claim 5, Zhou in view of Walters does not explicitly teach the claim limitation.
However, Ryu teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object comprises estimating rotation with respect to a ground surface using one or more suspension level sensors to measure displacement between the body and the ground surface [Col. 4, lines 19-53, Equation 8 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Ryu, because the modification enables the system to reduce the roll rate of a vehicle.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Eriksson (Patent No. US 8,321,168).

Regarding claim 6, Zhou in view of Walters does not explicitly teach the claim limitation. However, Eriksson teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object comprises estimating rotation with respect to a ground surface based at least on applying low-pass filtering to a signal representing a detected up-vector of the body to estimate the orientation of the ground surface and applying high-pass filtering to the signal to estimate the orientation of the body [Claim 10 and corresponding description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Eriksson, because the modification enables the system to provide real-time visual odometry.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Krishnaswamy et al. (Patent No. US 8,213,706).

Regarding claim 7, Zhou in view of Walters does not explicitly teach the claim limitation.
However, Krishnaswamy teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object comprises applying structure-from-motion to triangulate and track (extracting and matching frame to frame) positions of observed objects on a ground surface [Abstract, brief summary, Fig. 1 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Krishnaswamy, because the modification enables the system to provide real-time visual odometry.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Chen et al. (Pub. No. US 2022/0144260).

Regarding claim 8, Zhou in view of Walters does not explicitly teach the claim limitation. However, Chen teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object comprises using a deep neural network to predict the representation of the estimated rotation based at least on an input representation of a ground surface [Para. 46-47 and 77]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Chen, because the modification enables the system to identify risk objects very quickly and accurately.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Lei (Pub. No. US 2010/0040290).

Regarding claim 9, Zhou in view of Walters does not explicitly teach the claim limitation.
However, Lei teaches wherein the generating of the representation of the estimated rotation of the body of the ego-object comprises selecting a rotation estimation technique from a plurality of supported rotation estimation techniques [Para. 69]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Lei, because the modification enables the system to detect scene changes accurately.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (Pub. No. US 2013/0329072) in view of Walters et al. (Pub. No. US 2020/0150677), further in view of Kristensen et al. (Pub. No. US 2021/0286923).

Regarding claim 10, Zhou in view of Walters does not explicitly teach the claim limitation. However, Kristensen teaches wherein the method is performed by at least one of: a control system for an autonomous or semi-autonomous machine; and a system for performing simulation operations [Para. 72 and 37]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhou in view of Walters with the feature as taught by Kristensen, because the modification enables the system to detect scene changes accurately.

Allowable Subject Matter

Claims 11-20 are allowed. No prior art was found that reads on "estimated rotation/translation of a suspension of the ego-object" and "estimating rotation/translation information corresponding to sprung mass of the ego-object".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOLOMON G BEZUAYEHU, whose telephone number is (571) 270-7452. The examiner can normally be reached Monday-Friday, 10 AM-7 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at 313-446-4912.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-0101 (in USA or Canada) or 571-272-1000.

/SOLOMON G BEZUAYEHU/
Primary Examiner, Art Unit 2666
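The technique the rejections map onto the independent claim is compensating a camera's calibrated extrinsics by the body's estimated rotation and translation before projecting frames into a common top-down view. A minimal sketch of that idea follows. This is an illustrative reading only: the function names, the 4x4 homogeneous-transform convention, and the composition order are assumptions, not taken from the application or the cited references.

```python
import numpy as np

def pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compensate_extrinsics(T_cam_from_body: np.ndarray,
                          R_body: np.ndarray,
                          t_body: np.ndarray) -> np.ndarray:
    """Undo the body's estimated motion (relative to its calibration state)
    before using the camera extrinsics, so overlapping views project into a
    consistent top-down frame despite body roll/pitch/heave."""
    T_body_from_calib = pose(R_body, t_body)   # estimated body pose vs. calibration state
    return T_cam_from_body @ np.linalg.inv(T_body_from_calib)

# With no estimated body motion, the calibrated extrinsics pass through unchanged.
identity_case = compensate_extrinsics(np.eye(4), np.eye(3), np.zeros(3))
```

Under these assumptions, each camera's frames would then be warped with its compensated extrinsics before the overlapping views are blended into the surround visualization.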

Prosecution Timeline

May 05, 2023
Application Filed
Jul 18, 2025
Non-Final Rejection — §102, §103
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Response Filed
Mar 07, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602717: APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR CONTEXTUALIZED EQUIPMENT RECOMMENDATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12602946: DOCUMENT CLASSIFICATION USING UNSUPERVISED TEXT ANALYSIS WITH CONCEPT EXTRACTION (2y 5m to grant; granted Apr 14, 2026)
Patent 12591350: TECHNIQUES FOR POSITIONING SPEAKERS WITHIN A VENUE (2y 5m to grant; granted Mar 31, 2026)
Patent 12586355: ROAD AND INFRASTRUCTURE ANALYSIS TOOL (2y 5m to grant; granted Mar 24, 2026)
Patent 12561852: Cross-Modal Contrastive Learning for Text-to-Image Generation based on Machine Learning Models (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 99% (+30.9%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 618 resolved cases by this examiner. Grant probability derived from career allow rate.
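One way to reconcile the interview figures above: if interviewed cases grant at 99% and the +30.9% lift is read as percentage points over non-interviewed cases, the implied without-interview rate is about 68%, with the 75% career figure a blend of the two. Note that this additive-points reading is an assumption about how the dashboard defines "lift":

```python
with_interview = 99.0  # % grant probability with interview
lift = 30.9            # reported interview lift, read as percentage points

without_interview = with_interview - lift
print(f"implied without-interview rate: {without_interview:.1f}%")  # 68.1%
```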
