Prosecution Insights
Last updated: April 19, 2026
Application No. 18/195,784

PANORAMA GENERATION USING NEURAL NETWORKS

Final Rejection (§103)
Filed: May 10, 2023
Examiner: CHANG, KENT WU
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 13% (At Risk)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 3m
Grant Probability With Interview: 13%

Examiner Intelligence

Career Allow Rate: 13% (5 granted / 38 resolved; -48.8% vs TC avg); grants only 13% of cases
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 2y 3m typical timeline; 7 applications currently pending
Total Applications: 45 across all art units (career history)

Statute-Specific Performance

Statute   Examiner Allow Rate   vs TC Avg
§101      3.1%                  -36.9%
§103      58.5%                 +18.5%
§102      29.6%                 -10.4%
§112      8.2%                  -31.8%

Tech Center averages are estimates; based on career data from 38 resolved cases.
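The per-statute deltas can be cross-checked against the Tech Center baseline. A minimal sketch, assuming (the page does not state this explicitly) that each reported delta is simply the examiner's allow rate minus the TC average, so the baseline can be recovered as rate minus delta:

```python
# Back out the Tech Center (TC) average implied by each statute row above.
# Assumption (not stated on the page): delta = examiner_rate - tc_avg,
# so the baseline can be recovered as tc_avg = examiner_rate - delta.
rates = {  # statute: (examiner allow rate %, delta vs TC avg %)
    "101": (3.1, -36.9),
    "103": (58.5, +18.5),
    "102": (29.6, -10.4),
    "112": (8.2, -31.8),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
```

Under that assumption, every row's implied baseline works out to 40.0%, which suggests the chart compares each statute against a single TC-wide estimate rather than per-statute averages.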

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Duan (Panoramic Image Generation: From 2-D Sketch to Spherical Image, cited previously) in view of Gausebeck (US 2019/0026958).

As to claim 1, Duan teaches a processor comprising: one or more circuits to cause one or more neural networks to iteratively generate different portions of one or more panoramic images to complete the one or more panoramic images based, at least in part, on a segmentation map indicating content to be included in the one or more panoramic images (Duan mentions combining one or more panoramic images and, on page 197, that the proposed method improved the performance of image reconstruction in iterations; see also page 197, col. 2, paras. 1-3; page 199, col. 2, Algorithm 1; page 201, col. 2, paras. 2-3). Duan is silent in that the generated portions of the panoramic image are overlapped. Gausebeck teaches to "generate different, overlapping portions of one or more panoramic images to complete the one or more panoramic images" (para. 0100: The one or more panorama models 514 can employ a neural network model that has been trained on panoramic images with 3D ground truth data associated therewith. For example, in various implementations, the one or more panorama models 514 can be generated based on 2D panoramic image data with associated 3D data (referred to herein as 2D/3D panoramic data) that was captured by a 2D/3D capture device in association with capture of the 2D panoramic image data. The 2D/3D panoramic capture device can incorporate one or more cameras (or one or more camera lenses) that provide a field-of-view up to 360°, as well as one or more depth sensors that provide a field-of-view up to 360°, thereby providing for capture of an entire panoramic image and panoramic depth data associated therewith to be captured simultaneously and merged into a 2D/3D panoramic image. The depth sensors can include one or more 3D capture devices that use at least some hardware to capture depth information.
For example, the depth sensors can include but are not limited to LiDAR sensors/devices, laser rangefinder sensors/devices, time-of-flight sensors/devices, structured light sensors/devices, lightfield-camera sensors/devices, active stereo depth derivation sensors/devices, etc.). In other embodiments, the panoramic 2D/3D training data used to develop the one or more panorama models 514 can include panoramic image data and associated 3D data generated by a capture device assembly that incorporates one or more color cameras and one or more 3D sensors attached to a rotating stage, or otherwise a device configured to rotate about an axis during the capture process (e.g., using synchronized rotation signals). During rotation, multiple images and depth readings are captured which can be merged into a single panoramic 2D/3D image. In some implementations, by rotating the stage, images with mutually overlapping fields-of-view but different viewpoints are obtained, and 3D information can be derived from them using stereo algorithms. The 2D/3D panoramic training data can also be associated with information identifying a capture position and a capture orientation of the 2D/3D panoramic image, which can be generated by the 2D/3D capture device and/or derived in association with the capture processes. Additional details regarding a graphical user interface that facilitates reviewing and aiding the capture process are described in U.S. patent application Ser. No. 15/417,162, filed on Jan. 26, 2017 and entitled "CAPTURING AND ALIGNING PANORAMIC IMAGE AND DEPTH DATA," the entirety of which is incorporated herein by reference.

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Duan with the teaching of Gausebeck so as to allow aligning the different portions of images to generate the panoramic picture.
As to claim 2, Duan as modified by Gausebeck teaches the processor of claim 1, wherein iteratively generating the different portions of the one or more panoramic images comprises generating one or more normal-perspective images and projecting the one or more normal-perspective images to the different portions of the one or more panoramic images (see Duan, page 202, col. 2, para. 2; Gausebeck, para. 0100).

As to claim 3, Duan as modified by Gausebeck teaches the processor of claim 2, wherein the one or more normal-perspective images are generated based, at least in part, on a segmentation map (see Gausebeck, para. 0161: In some implementations, the semantic labeling component 928 can also perform semantic segmentation and further identify and define boundaries of recognized objects in the 2D images. The semantic labels/boundaries associated with features included in a 2D image can be characterized as structured auxiliary data 930 and used to facilitate deriving depth data for the 2D images. In this regard, the semantic label/segmentation information associated with a 2D image can also be used as input to one or more augmented 3D-from-2D models (e.g., one or more augmented models 810) along with the 2D image to generate derived 3D data 116 for the 2D image, used by the model generation component 118 to facilitate the alignment process in association with 3D model generation, and/or stored in memory; and claim 10: The system of claim 1, further comprising: an object segmentation component configured to extract object image data of an object included in a two-dimensional image, and wherein the three-dimensional data derivation component is further configured to employ the one or more 3D-from-2D neural network models to derive object three-dimensional data from the object image data).
As to claim 4, Duan as modified by Gausebeck teaches the processor of claim 2, wherein the one or more normal-perspective images correspond to a plurality of viewing directions of the one or more panoramic images (see Duan, page 202, col. 2, paras. 2-3).

As to claim 5, Duan as modified by Gausebeck teaches the processor of claim 4, wherein two or more of the plurality of viewing directions correspond to two or more portions of the one or more panoramic images which at least partially overlap (see Duan, page 202, col. 2, paras. 2-4).

As to claim 6, Duan as modified by Gausebeck teaches the processor of claim 4, wherein the one or more normal-perspective images are projected to the portions of the one or more panoramic images based, at least in part, on the plurality of viewing directions (see Duan, page 202, col. 2, paras. 2-3).

As to claim 7, Duan as modified by Gausebeck teaches the processor of claim 1, wherein the one or more panoramic images comprise[[s]] a spherical panoramic image (see Duan, page 202, col. 2, para. 2).

Claims 8-14 are method claims performed by a processor similar to claims 1-7; thus, see the rejection of claims 1-7 and note that Duan in view of Gausebeck teaches the method of using the processor for performing the functions as recited in these claims. Claims 15-20 recite a system comprising a processor similar to claims 1-7; thus, also see the rejection of claims 1-7.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant mainly argues that the prior art does not teach "iteratively generate different, overlapping portions of one or more panoramic images to complete the one or more panoramic images".
Note that Duan already mentions combining one or more panoramic images and mentions, on page 197, that the proposed method improved the performance of image reconstruction in iterations. In addition, Gausebeck teaches the method of constructing a complete panoramic image by combining (stitching) different, overlapping portions (captured 2D images). In paragraphs [0100] and [0110], Gausebeck also mentions "stitching two or more images together to generate a panoramic image" and "The stitching component 508 can further employ this initial derived depth information … to facilitate aligning the respective 2D images … combining the images to generate a single panoramic image", and, in paragraph [0141], forming a panoramic image from two or more images with partially overlapping fields-of-view and stitching them into a panoramic image: "related 2D images can include two or more images respectively captured by two or more cameras with partially overlapping fields-of-view or different perspectives of an environment (e.g., captured by different cameras at or near the same time)…". Furthermore, Gausebeck also teaches, in paragraph [0226], cropping a panoramic image to select a desired portion, including selecting a specific segmented object (and even mentions user input to identify the desired portion).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENT WU CHANG, whose telephone number is (571) 272-7667. The examiner can normally be reached Monday to Friday between 9:00am and 4:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

May 10, 2023: Application Filed
Apr 04, 2025: Non-Final Rejection — §103
Aug 11, 2025: Response Filed
Feb 01, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586283
REAL-TIME HAND-HELD MARKERLESS HUMAN MOTION RECORDING AND AVATAR RENDERING IN A MOBILE PLATFORM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586164
Vehicle Control Apparatus And Method Thereof
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579749
METHOD FOR PROCESSING DATA REPRESENTING A THREE-DIMENSIONAL VOLUMETRIC SCENE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567202
COMPUTED SYSTEMS LAYOUT LEVERAGING AUGMENTED REALITY IN THE DATA CENTER
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12505603
Dynamic Weather in Virtual Environments
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 13%
With Interview: 13% (+0.0%)
Median Time to Grant: 2y 3m
PTA Risk: Moderate

Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
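The stated derivation of the grant probability from the career allow rate can be reproduced in a few lines. A minimal sketch, assuming the headline figure is simply the rounded career allow rate (5 granted of 38 resolved, from the Examiner Intelligence section) plus the reported interview lift:

```python
# Reproduce the headline projections from the raw career counts shown in
# the Examiner Intelligence section (5 granted out of 38 resolved).
# Assumption: grant probability is just the rounded career allow rate,
# and the "with interview" figure adds the reported +0.0% interview lift.
granted, resolved = 5, 38
interview_lift_pct = 0.0  # reported lift for this examiner

allow_rate_pct = granted / resolved * 100                    # 13.157...
grant_probability = round(allow_rate_pct)                    # 13
with_interview = round(allow_rate_pct + interview_lift_pct)  # 13

print(f"Grant probability: {grant_probability}%")
print(f"With interview: {with_interview}%")
```

With a 0.0% lift, the with-interview figure necessarily matches the base grant probability, which is why both cards show 13%.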
