Prosecution Insights
Last updated: April 19, 2026
Application No. 18/555,059

INTERMEDIATE VIEW SYNTHESIS BETWEEN WIDE-BASELINE PANORAMAS

Status: Final Rejection (§103)
Filed: Oct 12, 2023
Examiner: ZALALEE, SULTANA MARCIA
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 86%

Examiner Intelligence

Grants 71% of resolved cases (above average).

Career Allow Rate: 71% (346 granted / 488 resolved; +8.9% vs TC avg)
Interview Lift: +15.1% (resolved cases with interview)
Avg Prosecution: 2y 7m (30 currently pending)
Total Applications: 518 (across all art units)

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 488 resolved cases.
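Reading each "vs TC avg" figure as a percentage-point delta, all four statute rates back out to a single Tech Center baseline. A quick sketch (numbers taken from the panel above; the additive reading is an assumption):

```python
# Allow rate after each rejection type (%) and the reported delta vs the
# Tech Center average, in percentage points, from the panel above.
stats = {
    "101": (7.8, -32.2),
    "103": (56.3, 16.3),
    "102": (11.4, -28.6),
    "112": (13.8, -26.2),
}

for statute, (rate, delta) in stats.items():
    # rate = tc_avg + delta  =>  tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")
```

Each statute implies the same 40.0% baseline, consistent with a single Tech Center average estimate underlying all four deltas.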

Office Action (§103)

DETAILED ACTION

Response to Arguments

Applicant's arguments filed 12/01/2025 regarding the 35 U.S.C. 103 rejections with respect to the amended limitations of claims 1-20 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendment.

Examiner-Initiated Interview Summary

In an attempt to expedite prosecution, the Examiner initiated an interview on 1/21/2026 to indicate allowable subject matter in dependent claim 9, and further proposed amending the independent claims to include that subject matter. Applicant had not responded as of 2/5/2026.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “depth predictor”, “mesh renderer”, and “fusion network” in claims 10-18.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 10-11, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lai et al (Lai, Po Kong, et al. "Real-time panoramic depth maps from omni-directional stereo images for 6 dof videos in virtual reality." 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2019.), and further in view of Varshney et al (US 20200219323 A1) and Cheng et al (Cheng, Hsien-Tzu, et al. "Cube padding for weakly-supervised saliency prediction in 360 videos." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.).

RE claim 1, Lai teaches A method comprising: predicting a depth associated with a first panoramic image and a second panoramic image, the first panoramic image and the second panoramic image being captured with a time period between the capture of the first panoramic image and the second panoramic image (Abstract, Fig 2, wherein the ODS pair can be a first and second panoramic image captured with a time period between the capture of the images, as an obvious matter of design choice; e.g., see page 406 col 1); generating a first mesh representing a structure that is three-dimensional based on the first panoramic image and a depth corresponding to the first panoramic image; generating a second mesh representing a structure that is three-dimensional based on the second panoramic image and a depth corresponding to the second panoramic image (Figs 3-5, page 409 col 1 - page 410 col 1); and generating a third panoramic image (Figs 3-5, page 409 col 1 - page 410 col 1, page 407 col 2).

Lai is silent RE: generating the third panoramic image based on fusing the first mesh with the second mesh using a model configured to join left and right portion of the third panoramic image.
However, Varshney teaches generating the third panoramic image based on fusing the first mesh with the second mesh and joining left and right portions of the third panoramic image (Figs 6-9, [0067], [0072]-[0074], generating a seamless representation from 3D meshes). In addition, Cheng teaches joining left and right portions of a panoramic image using a CNN model (Figs 1-3, abstract, page 1423 col 1) to generate smooth boundaries within the images. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Lai a system and method of generating the third panoramic image based on fusing the first mesh with the second mesh using a model configured to join left and right portion of the third panoramic image, combining the above teachings of Varshney and Cheng, in order to generate a distortion-free third image from the desired viewpoint and thereby increase system effectiveness and user experience.

RE claim 2, Lai teaches wherein the first panoramic image and the second panoramic image are 360-degree, wide-baseline equirectangular projection (ERP) panoramas (abstract, Fig 2, page 406 col 2).

RE claim 4, Lai teaches wherein the predicting of the depth estimates a low-resolution depth based on a first features map associated with the first panoramic image and the second panoramic image, and the predicting of the depth estimates a high-resolution depth based on the first features map and a second features map associated with the first panoramic image (Fig 1, page 406 col 2).

Claims 10-11 and 13 recite limitations similar in scope to limitations of claims 1-2 and 4 and are therefore rejected under the same rationale. In addition, Lai teaches A system comprising: a depth predictor, a first and a second differential mesh renderer, and a fusion network (Abstract, Figs 1-2).

Claim 19 recites limitations similar in scope to limitations of claim 2 and is therefore rejected under the same rationale.
In addition, Lai teaches A non-transitory computer-readable storage medium comprising instructions stored thereon (Abstract, Figs 1-2).

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Lai as modified by Varshney and Cheng, and further in view of Won et al (Won, Changhee, Jongbin Ryu, and Jongwoo Lim. "Omnimvs: End-to-end learning for omnidirectional stereo matching." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.).

RE claim 3, Lai as modified by Varshney and Cheng is silent RE wherein the predicting of the depth estimates a depth of each of the first panoramic image and the second panoramic image using a spherical sweep cost volume based on the first panoramic image and the second panoramic image and at least one target position. However, Won teaches this in Figs 1-2, page 8989 cols 1-2. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Lai as modified by Varshney and Cheng a system and method wherein the predicting of the depth estimates a depth of each of the first panoramic image and the second panoramic image using a spherical sweep cost volume based on the first panoramic image and the second panoramic image and at least one target position, as suggested by Won, in order to further tune the depth estimation network and thereby increase system effectiveness and user experience.

Claim 12 recites limitations similar in scope to limitations of claim 3 and is therefore rejected under the same rationale.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lai as modified by Varshney and Cheng, and further in view of Coombe et al (US 20120299920 A1).
RE claim 5, Lai as modified by Varshney and Cheng is silent RE wherein the generating of the first mesh is based on the first panoramic image and discontinuities determined based on the depth corresponding to the first panoramic image, and the generating of the second mesh is based on the second panoramic image and discontinuities determined based on the depth corresponding to the second panoramic image. However, Coombe teaches this in Fig 5, [0087]-[0088]. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Lai as modified by Varshney and Cheng a system and method wherein the generating of the first mesh is based on the first panoramic image and discontinuities determined based on the depth corresponding to the first panoramic image, and the generating of the second mesh is based on the second panoramic image and discontinuities determined based on the depth corresponding to the second panoramic image, applying Coombe to generate the first and second meshes free from artifacts/missing data and thereby increase system effectiveness and user experience.

Claim 14 recites limitations similar in scope to limitations of claim 5 and is therefore rejected under the same rationale.

Claims 6-7, 15-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lai as modified by Varshney and Cheng, and further in view of Chen et al (US 20180234669 A1).

RE claim 6, Lai as modified by Varshney and Cheng is silent RE wherein the generating of the first mesh includes rendering the first mesh into a first 360-degree panorama based on a first target position, the generating of the second mesh includes rendering the second mesh into a first 360-degree panorama based on a second target position, and the first target position and the second target position are based on the time period between the capture of the first panoramic image and the second panoramic image.
However, Chen teaches (Figs 5-6, abstract, [0049]-[0050], [0058]-[0060], [0073]-[0074], etc.) wherein viewpoints are determined based on the interpolated/fixed camera positions of the images. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Lai as modified by Varshney and Cheng a system and method wherein the generating of the first mesh includes rendering the first mesh into a first 360-degree panorama based on a first target position, the generating of the second mesh includes rendering the second mesh into a first 360-degree panorama based on a second target position, and the first target position and the second target position are based on the time period between the capture of the first panoramic image and the second panoramic image, applying Chen as set forth above, in order to generate the desired view image with smooth transitions and thereby increase system effectiveness and user experience.

RE claim 7, Lai as modified by Varshney and Cheng teaches wherein the generating of the third panoramic image includes fusing the first mesh together with the second mesh, and inpainting holes in the generated third panoramic image (Varshney and Cheng: Fig 7D, col 7 lines 30-35, col 14 lines 22-25). Lai as modified by Varshney and Cheng is silent RE resolving ambiguities between the first mesh and the second mesh. However, Chen teaches this in Fig 4, [0077]-[0078], [0081], etc., to provide a dense mesh free of artifacts, noise or outliers, resolving ambiguous overlapping regions.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Lai as modified by Varshney and Cheng a system and method of resolving ambiguities between the first mesh and the second mesh, applying Chen as set forth above, in order to provide a dense mesh free of artifacts, noise or outliers, resolving ambiguous overlapping regions, and thereby increase system effectiveness and user experience.

Claims 15-16 recite limitations similar in scope to limitations of claims 6-7 and are therefore rejected under the same rationale. Claim 20 recites limitations similar in scope to limitations of claim 6 and is therefore rejected under the same rationale.

Allowable Subject Matter

Claims 8-9 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is an examiner's statement of reasons for allowance: no single prior art reference or combination of references was found to teach the following subject matter in combination with the limitations of the base claims.

Claims 8 and 17: wherein the generating of the third panoramic image includes generating a binary visibility mask to identify holes in the first mesh based on negative regions in the depth corresponding to the first panoramic image and the second mesh based on negative regions in the depth corresponding to the second panoramic image.

Claims 9 and 18: wherein the trained neural network uses circular padding at each convolutional layer to join left and right edges of the third panoramic image.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee.
Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see attached 892). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE whose telephone number is (571)270-1411. The examiner can normally be reached Monday-Friday 8:00am-4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sultana M Zalalee/
Primary Examiner, Art Unit 2614
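The two limitations indicated as allowable describe concrete, self-contained operations: a binary visibility mask that flags mesh holes via negative depth values (claims 8/17), and circular padding that joins the left and right edges of an equirectangular panorama at each convolutional layer (claims 9/18). The sketch below is a minimal Python illustration of one plausible reading of that claim language, not the applicant's actual implementation; the function names and the negative-depth sentinel convention are assumptions.

```python
def visibility_mask(depth):
    """Binary mask over a rendered depth map: 1 where the mesh produced a
    valid depth, 0 where a negative value marks a hole to be filled from
    the other mesh (one reading of claims 8/17)."""
    return [[0 if d < 0 else 1 for d in row] for row in depth]


def circular_pad_width(image, pad):
    """Wrap each row of an equirectangular (ERP) image horizontally before
    a convolution, so the left and right edges of the panorama are treated
    as adjacent (one reading of claims 9/18). `image` is a list of rows."""
    return [row[-pad:] + row + row[:pad] for row in image]


depth = [[0.5, -1.0],
         [2.0,  3.1]]
print(visibility_mask(depth))          # [[1, 0], [1, 1]]

row_img = [[1, 2, 3, 4]]
print(circular_pad_width(row_img, 1))  # [[4, 1, 2, 3, 4, 1]]
```

Circular padding matters for ERP panoramas because the first and last columns are physically adjacent on the sphere; wrapping before each convolution keeps that seam continuous, which is what lets a network join the left and right edges of the synthesized panorama.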

Prosecution Timeline

Oct 12, 2023
Application Filed
Aug 29, 2025
Non-Final Rejection — §103
Dec 01, 2025
Response Filed
Jan 21, 2026
Examiner Interview (Telephonic)
Feb 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602876: ANNOTATION TOOLS FOR RECONSTRUCTING THREE-DIMENSIONAL ROOF GEOMETRY (2y 5m to grant; granted Apr 14, 2026)
Patent 12592035: Fused Bounding Volume Hierarchy for Multiple Levels of Detail (2y 5m to grant; granted Mar 31, 2026)
Patent 12586146: PROGRESSIVE MATERIAL CACHING (2y 5m to grant; granted Mar 24, 2026)
Patent 12573150: POLYGON CORRECTION METHOD AND APPARATUS, POLYGON GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (2y 5m to grant; granted Mar 10, 2026)
Patent 12561908: TOPOLOGICALLY CONSISTENT MULTI-VIEW FACE INFERENCE USING VOLUMETRIC SAMPLING (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 86% (+15.1%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
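The projection figures follow from simple arithmetic on the examiner's career counts, assuming the interview lift is additive in percentage points (an assumption based on how the panel presents it):

```python
granted, resolved = 346, 488
allow_rate = 100 * granted / resolved  # career allow rate, in percent
interview_lift = 15.1                  # percentage points, from the examiner stats

print(f"Career allow rate: {allow_rate:.0f}%")                   # 71%
print(f"With interview:    {allow_rate + interview_lift:.0f}%")  # 86%
```

346/488 rounds to 71%, and adding the +15.1-point interview lift yields the 86% with-interview figure.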
