Prosecution Insights
Last updated: April 19, 2026
Application No. 18/503,797

3D OBJECT RECONSTRUCTION

Non-Final OA (§103)

Filed: Nov 07, 2023
Examiner: LHYMN, SARAH
Art Unit: 2613
Tech Center: 2600 (Communications)
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 65% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 65% (357 granted / 546 resolved; +3.4% vs TC avg), above average
Interview Lift: +15.2% on resolved cases with interview (strong)
Typical Timeline: 2y 4m average prosecution; 30 applications currently pending
Career History: 576 total applications across all art units

Statute-Specific Performance

§101:  5.4% (-34.6% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102:  5.9% (-34.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 546 resolved cases.
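Each row pairs the examiner's rate with a delta against the Tech Center average, so the black-line baseline can be recovered by subtraction. A minimal sketch, assuming the delta is a plain difference (examiner rate minus TC average):

```python
# Recover the Tech Center baseline (the "black line") for each statute,
# assuming: delta = examiner_rate - tc_average, so tc_average = rate - delta.
rates = {        # statute: (examiner rate %, delta vs TC avg, in points)
    "§101": (5.4, -34.6),
    "§103": (63.2, +23.2),
    "§102": (5.9, -34.1),
    "§112": (15.3, -24.7),
}
for statute, (rate, delta) in rates.items():
    tc_average = round(rate - delta, 1)
    print(statute, tc_average)  # each statute implies a 40.0% baseline here
```

Under this reading, all four deltas point back to the same 40.0% figure, which suggests the chart draws a single Tech Center reference line rather than per-statute averages.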

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment / Arguments

Claim Objections. Applicant's amendment overcomes the objections.

Prior Art Rejections. Applicant's arguments with respect to the amended claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8-10 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (U.S. Patent App. Pub. No. 2024/0119671) in view of Geng, J., Weng, Y., Wang, L., & Zhou, K. (2021). Single-view facial reflectance inference with a differentiable renderer. Science China Information Sciences, 64(11), 210101, pp. 1-17 ("Geng").

Regarding claim 1: Liu teaches: a processor, comprising: one or more circuits (claim 19, a programmable processor capable of executing instructions has circuits) to use one or more neural networks (para. 61, an end-to-end neural network) to… Regarding the remaining claim features, consider the following.
In analogous art, Geng teaches: one or more neural networks to infer depth information for one or more objects based, at least in part, on one or more two-dimensional ("2D") images including the one or more objects (Fig. 2: estimate/infer a 3D mesh from an input 2D image; see also related description in Sections 3, 4); infer albedo information for the one or more objects based, at least in part, on the one or more 2D images (Fig. 2 and Section 5, infer albedo, specular and normal information, i.e. "reflectance components"); and infer specular information for the one or more objects based, at least in part, on the one or more 2D images (Fig. 2 and Section 5, infer albedo, specular and normal information, i.e. "reflectance components"); and use a differentiable renderer (Fig. 2, the renderer is a differentiable renderer; see also Intro, page 2, which states in part: "Introducing a differentiable renderer in an iterative optimization grants us two immediate benefits. First, the discrepancy between the original image and the reconstruction is directly minimized both numerically and perceptually. Starting from a good initialization output by the neural network, the optimization also converges very fast. Second, the iterative optimization makes our neural network less prone to the difference between the real input and training data, significantly reducing the requirements for training data.") to generate one or more three-dimensional ("3D") objects based, at least in part, on a combination of the inferred depth information, the inferred albedo information, and the inferred specular information (e.g. Fig. 2 or Section 8, a 3D face is rendered).
Accordingly, it would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to optimize the renderer of Liu to include the teachings of Geng, when both references render 3D outputs (Liu, e.g. para. 61, which teaches creating a face avatar that is production-ready; and para. 64, the production-ready avatar can be 3D). Geng also teaches several benefits to using a differentiable renderer, as reproduced in the above mapping. This is additional motivation.

The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 2: Geng teaches: the processor of claim 1, wherein the differentiable renderer is to generate the one or more 3D objects based, at least in part, on inferenced color information (see the mapping to claim 1; Geng teaches albedo maps, which teach inferred color information; see also Fig. 2). Modifying the applied references, in view of same, to have included inferenced color information from its albedo map would have been obvious, given the known features/properties of an albedo map as defining the base colors of objects. The motivation would be to be able to access information to provide true renderings.

Regarding claim 3: Liu teaches: the processor of claim 1, wherein the differentiable renderer is to generate the one or more 3D objects based, at least in part, on a depth map (para. 17, depth maps are known). Modifying the applied references, in view of same, to have included a depth map to contain the inferred depth information, as mapped in claim 1, as a known data structure for depth information, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 8: see also claim 1. Liu teaches: a system (claim 19, a system). The one or more processors of claim 8 correspond to the processor of claim 1; the same rationale for rejection applies.

Regarding claim 9: see claim 2. The claims are similar; the same rationale for rejection applies.

Regarding claim 10: see claim 3. The claims are similar; the same rationale for rejection applies.

Regarding claim 15: see claim 1. The method of claim 15 is performed by the processor of claim 1; the same rationale for rejection applies.

Regarding claim 16: see claim 2. The claims are similar; the same rationale for rejection applies.

Regarding claim 17: see claim 3. The claims are similar; the same rationale for rejection applies.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Geng, and further in view of Iqbal (U.S. Patent App. Pub. No. 2024/0005637) and Fu C, Yuan H, Xu H, Zhang H, Shen L. TMSO-Net: Texture adaptive multi-scale observation for light field image depth estimation. Journal of Visual Communication and Image Representation. 2023 Feb 1;90:103731 (available online 14 December 2022) ("Fu").

Regarding claim 4: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the one or more neural networks are to generate the one or more 3D objects based, at least in part, on a neural network (Liu, claim 1, end-to-end neural network) to infer depth information using a feature map, specular map and an albedo map, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).

Iqbal teaches inferring depth information using a feature map (claim 1, para. 7), which can be done using neural networks (paras. 33, 36, 37). Likewise, Fu teaches that a core of deep learning-based depth estimation is CNN-based feature extraction and depth map reconstruction (Section 2.2). To improve upon this, Fu teaches inferring depth information using texture information (see Section 3). Regarding texture information, Liu teaches obtaining specular and albedo maps (see e.g. claim 10) as texture information. (Claim interpretation: the examiner's interpretation of albedo and specular maps as texture information or maps is a broad, reasonable interpretation consistent with the technology and with Applicant's specification as filed; see specification, paragraph 117.)

Modifying the applied references so as to include the teachings of Fu and Iqbal in the system of Liu, such that depth information is inferred using a feature map (per Iqbal; Liu also teaches feature maps, see claim 1 of Liu) and the specular and albedo maps of Liu, as beneficial texture information per Fu, is taught, suggested and motivated by the prior art, and would have been obvious and predictable to one of ordinary skill. The prior art included each element recited in claim 4, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 11: see claim 4. The claims are similar; the same rationale for rejection applies.

Regarding claim 18: see claim 4. The claims are similar; the same rationale for rejection applies.

Claims 5, 6, 12, 13, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Geng, and further in view of Iqbal.

Regarding claim 5: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the differentiable renderer is to generate the one or more 3D objects based, at least in part, on an inferred depth map (Iqbal, Abstract, generate a depth map from a feature map; para. 33, the feature map can be from a neural network; and para. 36, a depth map also from a machine learning program), albedo map, and specular map ((Liu, claim 10) or (Geng, Fig. 2)), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Modifying Liu so as to include the neural network teachings of Iqbal, to have inferred a depth map, to arrive at the 3D models of Liu, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill, further motivated to obtain depth information for said final 3D model. The prior art included each element recited in claim 5, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 6: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the differentiable renderer is to generate the one or more 3D objects based, at least in part, on one or more jointly trained neural networks to infer a depth map (Iqbal, Abstract, generate a depth map from a feature map; para. 33, the feature map can be from a neural network; and para. 36, a depth map also from a machine learning program) (see also para. 37, jointly trained machine learning programs, such as a deep neural network, are known), an albedo map, and a specular map (Liu, claim 10 and the mapping to claim 1) (Liu, para. 141, all models can be jointly trained) (alternatively, see Geng, Fig. 2), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). The prior art included each element recited in claim 6, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 12: see claim 5. The claims are similar; the same rationale for rejection applies.

Regarding claim 13: see claim 6. The claims are similar; the same rationale for rejection applies.

Regarding claim 19: see claim 5. The claims are similar; the same rationale for rejection applies.

Regarding claim 20: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 15, further comprising jointly training a first neural network to infer a depth map, a second neural network to infer an albedo map, and a third neural network to infer a specular map, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Iqbal and Liu both teach joint training (Iqbal, para. 37) (Liu, para. 141), and both teach neural networks. Id.; see also the mapping to claim 6.
The prior art also teaches inferring the three maps claimed (depth, albedo, specular), as mapped in claim 6. Modifying the applied references, in view of same, such that the maps are obtained via jointly trained neural networks, all of which are taught by the prior art, would have been obvious and predictable to one of ordinary skill, further motivated to take advantage of known programming with respect to machine learning to achieve desired results. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Geng, and further in view of Satat (U.S. Patent App. Pub. No. 2023/0247015 A1).

Regarding claim 7: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the processor of claim 1, wherein the differentiable renderer is to generate the one or more 3D objects based, at least in part, on a feature map (Liu, claim 1, para. 21, feature map) generated from one or more images captured from a monoscopic camera device, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Regarding images from a monoscopic camera device, Satat describes the pros and cons of images from different capture devices (para. 26). In terms of the benefits of monoscopic images from such capture devices, Satat expressly states, in para. 25, that "monoscopic images may provide more complete depth information that represents such transparent, partially-transparent, refractive, specular materials". Liu further teaches that its system accepts one or more input image(s) (para. 21) and also produces a specular and albedo map (see claim 10). Modifying Liu such that at least one of the input images is from a monoscopic camera device, when Satat expressly teaches benefits of monoscopic images with regard to specular materials and lighting effects that directly relate to the specular and albedo maps of Liu, is taught, suggested and motivated by the prior art, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 14: see claim 7. The claims are similar; the same rationale for rejection applies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn, whose telephone number is (571) 270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sarah Lhymn/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Nov 07, 2023: Application Filed
Jun 29, 2025: Non-Final Rejection (§103)
Oct 02, 2025: Response Filed
Oct 13, 2025: Final Rejection (§103)
Jan 16, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602882: AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602764: METHODS OF ARTIFICIAL INTELLIGENCE-ASSISTED INFRASTRUCTURE ASSESSMENT USING MIXED REALITY SYSTEMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602746: SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585888: AUTOMATICALLY GENERATING DESCRIPTIONS OF AUGMENTED REALITY EFFECTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586163: INTERACTIVELY REFINING A DIGITAL IMAGE DEPTH MAP FOR NON DESTRUCTIVE SYNTHETIC LENS BLUR (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 81% (+15.2%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
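The headline projections follow directly from the examiner's career counts. A minimal sketch of the arithmetic, where treating the interview lift as a simple additive adjustment and rounding to whole percentages are assumptions:

```python
# Reproduce the dashboard's headline figures from the raw career counts:
# 357 grants out of 546 resolved cases, plus a +15.2-point interview lift.
granted, resolved = 357, 546
interview_lift = 15.2  # percentage points, on resolved cases with interview

allow_rate = 100 * granted / resolved          # career allow rate
with_interview = allow_rate + interview_lift   # assumed additive adjustment

print(round(allow_rate))      # 65  -> "Grant Probability: 65%"
print(round(with_interview))  # 81  -> "With Interview: 81%"
```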
