Prosecution Insights
Last updated: April 19, 2026
Application No. 16/913,157

IMAGE GENERATION USING ONE OR MORE NEURAL NETWORKS

Final Rejection — §103
Filed: Jun 26, 2020
Examiner: BITAR, NANCY
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 6 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 7-8
To Grant: 2y 11m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 83% (786 granted / 946 resolved) — above average, +21.1% vs TC avg
Interview Lift: +8.2% (moderate lift among resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 978 total applications across all art units
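The headline figures above can be reproduced from the raw counts. A minimal sketch — the additive combination of base rate and interview lift is an assumption about how the dashboard derives its "with interview" figure, not a documented formula:

```python
# Reproduce the examiner dashboard figures from the raw counts on the page.
granted = 786    # career grants
resolved = 946   # career resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~83.1%

# The page reports a +8.2 percentage-point lift for cases with an examiner
# interview; adding it to the base rate matches the quoted 91% figure
# (assumed additive combination).
interview_lift = 0.082
with_interview = allow_rate + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")  # ~91%
```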

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 946 resolved cases.
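Because each delta is stated relative to the Tech Center average, the implied baseline can be recovered by subtraction. A quick sketch (the dashboard does not publish the baselines directly; this only back-computes them from the figures above):

```python
# Recover the implied Tech Center baseline for each statute:
# baseline = examiner rate - stated delta (examiner minus TC average).
stats = {
    "§101": (13.3, -26.7),
    "§103": (62.1, +22.1),
    "§102": (6.4, -33.6),
    "§112": (8.9, -31.1),
}
baselines = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same ~40.0% TC baseline
```

All four deltas back out to the same 40.0% baseline, which suggests the chart compares each statute against a single Tech-Center-wide average rather than per-statute averages.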

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments in the amendment filed 7/25/2025, with respect to the rejections of claims 1, 7, 13, 19, and 25 under 35 U.S.C. 102(a), have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Tan et al. ("ArtGAN: Artwork Synthesis with Conditional Categorical GANs").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 13, 19, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al. (US 11,686,721 B2) in view of Tan et al. ("ArtGAN: Artwork Synthesis with Conditional Categorical GANs").
a. Regarding claim 1, Takahashi discloses a processor (a processor at Fig. 9-101, col. 15, line 45) comprising: circuitry ("Processor 101 is implemented, for example, by at least one integrated circuit" at col. 15, lines 50-55) to use one or more neural networks (fully neural network at Fig. 6, col. 12, lines 39-55) to generate object classifications indicating one or more object features for the one or more objects depicted in the reference images (Takahashi teaches obtaining an input image that has an object within it at Fig. 13-312, col. 18, lines 62-65. The input image corresponds to the first image in the claim. Takahashi also teaches a model trained from training data, which corresponds to the second images, applied to the input image for detecting an object at Fig. 13-314, col. 18, line 66 – col. 19, line 2. The detected object is labeled for removal and removed at Fig. 13-316, Fig. 13-S332, col. 19, lines 2-39).

While Takahashi meets the limitations above, Takahashi fails to teach "generate object classifications indicating one or more object features for the one or more objects depicted in the reference images." However, Tan teaches generating object classifications indicating one or more object features for the one or more objects depicted in the reference images (Section 2.3, Paragraph 1 and Figure 2 show that the discriminator network can consume the output image x and run it through the classifier clsNet). Tan, in Paragraph 2, shows computing a discriminator loss LD through Equation 2 and mentions updating parameters in the discriminator, D, based on the discriminator loss. Tan further teaches tuning the GAN by using the latent vector as a pivot for a latent space of the GAN (Section 2.3, Paragraph 3 and Figure 2 show the input image's latent feature z and the latent vector, output from the zNet, being passed into the decoder phase of the GAN to train the GAN. This can be considered using the latent vector as a pivot).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to depict a classification from the one or more first images in order to learn faster and achieve better generated image quality. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

b. Regarding claim 7, claim 7 is analogous and corresponds to claim 1. See the rejection of claim 1 for further explanation.

c. Regarding claim 13, claim 13 is analogous and corresponds to claim 1. See the rejection of claim 1 for further explanation.

d. Regarding claim 19, claim 19 is analogous and corresponds to claim 1. See the rejection of claim 1 for further explanation.

e. Regarding claim 25, claim 25 is analogous and corresponds to claim 1. See the rejection of claim 1 for further explanation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 8-10, 14-16, 20-22, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al. (US 11,686,721 B2) in view of Tan et al. ("ArtGAN: Artwork Synthesis with Conditional Categorical GANs"), and further in view of Bao et al. ("CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training").

a.
Regarding claim 2, Takahashi discloses the one or more neural networks (fully neural network at Fig. 6, col. 12, lines 39-55). However, Takahashi does not explicitly disclose a plurality of variational autoencoders (VAEs) trained to encode image features of different classes of the one or more objects into a latent space.

Bao discloses wherein the one or more neural networks include a plurality of variational autoencoders (VAEs) trained to encode image features of different classes of objects into a latent space (Bao discloses that "[t]he function of networks E and G is the same as that in conditional variational auto-encoder (CVAE) [34]. The encoder network E maps the data sample x to a latent representation z through a learned distribution P(z|x, c), where c is the category of the data" at chapter 3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the CVAE of Bao to Takahashi's neural network. The suggestion/motivation would have been to "[estimate] a good representation of the input image, and the generated image appears to be more realistic" (Bao; chapter 1).

b. Regarding claim 3, the combination applied in claim 2 discloses wherein the one or more neural networks further include a gating network to select one of the VAEs to encode image features of the one or more objects into the latent space (Bao discloses that "[t]he function of networks E and G is the same as that in conditional variational auto-encoder (CVAE) [34]. The encoder network E maps the data sample x to a latent representation z through a learned distribution P(z|x, c), where c is the category of the data" at chapter 3).

c.
Regarding claim 4, the combination applied in claim 2 discloses wherein the one or more neural networks further include a generative adversarial network (GAN) for generating an output image based on image content of the one or more first images and using the latent space as a constraint to cause the output image to not include image content corresponding to the one or more objects, wherein the GAN is to perform inpainting for a region of the one or more first images previously corresponding to the one or more objects (Bao discloses that "[t]he generative network G generates image x' by sampling from a learned distribution P(x|z, c). The function of network G and D is the same as that in the generative adversarial network (GAN). The network G tries to learn the real data distribution by the gradients given by the discriminative network D which learns to distinguish between 'real' and 'fake' samples. The function of network C is to measure the posterior P(c|x)" at chapter 3).

d. Regarding claims 8-10, claims 8-10 are analogous and correspond to claims 2-4, respectively. See the rejection of claims 2-4 for further explanation.

e. Regarding claims 14-16, claims 14-16 are analogous and correspond to claims 2-4, respectively. See the rejection of claims 2-4 for further explanation.

f. Regarding claims 20-22, claims 20-22 are analogous and correspond to claims 2-4, respectively. See the rejection of claims 2-4 for further explanation.

g. Regarding claims 26-28, claims 26-28 are analogous and correspond to claims 2-4, respectively. See the rejection of claims 2-4 for further explanation.

Claims 5-6, 11-12, 17-18, 23-24, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al. (US 11,686,721 B2) in view of Tan et al. ("ArtGAN: Artwork Synthesis with Conditional Categorical GANs"), and further in view of Huang et al. (US 2021/0048881 A1).

a. Regarding claim 5, Takahashi discloses the one or more neural networks (fully neural network at Fig. 6, col. 12, lines 39-55). However, Takahashi does not explicitly disclose wherein the one or more object features are depicted in the one or more objects from the one or more first images and are absent in the modified one or more first images.

Huang discloses further detecting one or more anomalies in the one or more first images after the one or more first objects are removed and causing the one or more first images to be regenerated to attempt to remove the one or more anomalies (Huang discloses that "the processor removes a specific object in the training screens to generate a plurality of preprocessed training images. Here, the manner in which the processor 130 removes the specific object from the training screens is the same as the manner in which the processor 130 removes the specific object from the first person view screen in step S202. In other words, the processor 130 can also respectively cut the training screens into a plurality of sub training screens and take at least one sub training screen of each training screen to generate a plurality of preprocessed training images" at Fig. 7-S702 and ¶ 0039. Here it is inherent and readily apparent that S702 would have to erase the one or more first objects from the one or more first images).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the removal process of Huang to Takahashi's neural network. The suggestion/motivation would have been to ensure that "the recognition accuracy of the neural network model can be significantly improved" (Huang; ¶ 0027) and to "improve a processing efficiency" (Huang; ¶ 0034).

b. Regarding claim 6, Takahashi discloses the one or more neural networks (fully neural network at Fig. 6, col. 12, lines 39-55).
However, Takahashi does not explicitly disclose further detecting one or more instances of an object in the one or more first images after removal of the at least one depiction of the at least one object corresponding to the object classifications and causing the one or more first images to be regenerated to attempt to remove the one or more instances.

Huang discloses wherein the one or more neural networks are further to detect one or more instances of an object in the one or more first images after removal of the one or more first objects and cause the one or more first images to be regenerated to attempt to remove the one or more instances (Huang discloses that "the processor removes a specific object in the training screens to generate a plurality of preprocessed training images. Here, the manner in which the processor 130 removes the specific object from the training screens is the same as the manner in which the processor 130 removes the specific object from the first person view screen in step S202. In other words, the processor 130 can also respectively cut the training screens into a plurality of sub training screens and take at least one sub training screen of each training screen to generate a plurality of preprocessed training images" at Fig. 7-S702 and ¶ 0039).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the removal process of Huang to Takahashi's neural network. The suggestion/motivation would have been to ensure that "the recognition accuracy of the neural network model can be significantly improved" (Huang; ¶ 0027) and to "improve a processing efficiency" (Huang; ¶ 0034).

c. Regarding claims 11-12, claims 11-12 are analogous and correspond to claims 5-6, respectively. See the rejection of claims 5-6 for further explanation.

d. Regarding claims 17-18, claims 17-18 are analogous and correspond to claims 5-6, respectively. See the rejection of claims 5-6 for further explanation.

e.
Regarding claims 23-24, claims 23-24 are analogous and correspond to claims 5-6, respectively. See the rejection of claims 5-6 for further explanation.

f. Regarding claims 29-30, claims 29-30 are analogous and correspond to claims 5-6, respectively. See the rejection of claims 5-6 for further explanation.

Conclusion

The following is a list of references pertinent to the claimed invention:

Frolova et al. (US 2021/0081754 A1): Systems and methods are disclosed for error correction in convolutional neural networks. In one implementation, a first image is received. A first activation map is generated with respect to the first image within a first layer of the convolutional neural network. A correlation is computed between data reflected in the first activation map and data reflected in a second activation map associated with a second image. Based on the computed correlation, a linear combination of the first activation map and the second activation map is used to process the first image within a second layer of the convolutional neural network. An output is provided based on the processing of the first image within the second layer of the convolutional neural network.

Shechtman et al. (US 2019/0251401 A1): The present disclosure relates to an image composite system that employs a generative adversarial network to generate realistic composite images. For example, in one or more embodiments, the image composite system trains a geometric prediction neural network using an adversarial discrimination neural network to learn warp parameters that provide correct geometric alignment of foreground objects with respect to a background image. Once trained, the determined warp parameters provide realistic geometric corrections to foreground objects such that the warped foreground objects appear to blend into background images naturally when composited together.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 a.m. to 5:00 p.m.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mahmoud, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANCY BITAR/
Primary Examiner, Art Unit 2664
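The ArtGAN citation in the rejection above turns on a discriminator that also classifies ("consume the output image x and run it through the classifier clsNet") and a discriminator loss LD used to update D. As a reader aid, here is a minimal NumPy sketch of an auxiliary-classifier discriminator loss in that spirit; it is illustrative only, not the cited ArtGAN or CVAE-GAN implementation, and all names in it are hypothetical:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, cls_logits, labels):
    """Binary real/fake loss plus an auxiliary classification term,
    in the spirit of a conditional-categorical GAN discriminator.

    d_real, d_fake: discriminator probabilities in (0, 1)
    cls_logits:     (N, K) class-head scores for the real images
    labels:         (N,) integer class labels
    """
    eps = 1e-12
    # Standard GAN discriminator term: score real images high, fakes low.
    adv = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Auxiliary classifier term: cross-entropy of the class head on reals.
    shifted = cls_logits - cls_logits.max(axis=1, keepdims=True)  # stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    aux = -np.mean(log_probs[np.arange(len(labels)), labels])
    return adv + aux

# Toy usage: a discriminator that is confident and correct has a small loss.
rng = np.random.default_rng(0)
d_real = np.full(4, 0.9)
d_fake = np.full(4, 0.1)
cls_logits = rng.normal(size=(4, 3))
labels = cls_logits.argmax(axis=1)  # "correct" labels for the toy class head
loss = discriminator_loss(d_real, d_fake, cls_logits, labels)
print(float(loss))
```

Both terms are driven to zero together, which is why updating D on this combined loss teaches the discriminator the per-class structure the Office Action refers to as object classifications.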

Prosecution Timeline

Jun 26, 2020
Application Filed
Sep 25, 2022
Non-Final Rejection — §103
Mar 30, 2023
Response Filed
Jul 01, 2023
Final Rejection — §103
Aug 17, 2023
Interview Requested
Sep 01, 2023
Applicant Interview (Telephonic)
Sep 07, 2023
Examiner Interview Summary
Jan 08, 2024
Notice of Allowance
Mar 08, 2024
Response after Non-Final Action
Mar 15, 2024
Response after Non-Final Action
May 19, 2024
Non-Final Rejection — §103
Jun 12, 2024
Interview Requested
Jun 20, 2024
Applicant Interview (Telephonic)
Jun 21, 2024
Examiner Interview Summary
Oct 22, 2024
Interview Requested
Dec 09, 2024
Response Filed
Mar 07, 2025
Final Rejection — §103
May 08, 2025
Interview Requested
Jul 02, 2025
Examiner Interview Summary
Jul 02, 2025
Examiner Interview (Telephonic)
Jul 25, 2025
Request for Continued Examination
Jul 30, 2025
Response after Non-Final Action
Sep 05, 2025
Non-Final Rejection — §103
Dec 23, 2025
Response Filed
Mar 18, 2026
Final Rejection — §103
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599437
PRE-PROCEDURE PLANNING, INTRA-PROCEDURE GUIDANCE FOR BIOPSY, AND ABLATION OF TUMORS WITH AND WITHOUT CONE-BEAM COMPUTED TOMOGRAPHY OR FLUOROSCOPIC IMAGING
2y 5m to grant · Granted Apr 14, 2026

Patent 12597132
IMAGE PROCESSING METHOD AND APPARATUS
2y 5m to grant · Granted Apr 07, 2026

Patent 12597240
METHOD AND SYSTEM FOR AUTOMATED CENTRAL VEIN SIGN ASSESSMENT
2y 5m to grant · Granted Apr 07, 2026

Patent 12597189
METHODS AND APPARATUS FOR SYNTHETIC COMPUTED TOMOGRAPHY IMAGE GENERATION
2y 5m to grant · Granted Apr 07, 2026

Patent 12591982
MOTION DETECTION ASSOCIATED WITH A BODY PART
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 83%
With Interview: 91% (+8.2%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 946 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month