Prosecution Insights
Last updated: April 18, 2026
Application No. 18/319,689

MULTI-DOMAIN GENERATIVE ADVERSARIAL NETWORKS FOR SYNTHETIC DATA GENERATION

Status: Final Rejection (§103)
Filed: May 18, 2023
Examiner: SHEN, QUN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (575 granted / 754 resolved; +14.3% vs TC avg, above average)
Interview Lift: +38.6% (resolved cases with interview)
Avg Prosecution: 3y 1m (34 currently pending)
Career History: 788 total applications across all art units
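The headline figures above are simple ratios. As a minimal sketch: the career allow rate is granted over resolved cases, and the interview lift appears consistent with a relative increase of the with-interview allow rate over the without-interview rate. The ~71.4% without-interview baseline below is back-solved from the displayed 99% and +38.6% figures; it is an assumption for illustration, not a number reported by the dashboard.

```python
# Sketch of how the displayed figures may be derived (assumptions noted above).
granted, resolved = 575, 754

# Career allow rate: granted / resolved cases.
allow_rate = granted / resolved          # 0.7626 -> displayed as 76%

# Interview lift, assumed here to be the relative increase of the
# with-interview allow rate over the without-interview baseline.
with_interview = 0.99                    # displayed "With Interview: 99%"
lift = 0.386                             # displayed "+38.6%"
without_interview = with_interview / (1 + lift)   # back-solved, ~71.4%

print(f"allow rate: {allow_rate:.1%}")                       # 76.3%
print(f"implied without-interview rate: {without_interview:.1%}")  # 71.4%
```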

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 61.4% (+21.4% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 754 resolved cases
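The per-statute deltas above are each consistent with a simple difference from a single Tech Center average estimate of 40% (e.g. 5.6% - 40% = -34.4%). That 40% figure is inferred by back-solving rate minus delta for each statute; it is an assumption, not a value the page reports.

```python
# Sketch: each delta equals (examiner rate - Tech Center average estimate),
# with the TC average inferred as 40% for all four statutes (an assumption).
tc_avg = 40.0

rates = {"101": 5.6, "103": 61.4, "102": 8.4, "112": 16.8}
deltas = {s: round(r - tc_avg, 1) for s, r in rates.items()}
print(deltas)  # {'101': -34.4, '103': 21.4, '102': -31.6, '112': -23.2}
```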

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is a Final Office Action on the merits. Claims 1-20, as amended, are presently pending and have been elected and considered below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/19/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0019642 A1, Dua et al. (hereinafter Dua), in view of US 2019/0295302 A1, Fu et al. (hereinafter Fu), and further in view of US 2021/0358082 A1, Zhu et al. (hereinafter Zhu).

As to claim 1, Dua discloses a processor, comprising: one or more circuits to: generate input data according to a noise function (Fig 6, generate noise input vector; pars 0020, 0023-0024, 0027); generate a set of features using the input data (pars 0108-0109); determine, using a generative machine-learning model and based at least on the input data, a plurality of output images each corresponding to one of a respective plurality of image domains (Figs 3, 7, generate output images based on noise input; pars 0006, 0020, 0025); and present, using a display device, the plurality of output images (Fig 3; par 0131).
Dua does not expressly disclose the generative machine-learning model to generate, using the set of features, a plurality of morph maps each corresponding to one of the respective plurality of image domains.

Fu, in the same or similar field of endeavor, further teaches the generative machine-learning model to generate a plurality of morph maps each corresponding to one of the respective plurality of image domains (Figs 1, 3, 6, 8; pars 0037, 0040, 0048, 0105, 0113, 0126, 0132, 0161-0162) and present, using a display device, the plurality of output images (Figs 13-14, 22; par 0188).

Zhu, in the same or similar field of endeavor, further teaches the generative machine-learning model to generate a plurality of morph maps each corresponding to one of the respective plurality of image domains (Figs 2A-2B, 8; pars 0004, 0008, 0011-0012, 0014), apply the plurality of morph maps to the set of features to generate a plurality of morphed features, and generate the plurality of output images using the plurality of morphed features (Figs 2A-2B, 3, 8; pars 0014, 0019, 0023, 0044-0045, 0068-0069, 0084, 0093-0095, 0214, 0219).

Therefore, considering Dua, Fu, and Zhu’s teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Fu’s and Zhu’s teachings in Dua’s processor to provide corresponding morphed output images mapped to or based on input images of a generative machine-learning model/network.

As to claim 2, Dua as modified discloses the processor of claim 1, wherein the one or more circuits are to update the generative machine-learning model by applying the input data to a generative neural network of the generative machine-learning model to generate the set of features (Fu: pars 0003, 0006-0009, 0011, generative neural network of the generative machine-learning model being updated to generate a set of output features).
As to claim 3, Dua as modified discloses the processor of claim 2, wherein the one or more circuits are to update the generative machine-learning model by applying the plurality of morph maps to the set of output features to generate the set of morphed features (Fu: Figs 1, 8; pars 0037, 0039-0040, 0048, 0073, 0091, 0099, 0112-0113; Zhu: Figs 2A-2B).

As to claim 4, Dua as modified discloses the processor of claim 1, wherein the one or more circuits are to update the generative machine-learning model based at least on a plurality of outputs of a respective plurality of discriminator models that respectively receive the plurality of output images as input (Fu: Figs 2B, 2D; pars 0004, 0037, 0040, 0042, 0044, 0046), each of the respective plurality of discriminator models corresponding respectively to one of the respective plurality of image domains (Fu: Figs 2B, 2D; pars 0004, 0037, 0040, 0042, 0044, 0046).

As to claim 5, Dua as modified discloses the processor of claim 1, wherein each of the respective plurality of image domains corresponds to a geometrically different domain (Fu: Figs 2C, 3; pars 0004, 0034, 0037-0039, different segmentation representing geometrically different domain).

As to claim 6, Dua as modified discloses the processor of claim 1, wherein the plurality of morph maps each comprises a pixel-wise transformation vector (Fu: pars 0004, 0039, 0061, 0076, 0117).

As to claim 7, Dua as modified discloses the processor of claim 1, wherein the generative machine-learning model comprises a plurality of rendering layers updated to generate the plurality of output images (Fu: Figs 2A-2C; pars 0133, 0143, 0154, convolution layers being rendering layers).

As to claim 8, Dua as modified discloses the processor of claim 7, wherein the plurality of rendering layers receive a sum calculated based at least on the plurality of morphed features (Fu: pars 0054, 0137-0138; Zhu: Figs 2A-2B).
As to claim 9, Dua as modified discloses the processor of claim 7, wherein the plurality of rendering layers comprise at least one shared weight value (Fu: pars 0084, 0089, 0105, 0147).

As to claim 10, Dua as modified discloses the processor of claim 1, wherein the generative machine-learning model comprises a plurality of layers, at least one layer of the generative machine-learning model being a convolution layer (Fu: Figs 1, 2A, 13; pars 0041, 0132, convolution layers in generative network).

As to claim 11, Dua as modified discloses the processor of claim 1, wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations (Fu: pars 0034, 0036, 0113, deep generative models); a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for performing generative AI operations using a large language model (LLM) (Dua: pars 0018, 0027); a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.

As to claim 12, it is rejected with the same reason as set forth in claims 1 and 4.

As to claim 13, Dua as modified discloses the processor of claim 12, wherein the generative machine-learning model comprises at least one of: a pre-trained generative neural network; a variational autoencoder (VAE); or a generative adversarial network (GAN) (Dua: Figs 3, 5; pars 0006, 0022; Fu: pars 0034, 0036-0037).
As to claims 14-15, they are rejected with the same reason as claims 5-6.

As to claim 16, Dua as modified discloses the processor of claim 12, wherein the plurality of morph maps are generated using at least a first layer of the generative machine-learning model (Fu: Figs 1, 2A, 13; pars 0041, 0132), and the one or more circuits are to update the generative machine-learning model by applying the plurality of morph maps to a set of features generated by at least a second layer of the generative machine-learning model (Fu: pars 0054-0056, 0133, 0175).

As to claim 17, it is rejected with the same reason as claim 11.

As to claim 18, it is a method claim corresponding to claim 1. The rejection of claim 1 is therefore incorporated herein.

As to claims 19-20, they are rejected with the same reason as set forth in claims 2-3, respectively.

Response to Arguments

Applicant’s arguments have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner’s Note

The examiner has cited particular column and line numbers, paragraphs, and/or figures in the reference(s) as applied to the claims for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to fully consider the reference(s) in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or cited by the Examiner.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUN SHEN, whose telephone number is (571) 270-7927. The examiner can normally be reached Mon-Fri 8:30-5:50 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUN SHEN/
Primary Examiner, Art Unit 2662

Prosecution Timeline

May 18, 2023
Application Filed
Nov 01, 2025
Non-Final Rejection — §103
Jan 26, 2026
Examiner Interview Summary
Jan 26, 2026
Applicant Interview (Telephonic)
Mar 05, 2026
Response Filed
Apr 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602799: REGISTRATION CHAINING WITH INFORMATION TRANSFER (2y 5m to grant; granted Apr 14, 2026)
Patent 12579609: High Resolution Input Processing in a Neural Network (2y 5m to grant; granted Mar 17, 2026)
Patent 12566972: DATA DENOISING METHOD AND RELATED DEVICE (2y 5m to grant; granted Mar 03, 2026)
Patent 12561997: CONTEXT-BASED REVIEW TRANSLATION (2y 5m to grant; granted Feb 24, 2026)
Patent 12560726: Low-Power-Consumption Positioning Method and Related Apparatus (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+38.6%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 754 resolved cases by this examiner. Grant probability derived from career allow rate.
