Prosecution Insights
Last updated: April 19, 2026
Application No. 18/207,953

USING NEURAL NETWORKS TO GENERATE SYNTHETIC DATA

Non-Final OA — §101, §102, §103, §112
Filed
Jun 09, 2023
Examiner
TUCKER, WESLEY J
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Nvidia Corporation
OA Round
1 (Non-Final)
83%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
90%
With Interview

Examiner Intelligence

Grants 83% — above average
83%
Career Allow Rate
596 granted / 715 resolved
+21.4% vs TC avg
+6.1%
Interview Lift
Moderate lift, based on resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
19 currently pending
Career history
734
Total Applications
across all art units
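The headline figures in the cards above follow directly from the raw counts. A minimal sketch (the nearest-whole-percent rounding convention is an assumption about how the dashboard displays rates):

```python
# Reproduce the examiner-intelligence headline figures from the raw counts.
granted = 596    # career grants
resolved = 715   # career resolved cases
pending = 19     # currently pending applications

allow_rate = granted / resolved          # 0.8336...
total_applications = resolved + pending

print(f"Career allow rate: {allow_rate:.0%}")      # 83%
print(f"Total applications: {total_applications}")  # 734
```

The 83% career allow rate is the same number the dashboard reuses as the baseline grant probability in the projections section.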

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 39.4% (-0.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 715 resolved cases
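The per-statute deltas are each rate minus the Tech Center average estimate (the black line in the chart). A minimal sketch; the single 40% TC baseline is inferred from the figures shown, since every displayed delta equals the rate minus 40%:

```python
# Per-statute rates vs. the Tech Center average estimate.
# The 40% baseline is an inference from the displayed deltas, not a published figure.
tc_avg = 0.40
rates = {"§101": 0.123, "§103": 0.357, "§102": 0.394, "§112": 0.083}

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")  # e.g. §101: 12.3% (-27.7% vs TC avg)
```

The uniform baseline suggests the chart compares against one TC-wide estimate rather than statute-specific averages.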

Office Action

§101 §102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 7 and 13 and dependent claim 6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to one neural network training another without significantly more. The claims recite:

1. A processor comprising: one or more circuits to use one or more first neural networks to train one or more second neural networks to perform according to one or more performance metrics.

7. A method, comprising: using one or more first neural networks to train one or more second neural networks to perform according to one or more performance metrics.

13. A system, comprising: one or more processors to use one or more first neural networks to train one or more second neural networks to perform according to one or more performance metrics.

This judicial exception is not integrated into a practical application because no practical application is recited. There is simply a neural network training another neural network. There is no recitation of how a neural network is trained, and no recitation of what the other neural network is trained to do. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because only two neural networks are recited, they are simply generic computing components with no specifics as to how or why they function, and there are no specifics recited about any specialized hardware or computing components.
Without any specific recitation of what the neural networks do, the claim is directed to the abstract idea of a computing component training another computing component with no specific recited task. The training of a neural network by another neural network, without any recitation of data, intent, or outcome, is essentially a purely mathematical relationship with no recited practical application. There is no integration into a practical application because there is no recitation of what the computing components do. Appropriate correction is required.

Claim 6 merely recites a kind of neural network but does not recite a function or task performed by that neural network. Claim 6 is therefore also rejected under 101 as being directed to an abstract idea.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “poorly” in claim 5 is a relative term which renders the claim indefinite. The term “poorly” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-11, 13-16 and 18-19 are rejected under 35 U.S.C. 102(a) as being anticipated by USPN 2021/0174072 to Zhang et al.

With regard to claim 1, Zhang discloses a processor (paragraphs [0017]-[0019], [0161]-[0162], Fig. 9, processor 1001) comprising: one or more circuits to use one or more first neural networks (Fig. 7, neural network models 30g and 30h) to train one or more second neural networks to perform according to one or more performance metrics (Fig. 7, neural network sample generative model 30d generates synthetic facial images by exaggerating facial expressions to create exaggerated images of facial micro-expressions. The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. See paragraphs [0107]-[0109]).

With regard to claim 2, Zhang discloses the processor of claim 1, wherein: the one or more second neural networks are to perform facial recognition (Fig. 7, neural networks 30g and 30h recognize facial expressions); and the one or more first neural networks are to generate synthetic data based (Fig.
7, neural network sample generative model 30d, consisting of models 30b and 30c, is used to generate synthetic facial images by exaggerating facial expressions to create exaggerated images of facial micro-expressions), at least in part, on the one or more performance metrics of the one or more second neural networks performing facial recognition using one or more images (The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. See paragraphs [0107]-[0109]).

With regard to claim 3, Zhang discloses the processor of claim 1, wherein the one or more first neural networks are to generate one or more synthetic images based, at least in part, on the one or more performance metrics to train the one or more second neural networks (The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]).

With regard to claim 4, Zhang discloses the processor of claim 1, wherein the one or more second neural networks are to be trained to perform facial recognition based on one or more synthetic images generated by the one or more first neural networks and one or more other images (The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]).
With regard to claim 5, Zhang discloses the processor of claim 1, wherein the one or more performance metrics comprises information indicating which group of images having facial features or attributes where the one or more second neural networks have performed poorly on (The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]. The training continues until the loss values are low enough. This is interpreted as images where the neural networks have “performed poorly.”).

With regard to claim 6, Zhang discloses the processor of claim 1, wherein the one or more second neural networks comprise one or more Generative Adversarial Networks (GANs) (paragraph [0072], “Therefore, the image augmentation model may correspond to a sample generative model in an adversarial network, and the adversarial network includes the sample generative model and a sample discriminative model.”).

With regard to claim 7, the discussion of claim 1 applies.

With regard to claim 8, Zhang discloses the method of claim 7, further comprising: wherein the one or more first neural networks are to use the one or more performance metrics to generate one or more synthetic images (The neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights.
See paragraphs [0107]-[0109]), wherein the one or more performance metrics comprise information from one or more groups of images having one or more facial features that cause the second neural network to misidentify the one or more facial features (paragraphs [0036]-[0043], and Fig. 2, 10b, 10c, 10d, 10e, 10f, 10g, 10h, 10k; facial features are recognized and exaggerated in order to better recognize facial expressions. If the facial features and corresponding expressions are misidentified or poorly recognized, then the loss values or performance metrics discussed in paragraphs [0107]-[0109] will require the facial expression recognition and synthetic image generation to adjust weights to continue training the neural networks).

With regard to claim 9, Zhang discloses the method of claim 7, further comprising: using the one or more second neural networks to perform facial recognition; calculating the one or more performance metrics of the one or more second neural networks (If the facial features and corresponding expressions are misidentified or poorly recognized, then the loss values or performance metrics discussed in paragraphs [0107]-[0109] will require the facial expression recognition and synthetic image generation to adjust weights to continue training the neural networks); identifying, based at least in part on the one or more performance metrics, a group of images where the one or more performance metrics are below a defined threshold (paragraphs [0107]-[0109], when the loss values are large enough to continue to train and adjust the weights of the neural networks, this is interpreted as a performance metric being below a threshold of satisfaction); and generating, by the one or more first neural networks, one or more synthetic images associated with the identified one or more groups of images (paragraphs [0107]-[0109], when the loss values are large enough to continue to train and adjust the weights of the neural networks, this is interpreted as a performance metric being
below a threshold of satisfaction. The generative neural networks then continue to train the system and generate synthetic images with the determined adjusted weights).

With regard to claim 10, Zhang discloses the method of claim 7, further comprising: identifying a set of images from one or more training images used to train the one or more second neural networks that are below a quantity threshold, wherein the set of images comprises images of a specific group (If the facial features and corresponding expressions are misidentified or poorly recognized, then the loss values or performance metrics discussed in paragraphs [0107]-[0109] will require the facial expression recognition and synthetic image generation to adjust weights to continue training the neural networks); causing the one or more first neural networks to generate one or more synthetic images having facial features that correspond to the facial features of the set of images (If the facial features and corresponding expressions are misidentified or poorly recognized, then the loss values or performance metrics discussed in paragraphs [0107]-[0109] will require the facial expression recognition and synthetic image generation to adjust weights to continue training the neural networks by generating additional synthetic images); and using the one or more synthetic images and the one or more training images to train the one or more second neural networks (paragraphs [0107]-[0109], the additional generated synthetic images are used to continue training the models).

With regard to claim 11, Zhang discloses the method of claim 7, further comprising: using the one or more second neural networks to perform pattern recognition (Fig. 2, model 20c is considered to recognize the pattern of different facial features and the exaggerated pattern of facial features to generate a facial expression matching the determined facial feature pattern.
See also paragraphs [0036]-[0042]); and using the one or more first neural networks to generate synthetic data, based at least in part on the one or more performance metrics of the one or more second neural networks performing pattern recognition of one or more images (paragraphs [0036]-[0042], the performance or loss values of the facial expression recognition of 20c are used to adjust the weights of the facial feature exaggeration components).

With regard to claim 13, the discussion of claim 1 applies.

With regard to claim 14, Zhang discloses the system of claim 13, wherein the one or more processors are to: cause the one or more first neural networks to generate one or more synthetic images to train the one or more second neural networks (Fig. 7, the neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]), wherein the one or more synthetic images are generated based, at least in part, on the one or more performance metrics of the one or more second neural networks performing facial recognition on one or more other images (Fig. 7, the neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]).
With regard to claim 15, Zhang discloses the system of claim 13, wherein the one or more processors are to further: select a reference image; and use the one or more first neural networks to synthetically generate variations of the reference image to train the one or more second neural networks (Fig. 2, neural network models 20a and 20b are used to synthetically generate variations of the reference image 10a in order to exaggerate a facial expression for recognition).

With regard to claim 16, Zhang discloses the system of claim 13, wherein the one or more processors are to: use the one or more second neural networks to perform object classification (Fig. 2, neural network model 20c performs object classification by detecting and recognizing facial components or objects in order to detect and classify a facial expression); and use the one or more first neural networks to generate synthetic data based, at least in part, on the one or more performance metrics of the one or more second neural networks performing object classification of one or more images (Fig. 7, the neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models. The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]).

With regard to claim 18, Zhang discloses the system of claim 13, wherein the one or more processors cause the one or more first neural networks to generate one or more synthetic images of higher quality than one or more images initially used to train the one or more second neural networks (Fig. 7, the neural networks 30g and 30h are used to train the generative model 30d by calculating error function 30k, which consists of loss values for the models, and adjusting the weights of the models, thereby training the generative models.
The loss values are considered to be the performance metrics used to train the models by adjusting the weights. See paragraphs [0107]-[0109]. Adjusting the weights of the models that generate the synthetic images is interpreted as causing the model to generate synthetic images of higher quality. The training is recursive, and the quality of the synthetic images should increase over time with each iteration of weight adjustment).

With regard to claim 19, Zhang discloses the system of claim 13, wherein the one or more performance metrics comprises information indicating which types of features result in the one or more second neural networks misidentifying faces (paragraphs [0036]-[0043], and Fig. 2, 10b, 10c, 10d, 10e, 10f, 10g, 10h, 10k; facial features are recognized and exaggerated in order to better recognize facial expressions. If the facial features and corresponding expressions are misidentified or poorly recognized, then the loss values or performance metrics discussed in paragraphs [0107]-[0109] will require the facial expression recognition and synthetic image generation to adjust weights to continue training the neural networks).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0174072 to Zhang et al. and 2023/0368502 to Ranganathan et al.
With regard to claims 12 and 17, Zhang discloses the method of claim 7, but does not disclose further comprising: determining a reference image associated with an identity; filtering one or more synthetic images generated by the one or more first neural networks by comparing the one or more synthetic images with the reference image; and discarding a synthetic image from the one or more synthetic images that comprise facial features that are different with the identity of the reference image.

Ranganathan discloses a system for training facial image recognition by generating synthetic images in the form of altered facial images of a reference facial image of a specific person (see Fig. 1B). The system then attempts to recognize the image as the image of the person after the image has been synthetically altered (see Fig. 1C). Facial images are then attempted to be recognized, and based on the comparison the face is recognized as the known person (paragraphs [0017]-[0020]). If the image does not match the synthetic image, then the person is not considered to be identified as the registered reference image person. Therefore it would have been obvious to one of ordinary skill in the art before the time of filing to use the synthetic image generation taught by Ranganathan in combination with the facial feature recognition of Zhang in order to provide a robust facial verification system.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0174072 to Zhang et al. and 2024/0326825 to Quintao Severgnini et al.

With regard to claim 20, Zhang discloses the system of claim 13, but does not disclose wherein the one or more processors are to use the one or more second neural networks to perform facial recognition in an autonomous vehicle. Monitoring facial images and facial expressions of people inside vehicles is well known in the art.
Quintao Severgnini discloses monitoring an occupant of the vehicle to recognize the face of the occupant as well as the emotional response of the occupant through facial expression recognition. Therefore it would have been obvious to one of ordinary skill in the art before the time of filing to use the facial recognition taught by Zhang in the environment of monitoring the face of an occupant of an autonomous vehicle as taught by Quintao Severgnini in order to observe and track the occupant’s emotional state.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER, whose telephone number is (571) 272-7427. The examiner can normally be reached 9AM-5PM, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J TUCKER/Primary Examiner, Art Unit 2661

Prosecution Timeline

Jun 09, 2023
Application Filed
Oct 22, 2025
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597221
IMAGE PROCESSING APPARATUS AND ELECTRONIC APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12597222
METHOD AND SYSTEM FOR DETERMINING A REGION OF WATER CLEARANCE OF A WATER SURFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12592057
SYSTEM AND METHOD FOR DETECTING AND CLASSIFYING RETINAL MICROANEURYSMS
2y 5m to grant Granted Mar 31, 2026
Patent 12585939
SYSTEMS AND METHODS FOR DISTRIBUTED DATA ANALYTICS
2y 5m to grant Granted Mar 24, 2026
Patent 12586410
Method and Device for Dynamic Recognition of Emotion Based on Facial Muscle Movement Monitoring
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+6.1%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 715 resolved cases by this examiner. Grant probability derived from career allow rate.
