Prosecution Insights
Last updated: April 19, 2026
Application No. 17/827,141

DOMAIN ADAPTATION USING DOMAIN-ADVERSARIAL LEARNING IN SYNTHETIC DATA SYSTEMS AND APPLICATIONS

Final Rejection: §101, §103, §112
Filed: May 27, 2022
Examiner: THOMPSON, KYLE ALLMAN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 5 granted / 6 resolved; +28.3% vs TC avg)
Interview Lift: +33.3% (allowance rate among resolved cases with an interview vs without)
Typical Timeline: 4y 3m average prosecution; 22 applications currently pending
Career History: 28 total applications across all art units

Statute-Specific Performance

§101: 40.5% (+0.5% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

TC-average reference values are estimates. Based on career data from 6 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive.

With respect to 101, Applicant argues that the claims allegedly provide an improvement to the technological field of machine learning, as stated on pages 10 and 11 of the remarks.

Examiner’s answer: Regarding Applicant’s assertion that the claim as a whole is directed to an improvement in neural network training, Examiner respectfully disagrees, as discussed in MPEP § 2106.05(a). As seen on pages 10 and 11 of Applicant’s remarks, paragraphs 17 and 18 of the applicant’s specification merely repeat the claimed functions without explaining how to implement the invention. As for paragraphs 48-59, the specification does not describe the invention in sufficient detail that an ordinary artisan would recognize the claimed invention as providing an improvement. Thus, the judicial exception is not integrated into a practical application.

With respect to 103: Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “reduced” in claims 1, 11 and 16 is a relative term which renders the claim indefinite. The term “reduced” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Claims 2-10, 12-15, and 17-20, which depend directly or indirectly from claims 1, 11 and 16, respectively, are rejected under 35 U.S.C. 112(b) as being indefinite under the same rationale as claims 1, 11 and 16.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis below of the claims’ subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”) and the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, 89 Fed. Reg. 58128-58138 (July 17, 2024) (“2024 AI SME Update”).
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself.

Claim 1

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.
Step 2A Prong 1:

generating one or more outputs (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

computing, with respect to the one or more parameters and using one or more loss values corresponding to at least two competing tasks and determined from the one or more outputs using one or more cost functions, a first gradient and a second gradient of a higher order than the first gradient; (Mathematical Concepts: are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations)

adjusting the one or more first values of the one or more parameters using a combination of the first gradient and the second gradient to one or more second values such that the one or more loss values are reduced. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

using one or more neural networks, the one or more neural networks comprising one or more parameters having one or more first values; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
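As a technical aside, the “computing” and “adjusting” limitations quoted above describe combining a first-order gradient with a higher-order gradient so that the loss decreases. The claim language admits many concrete schemes; the sketch below is one minimal, hypothetical reading (a curvature-damped step on a one-dimensional quadratic loss), with every function name and constant invented for illustration and not taken from the application.

```python
# Hypothetical sketch: compute a first gradient and a higher-order (second)
# gradient of a cost function, then adjust the parameter using a combination
# of the two such that the loss value is reduced. The quadratic loss and the
# damped-Newton-style combination are illustrative assumptions only.

def loss(w):
    return (w - 3.0) ** 2            # toy cost function with minimum at w = 3

def first_gradient(w):
    return 2.0 * (w - 3.0)           # dL/dw

def second_gradient(w):
    return 2.0                       # d^2L/dw^2, the higher-order gradient

def adjust(w, lr=0.5):
    g1 = first_gradient(w)
    g2 = second_gradient(w)
    # One possible "combination": scale the first-order step by the curvature.
    return w - lr * g1 / (g2 + 1e-8)

w = 0.0
for _ in range(5):
    w_next = adjust(w)
    assert loss(w_next) < loss(w)    # the loss value is reduced at every step
    w = w_next
```

Whether such a combination is claimed as, e.g., a statistical combination (claim 6) or an average (claim 14) is left open by the independent claims; the point of the sketch is only the shape of the update step.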
using one or more neural networks, the one or more neural networks comprising one or more parameters having one or more first values; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) As an ordered whole, the claim is directed to a method of creating and adjusting gradients; this is nothing more than creating and modifying input values. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 2 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the one or more neural networks include a first neural network and a second neural network, the first neural network and the second neural network are trained using adversarial training. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the one or more neural networks include a first neural network and a second neural network, the first neural network and the second neural network are trained using adversarial training. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 3 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 3 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the one or more neural networks include a plurality of neural networks and the plurality of neural networks are trained, at least in part, by: (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment.
MPEP § 2106.05(h))

training at least one first neural network of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the at least one first neural network and a second domain corresponding to a second dataset input to the at least one first neural network; and (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

training at least one second neural network of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the one or more neural networks include a plurality of neural networks and the plurality of neural networks are trained, at least in part, by: (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

training at least one first neural network of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the at least one first neural network and a second domain corresponding to a second dataset input to the at least one first neural network; and (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

training at least one second neural network of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain.
(Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 4 incorporates the rejection of claim 3.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 3 are incorporated. Please see the analysis of claim 3 above. Regarding the method steps recited in claim 3, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 4 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the first domain corresponds to synthetic data and the second domain corresponds to real-world data. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the first domain corresponds to synthetic data and the second domain corresponds to real-world data. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment.
MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 5 incorporates the rejection of claim 3.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 3 are incorporated. Please see the analysis of claim 3 above. Regarding the method steps recited in claim 3, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 5 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

further comprising training, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) using one or more ground-truth labels assigned to the first dataset, at least one third neural network of the plurality of neural networks to classify the representation. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

further comprising training, (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) using one or more ground-truth labels assigned to the first dataset, at least one third neural network of the plurality of neural networks to classify the representation. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment.
MPEP § 2106.05(h))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 6 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 6 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the combination includes a statistical combination of at least the first gradient and the second gradient. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the combination includes a statistical combination of at least the first gradient and the second gradient. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”.
(See MPEP § 2106.05(I)(A))

Claim 7 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 7 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the first gradient is a first order gradient of the one or more cost functions and the second gradient is a second order gradient of the one or more cost functions. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the first gradient is a first order gradient of the one or more cost functions and the second gradient is a second order gradient of the one or more cost functions. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 8 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above.
Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 8 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the one or more neural networks include one or more adversarial neural networks and the method further includes determining convergence of the one or more parameters of the one or more adversarial neural networks to a local Nash Equilibria. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the one or more neural networks include one or more adversarial neural networks and the method further includes determining convergence of the one or more parameters of the one or more adversarial neural networks to a local Nash Equilibria. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 9 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients.
Therefore, claim 9 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the one or more neural networks include a gradient reversal layer. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the one or more neural networks include a gradient reversal layer. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 10 incorporates the rejection of claim 1.

Step 1: The claim recites a method, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 1 are incorporated. Please see the analysis of claim 1 above. Regarding the method steps recited in claim 1, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 10 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
In particular, the claim recites these additional elements:

further comprising using the one or more neural networks to perform one or more operations within a system, the system comprising or being comprised in at least one of: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) a control system for an autonomous or semi-autonomous machine (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

further comprising using the one or more neural networks to perform one or more operations within a system, the system comprising or being comprised in at least one of: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) a control system for an autonomous or semi-autonomous machine (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 11

Step 1: The claim recites a system, which is one of the four statutory categories of eligible matter.
Step 2A Prong 1:

generate one or more first outputs (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

determine, with respect to a joint set of parameters of the one or more first neural networks and the one or more second neural networks and using one or more loss values corresponding to at least two competing tasks of the one or more first neural networks and the one or more second neural networks and determined from the one or more first outputs and the one or more second outputs using one or more cost functions, a first gradient and a second gradient of a higher order than the first gradient; (Mathematical Concepts: are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations)

update values of the joint set of parameters using a combination of the first gradient and the second gradient such that the one or more loss values are reduced. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

one or more processing units to: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

of one or more first neural networks and one or more second outputs of one or more second neural networks; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
one or more processing units to: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

of one or more first neural networks and one or more second outputs of one or more second neural networks; (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) As an ordered whole, the claim is directed to a method of creating and adjusting gradients; this is nothing more than creating and modifying input values. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 12 incorporates the rejection of claim 11.

Step 1: The claim recites a system, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 11 are incorporated.

updating one or more first parameters of the one or more first neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

updating one or more second parameters of the one or more second neural networks to classify whether the representation corresponds to the first domain or the second domain.
(Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements:

wherein the values of the joint set of parameters are updated by: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the values of the joint set of parameters are updated by: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 13 incorporates the rejection of claim 12.

Step 1: The claim recites a system, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 12 are incorporated. Please see the analysis of claim 12 above. Regarding the method steps recited in claim 12, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 13 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
In particular, the claim recites these additional elements:

wherein the one or more processing units are further to train (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) using one or more ground-truth labels assigned to the first dataset, one or more third neural networks to classify the representation. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the one or more processing units are further to train (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) using one or more ground-truth labels assigned to the first dataset, one or more third neural networks to classify the representation. (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as “significantly more”. (See MPEP § 2106.05(I)(A))

Claim 14 incorporates the rejection of claim 12.

Step 1: The claim recites a system, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 12 are incorporated.

wherein the combination includes an average of at least the first gradient and the second gradient.
(Mathematical Concepts: are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim does not recite any additional limitations; therefore, there are no additional elements to integrate the abstract ideas into a practical application. (Merely asserting that a judicial exception is to be carried out on a generic computer (i.e., "updating the values from gradient combination" of base claim 11) cannot meaningfully integrate the judicial exception into a practical application. See MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply an exception (i.e., adjusting values for the combination of gradients method of base claim 11) cannot provide an inventive concept. The claim is not patent eligible.

Claim 15 incorporates the rejection of claim 12.

Step 1: The claim recites a system, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 12 are incorporated. Please see the analysis of claim 12 above. Regarding the method steps recited in claim 12, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 15 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
In particular, the claim recites these additional elements: wherein the one or more processing units are further to perform one or more operations using at least one of the one or more first neural networks or the one or more second neural networks, the system comprised in at least one of: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) a control system for an autonomous or semi-autonomous machine (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. wherein the one or more processing units are further to perform one or more operations using at least one of the one or more first neural networks or the one or more second neural networks, the system comprised in at least one of: (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) a control system for an autonomous or semi-autonomous machine (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer does not qualify as "significantly more". (See MPEP § 2106.05(I)(A)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as "significantly more". (See MPEP § 2106.05(I)(A))

Claim 16

Step 1: The claim recites a processor, which is one of the four statutory categories of eligible matter.
Step 2A Prong 1: updating one or more values of one or more parameters of the one or more neural networks using multi-order gradients corresponding to one or more cost functions such that one or more loss values corresponding to at least two competing tasks and determined from one or more outputs of the one or more neural networks are reduced. (Mathematical Concepts: are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements: one or more circuits to perform one or more operations using one or more neural networks, the one or more neural networks trained by, at least in part (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. one or more circuits to perform one or more operations using one or more neural networks, the one or more neural networks trained by, at least in part (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as "significantly more". (See MPEP § 2106.05(I)(A)) As an ordered whole, the claim is directed to a method of creating and adjusting gradients; this is nothing more than creating and modifying input values. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 17 incorporates the rejection of claim 16.
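As context for the "multi-order gradients" limitation of claim 16 discussed above, the following toy sketch (purely illustrative; the cost function, names, and values are invented and are not taken from the application or any cited reference) shows a Newton-style update that combines a first-order gradient with a second-order gradient so that a loss value is reduced:

```python
# Illustrative only: a Newton-style parameter update that uses both a
# first-order gradient and a second-order (higher-order) gradient of a
# toy quadratic cost function, reducing the loss in a single step.

def loss(w):
    # toy cost function with its minimum at w = 3.0
    return (w - 3.0) ** 2

def first_gradient(w):
    # first derivative of the loss
    return 2.0 * (w - 3.0)

def second_gradient(w):
    # second derivative (Hessian) of the loss; constant for a quadratic
    return 2.0

def multi_order_update(w):
    # Newton step: scale the first gradient by the inverse of the
    # second-order gradient
    return w - first_gradient(w) / second_gradient(w)

w0 = 0.0
w1 = multi_order_update(w0)
assert loss(w1) < loss(w0)  # the loss value is reduced
```

For a quadratic the Newton step lands on the minimum in one update; for a general cost function it only approximates it, which is why iterative evaluation of the value, gradient, and higher-order gradients would be needed.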
Step 1: The claim recites a processor, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 16 are incorporated. Please see the analysis of claim 16 above. Regarding the method steps recited in claim 16, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 17 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements: wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values is performed using adversarial training amongst the plurality of neural networks. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values is performed using adversarial training amongst the plurality of neural networks. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer does not qualify as "significantly more". (See MPEP § 2106.05(I)(A))

Claim 18 incorporates the rejection of claim 16.

Step 1: The claim recites a processor, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 16 are incorporated. Please see the analysis of claim 16 above.
Regarding the method steps recited in claim 16, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 18 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements: wherein the updating the one or more values includes transferring knowledge from a labeled source domain to an unlabeled target domain in a representation space learned by the one or more neural networks. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. wherein the updating the one or more values includes transferring knowledge from a labeled source domain to an unlabeled target domain in a representation space learned by the one or more neural networks. (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer does not qualify as "significantly more". (See MPEP § 2106.05(I)(A))

Claim 19 incorporates the rejection of claim 16.

Step 1: The claim recites a processor, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 16 are incorporated.
updating one or more first parameters of one or more first neural networks of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

updating one or more second parameters of one or more second neural networks of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. (Mental Processes: Can be performed in the human mind, or by a human using a pen and paper, making observations, evaluations and judgments as claimed)

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements: wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values of the one or more parameters includes (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values of the one or more parameters includes (Mere instructions to apply an exception as it recites only the idea of a solution or outcome as discussed in MPEP § 2106.05(f)) The courts have found that adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer does not qualify as "significantly more".
(See MPEP § 2106.05(I)(A))

Claim 20 incorporates the rejection of claim 16.

Step 1: The claim recites a processor, which is one of the four statutory categories of eligible matter.

Step 2A Prong 1: The judicial exceptions of claim 16 are incorporated. Please see the analysis of claim 16 above. Regarding the method steps recited in claim 16, these steps cover mental processes based on creating and modifying gradients. Therefore, claim 20 is directed to an abstract idea – mental processes (i.e., observation and evaluation/judgment/opinion).

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. In particular, the claim recites these additional elements: wherein the at least one processor is comprised in at least one of: (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h)) a system for performing simulation operations; (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. wherein the at least one processor is comprised in at least one of: (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h)) a system for performing simulation operations; (Field of use and technological environment, it does no more than generally link a judicial exception to a particular technological environment. MPEP § 2106.05(h)) The courts have found that generally linking the use of the judicial exceptions to a particular technological environment or field of use does not qualify as "significantly more".
(See MPEP § 2106.05(I)(A))

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 10, 11, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1) further in view of KIM (US 20210166346 A1).

Regarding claim 1, UCHIDA teaches generating one or more outputs using one or more neural networks, the one or more neural networks comprising one or more parameters having one or more first values; (See e.g. [0053], In the present embodiment as well, input data is input to the first neural network [parameters having one or more first values], and first gradients ΔE are obtained with respect to the respective weights (i.e. parameters) of the first neural network [one or more neural networks] based on the correct answer labels associated with the input data.) (See e.g.
[0057], The first neural network, upon receiving input data, outputs output data, which is compared with training labels.)

adjusting the one or more first values of the one or more parameters using a combination of the first gradient and the second gradient to one or more second values such that the one or more loss values are reduced. (See e.g. [0058], In this way, each weight [adjusting the one or more first values of the one or more parameters] of the first neural network is updated based on the sum of the first gradient obtained with respect to the weight and the second gradient obtained with respect to the input [combination of the first gradient and the second gradient] to the second neural network, the input being obtained from the weight.) (See e.g. [0043], A "loss function" is defined in order to calculate the error…Finally, a convergent calculation is executed in which the weights in the respective layers are adjusted to appropriate values such that the error is reduced. [one or more loss values are reduced])

UCHIDA does not teach computing, with respect to the one or more parameters and [using one or more loss values corresponding to at least two competing tasks and] determined from the one or more outputs using one or more cost functions, [a first] gradient and [a second] gradient of a higher order than the [first] gradient;

DIERCKX teaches computing, with respect to the one or more parameters and [using one or more loss values corresponding to at least two competing tasks and] determined from the one or more outputs using one or more cost functions, a [first] gradient and a [second] gradient of a higher order than the [first] gradient; (See e.g. [0031], The one or more operating conditions preferably comprise one or more parameters relating to a (preferably local) grid frequency measured at the asset.) (See e.g.
[0188], If not, then in step 628, the next point to evaluate is selected based on the current point and using information determined during steps 618-624, i.e. the value, gradient, and higher-order gradient(s) of the extended cost function.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA and DIERCKX before them, to include DIERCKX's cost function and higher gradient ordering, which would allow UCHIDA's program to adjust the input values of the neural networks. One would have been motivated to make such a combination in order to improve the robustness of the solution and account for dependence between assets by evaluating a model, as suggested by DIERCKX (US 20220373989 A1) (0164).

UCHIDA and DIERCKX do not teach using one or more loss values corresponding to at least two competing tasks.

KIM teaches using one or more loss values corresponding to at least two competing tasks (See e.g. [0156], The parameters included in the plurality of neural network layers may be optimized by learning results of the AI model. For example, the parameters may be updated such that a loss value or a cost value obtained by the AI model is reduced or minimized during the learning process.
Artificial neural networks may include deep neural networks (DNNs), for example, and without limitation, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generic Adversarial Networks (GANs) [two competing tasks] (Examiner's notes: a GAN generates a fake image (task 1) then discriminates a fake image from a real image (task 2)))

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX and KIM before them, to include KIM's joint set training and competing tasks, which would allow UCHIDA and DIERCKX's model to have improved accuracy and efficiency in training. One would have been motivated to make such a combination in order to improve the quality of output data and increase the processing rate, as suggested by KIM (US 20210166346 A1) (0046).

Regarding claim 2, in the combination of UCHIDA, DIERCKX and KIM, UCHIDA teaches the method of claim 1, wherein the one or more neural networks include a first neural network and a second neural network (See e.g. [0055], First, values based on the "weights" of the first neural network are input to the second neural network.)

UCHIDA and DIERCKX do not teach the first neural network and the second neural network are trained using adversarial training.

KIM teaches the first neural network and the second neural network are trained using adversarial training. (See e.g. [0156], Artificial neural networks may include deep neural networks (DNNs), for example, and without limitation, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generic Adversarial Networks (GANs) [adversarial training]) (See e.g.
[0231], According to an embodiment, through joint training of the first DNN and the second DNN, parameters of the second DNN)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX and KIM before them, to include KIM's joint set training and competing tasks, which would allow UCHIDA and DIERCKX's model to have improved accuracy and efficiency in training. One would have been motivated to make such a combination in order to improve the quality of output data and increase the processing rate, as suggested by KIM (US 20210166346 A1) (0046).

Regarding claim 6, in the combination of UCHIDA, DIERCKX and KIM, UCHIDA teaches the method of claim 1, wherein the combination includes a statistical combination of at least the first gradient and the second gradient. (See e.g. [0043], The training refers to an operation to appropriately update weights in the respective layers using an error between the output data from the output layer corresponding to input data and the correct answer label associated with the input data.)

Regarding claim 10, in the combination of UCHIDA, DIERCKX and KIM, UCHIDA teaches the method of claim 1, further comprising using the one or more neural networks to perform one or more operations within a system, the system comprising or being comprised in at least one of: a system for performing simulation operations (See e.g. [0003], The neural network refers to a mathematical model for expressing characteristics of the brain of a living body by computer simulation.)

Regarding claim 11, UCHIDA teaches one or more processing units (See e.g. [0082], as a result of being executed by one or more processors of a computer) generate one or more first outputs of one or more first neural networks and one or more second outputs of one or more second neural networks; (See e.g. [0074], With this, the second neural network outputs the extracted watermark bits.) (See e.g.
[0079], the first neural network outputs data using the trained model parameter)

update values [of the joint set] of parameters using a combination of the first gradient and the second gradient such that the one or more loss values are reduced. (See e.g. [0081], Specifically, similarly to the training of a common neural network, first gradients are obtained with respect to respective weights of the weight filters based on a training data group. When a certain weight is updated based on the sum of the first gradient obtained with respect to the weight and the second gradient obtained by the gradient calculation unit 50 with respect to a weight of the averaged weight filter calculated based on the weight.) (See e.g. [0043], A "loss function" is defined in order to calculate the error…Finally, a convergent calculation is executed in which the weights in the respective layers are adjusted to appropriate values such that the error is reduced. [one or more loss values are reduced])

UCHIDA does not teach determined from the one or more [first] outputs and the one or more [second] outputs using one or more cost functions, [a first] gradient and [a second] gradient of a higher order than the [first] gradient;

DIERCKX teaches determined from the one or more [first] outputs and the one or more [second] outputs using one or more cost functions, [a first] gradient and [a second] gradient of a higher order than the [first] gradient; (See e.g. [0123, Figure 3], Simulation of the assets and training of neural networks as described above results in a set of neural networks 304, each corresponding to a respective asset or asset type. The neural networks are shown as combined in a meta-network 306 which combines the neural network outputs into a combined output indicative of demand response performance of the assets.
In the described embodiment, the meta-network 306 is in the form of the cost function (though other approaches could be employed for combining the neural network outputs, such as a further neural network). The inputs to the neural networks 304 are the control configurations of the assets and the neural networks provide various performance indicator outputs used in the optimization of the cost function. [determined using one or more cost functions]) (See e.g. [0180], The current point is defined by a vector of the input variables, i.e. the neural network inputs that define the asset configurations of the assets over which the optimization is performed, and similarly the gradient is a gradient vector defined over those input variables.) (See e.g. [0188], the next point to evaluate is selected based on the current point and using information determined during steps [gradient parameters] 618-624, i.e. the value, gradient, and higher-order gradient(s) of the extended cost function. The process then returns to the evaluation step 618 based on the selected next point as the current point.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA and DIERCKX before them, to include DIERCKX's cost function and higher gradient ordering, which would allow UCHIDA's program to adjust the input values of the neural networks.
One would have been motivated to make such a combination in order to improve the robustness of the solution and account for dependence between assets by evaluating a model, as suggested by DIERCKX (US 20220373989 A1) (0164).

UCHIDA and DIERCKX do not teach determine, with respect to a joint set of parameters of the one or more first neural networks and the one or more second neural networks and [using one or more loss values] corresponding to at least two competing tasks of the one or more first neural networks and the one or more second neural networks and

KIM teaches determine, with respect to a joint set of parameters of the one or more first neural networks and the one or more second neural networks and [using one or more loss values] corresponding to at least two competing tasks of the one or more first neural networks and the one or more second neural networks and (See e.g. [0156], For example, the parameters may be updated such that a loss value or a cost value obtained by the AI model is reduced or minimized during the learning process. Artificial neural networks may include deep neural networks (DNNs), for example, and without limitation, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generic Adversarial Networks (GANs) [two competing tasks]) (Examiner's note: the two competing tasks are not being mapped to the GAN itself; rather, the generative task can be mapped to the first neural network and the discriminative task can be mapped to the second neural network. A GAN would also have shared parameters between the generative and discriminative tasks, thus providing a joint set of parameters.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX and KIM before them, to include KIM's joint set training and competing tasks, which would allow UCHIDA and DIERCKX's model to have improved accuracy and efficiency in training.
One would have been motivated to make such a combination in order to improve the quality of output data and increase the processing rate, as suggested by KIM (US 20210166346 A1) (0046).

Regarding claim 16, UCHIDA teaches at least one processor comprising: one or more circuits to perform one or more operations using one or more neural networks, the one or more neural networks trained by, at least in part, updating one or more values of one or more parameters of the one or more neural networks [using multi-order gradients corresponding to one or more cost functions] such that one or more loss values [corresponding to at least two competing tasks] and determined from one or more outputs of the one or more neural networks are reduced. (See e.g. [0082], as a result of being executed by one or more processors of a computer) (See e.g. [Claim 7], wherein the training unit is further configured to update the weights based on values obtained by adding first gradients of the weights of [updating one or more values of one or more parameters] the first neural network that have been obtained based on backpropagation and the respective second gradients.) (See e.g. [0043], A "loss function" is defined in order to calculate the error…Finally, a convergent calculation is executed in which the weights in the respective layers are adjusted to appropriate values such that the error is reduced. [one or more loss values are reduced])

UCHIDA does not teach using multi-order gradients corresponding to one or more cost functions.

DIERCKX teaches using multi-order gradients corresponding to one or more cost functions (See e.g. [0188], If not, then in step 628, the next point to evaluate is selected based on the current point and using information determined during steps 618-624, i.e. the value, gradient, and higher-order gradient(s) of the extended cost function.)
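The UCHIDA and DIERCKX passages quoted above both describe weight updates built from a combination of gradients of one or more cost functions. A toy sketch of such a combined-gradient update over two competing objectives follows (hypothetical code: the losses, learning rate, and scalar weight are invented for illustration and appear in neither reference):

```python
# Illustrative only: one shared weight updated by the sum of a first
# gradient (from one task's loss) and a second gradient (from a second,
# competing task's loss), driving the combined loss down.

def task1_loss(w):
    return (w - 1.0) ** 2   # pulls the weight toward +1

def task2_loss(w):
    return (w + 1.0) ** 2   # pulls the weight toward -1

def gradient(f, w, eps=1e-6):
    # central-difference numerical gradient, adequate for this sketch
    return (f(w + eps) - f(w - eps)) / (2.0 * eps)

w = 4.0
lr = 0.1
for _ in range(100):
    g1 = gradient(task1_loss, w)   # first gradient
    g2 = gradient(task2_loss, w)   # second gradient
    w -= lr * (g1 + g2)            # update based on the combination

# The combined update settles where the competing pulls balance, at w = 0.
assert abs(w) < 1e-3
```

Because the two losses pull in opposite directions, neither task "wins"; the summed gradient converges to the compromise point, which is the sense in which a single update can serve two competing tasks.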
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA and DIERCKX before them, to include DIERCKX's cost function and higher gradient ordering, which would allow UCHIDA's program to adjust the input values of the neural networks. One would have been motivated to make such a combination in order to improve the robustness of the solution and account for dependence between assets by evaluating a model, as suggested by DIERCKX (US 20220373989 A1) (0164).

UCHIDA and DIERCKX do not teach corresponding to at least two competing tasks.

KIM teaches corresponding to at least two competing tasks (See e.g. [0156], For example, the parameters may be updated such that a loss value or a cost value obtained by the AI model is reduced or minimized during the learning process. Artificial neural networks may include deep neural networks (DNNs), for example, and without limitation, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generic Adversarial Networks (GANs) [two competing tasks]) (Examiner's notes: a GAN generates a fake image (task 1) then discriminates a fake image from a real image (task 2))

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX and KIM before them, to include KIM's joint set training and competing tasks, which would allow UCHIDA and DIERCKX's model to have improved accuracy and efficiency in training.
One would have been motivated to make such a combination in order to improve the quality of output data and increase the processing rate, as suggested by KIM (US 20210166346 A1) (0046).

Regarding claim 17, in the combination of UCHIDA, DIERCKX and KIM, UCHIDA teaches the processor of claim 16, wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values is performed (See e.g. [0024], updating the weights based on values obtained by adding first gradients of the weights of the first neural network that have been obtained based on backpropagation and the respective second gradients.)

UCHIDA and DIERCKX do not teach using adversarial training amongst the plurality of neural networks.

KIM teaches using adversarial training amongst the plurality of neural networks. (See e.g. [0156], Artificial neural networks may include deep neural networks (DNNs), for example, and without limitation, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generic Adversarial Networks (GANs) [adversarial training]) (See e.g. [0231], According to an embodiment, through joint training of the first DNN and the second DNN, parameters of the second DNN)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX and KIM before them, to include KIM's joint set training and competing tasks, which would allow UCHIDA and DIERCKX's model to have improved accuracy and efficiency in training. One would have been motivated to make such a combination in order to improve the quality of output data and increase the processing rate, as suggested by KIM (US 20210166346 A1) (0046).

Regarding claim 20, in the combination of UCHIDA, DIERCKX and KIM, UCHIDA teaches the processor of claim 16, wherein the at least one processor is comprised in at least one of: a system for performing simulation operations; (See e.g.
[0082], as a result of being executed by one or more processors of a computer) (See e.g. [0003], The neural network refers to a mathematical model for expressing characteristics of the brain of a living body by computer simulation.) Claims 3, 12, 14, 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of ZHAO (US 20220198339 A1). Regarding claim 3, UCHIDA, DIERCKX and KIM teach the method of claim 1. UCHIDA, DIERCKX and KIM do not teach training at least one first neural network of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the at least one first neural network and a second domain corresponding to a second dataset input to the at least one first neural network; and training at least one second neural network of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. ZHAO teaches training at least one first neural network of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the at least one first neural network and a second domain corresponding to a second dataset input to the at least one first neural network (See e.g. [0048], In some embodiments, the first processing unit [first neural network] 132 may include a linear regression model, a neural network, or the like, or any combination thereof.) (See e.g. [0029], The initial machine learning model may be trained based on the feature extraction unit, the first processing unit, [first neural network] the adversarial unit, and the second processing unit. 
During the training, the feature extraction unit may be trained to extract the commonalities of different domains as much as possible when extracting features to reduce the influence of differences between domains.) ZHAO further teaches training at least one second neural network of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. (See e.g. [0109], It should be noted that whether the training data is text data, image data, or any other forms of data (e.g., audio data), the purpose of the machine learning model training methods based on cross-domain data as described in the present disclosure is to training the training model) (See e.g. [0041], In some other embodiments, feature cross-domain transfer learning may be realized by learning a domain discriminator by minimizing the domain classification error that distinguishes source domain features from target domain features.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, and ZHAO before them, to include ZHAO’s processing unit, which would allow UCHIDA, DIERCKX and KIM’s program to classify features between different domains. One would have been motivated to make such a combination in order to increase the performance of training models by implementing cross-domain data, as suggested by ZHAO (US 20220198339 A1) (0002). Regarding claim 12, UCHIDA, DIERCKX, KIM and ZHAO: UCHIDA teaches the system of claim 11, wherein the values of the joint set of parameters are updated by: (See e.g. [Claim 7], wherein the training unit is further configured to update the weights based on values obtained by adding first gradients of the weights [parameters] of the first neural network that have been obtained based on backpropagation and the respective second gradients.) 
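As technical context for UCHIDA's quoted mechanism of updating weights from a combination of first and second gradients (and the "average" combination recited in claim 14), a minimal sketch follows; all gradient values, names, and the learning rate are hypothetical:

```python
# Hypothetical per-task gradients for one shared (joint) weight vector.
first_grads = [0.2, -0.4, 0.6]    # e.g., from backpropagation on a first network
second_grads = [0.4, 0.0, -0.2]   # e.g., the "respective second gradients"

def combine(g1, g2):
    # One natural combination (cf. claim 14) is the element-wise average.
    return [(a + b) / 2.0 for a, b in zip(g1, g2)]

def sgd_step(weights, grads, lr=0.1):
    # Update the joint set of parameters with the combined gradient.
    return [w - lr * g for w, g in zip(weights, grads)]

weights = sgd_step([1.0, 1.0, 1.0], combine(first_grads, second_grads))
# weights is approximately [0.97, 1.02, 0.98]
```

The same shape works whether the combination is a sum, an average, or a weighted mix; only `combine` changes.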
UCHIDA, DIERCKX and KIM do not teach [updating] one or more first parameters of one or more first neural networks of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; and [updating] one or more second parameters of one or more second neural networks of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. ZHAO teaches [updating] one or more first parameters of one or more first neural networks of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; and (See e.g. [0048], In some embodiments, the first processing unit [first neural network] 132 may include a linear regression model, a neural network, or the like, or any combination thereof.) (See e.g. [0029], The initial machine learning model may be trained based on the feature extraction unit, the first processing unit, [first neural network] the adversarial unit, and the second processing unit. During the training, the feature extraction unit may be trained to extract the commonalities of different domains as much as possible when extracting features to reduce the influence of differences between domains.) ZHAO further teaches [updating] one or more second parameters of one or more second neural networks of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. (See e.g. 
[0109], It should be noted that whether the training data is text data, image data, or any other forms of data (e.g., audio data), the purpose of the machine learning model training methods based on cross-domain data as described in the present disclosure is to training the training model) (See e.g. [0041], In some other embodiments, feature cross-domain transfer learning may be realized by learning a domain discriminator by minimizing the domain classification error that distinguishes source domain features from target domain features.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, and ZHAO before them, to include ZHAO’s processing unit, which would allow UCHIDA, DIERCKX and KIM’s program to classify features between different domains. One would have been motivated to make such a combination in order to increase the performance of training models by implementing cross-domain data, as suggested by ZHAO (US 20220198339 A1) (0002). Regarding claim 14, UCHIDA, DIERCKX, KIM and ZHAO: UCHIDA teaches the system of claim 12, wherein the combination includes an average of at least the first gradient and the second gradient. (See e.g. [0043], The training refers to an operation to appropriately update weights in the respective layers using an error between the output data from the output layer corresponding to input data and the correct answer label associated with the input data.) Regarding claim 15, UCHIDA, DIERCKX, KIM and ZHAO: UCHIDA teaches the system of claim 12, wherein the one or more processing units are further to perform one or more operations using at least one of the one or more first neural networks or the one or more second neural networks, the system comprised in at least one of: a system for performing simulation operations; (See e.g. 
[0082], as a result of being executed by one or more processors of a computer) (See e.g. [0003], The neural network refers to a mathematical model for expressing characteristics of the brain of a living body by computer simulation.) Regarding claim 19, UCHIDA, DIERCKX and KIM: UCHIDA teaches the processor of claim 16, wherein the one or more neural networks include a plurality of neural networks and the updating the one or more values of the one or more parameters includes: (See e.g. [Claim 7], wherein the training unit is further configured to update the weights based on values obtained by adding first gradients of the weights of [updating one or more values of one or more parameters] the first neural network that have been obtained based on backpropagation and the respective second gradients.) UCHIDA, DIERCKX and KIM do not teach [updating] one or more first parameters of one or more first neural networks of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; and [updating] one or more second parameters of one or more second neural networks of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. ZHAO teaches [updating] one or more first parameters of one or more first neural networks of the plurality of neural networks to generate a representation of one or more features that is invariant to a first domain corresponding to a first dataset input to the one or more first neural networks and a second domain corresponding to a second dataset input to the one or more first neural networks; and (See e.g. 
[0048], In some embodiments, the first processing unit [first neural network] 132 may include a linear regression model, a neural network, or the like, or any combination thereof.) (See e.g. [0029], The initial machine learning model may be trained based on the feature extraction unit, the first processing unit, [first neural network] the adversarial unit, and the second processing unit. During the training, the feature extraction unit may be trained to extract the commonalities of different domains as much as possible when extracting features to reduce the influence of differences between domains.) ZHAO further teaches [updating] one or more second parameters of one or more second neural networks of the plurality of neural networks to classify whether the representation corresponds to the first domain or the second domain. (See e.g. [0109], It should be noted that whether the training data is text data, image data, or any other forms of data (e.g., audio data), the purpose of the machine learning model training methods based on cross-domain data as described in the present disclosure is to training the training model) (See e.g. [0041], In some other embodiments, feature cross-domain transfer learning may be realized by learning a domain discriminator by minimizing the domain classification error that distinguishes source domain features from target domain features.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, and ZHAO before them, to include ZHAO’s processing unit, which would allow UCHIDA, DIERCKX and KIM’s program to classify features between different domains. 
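To make the domain-adversarial arrangement in ZHAO's cited passages easier to follow, here is a toy numeric sketch of the two opposing roles: a feature extractor pushed toward domain-invariant representations, and a domain classifier that tries to tell the domains apart. The sample values, threshold classifier, and mean-shift "extractor" are hypothetical simplifications, not ZHAO's actual model:

```python
# Two toy "domains" of 1-D features (e.g., source vs. target).
domain_a = [0.9, 1.1, 1.0]
domain_b = [-1.0, -0.8, -1.2]

def domain_classifier(x, threshold=0.0):
    # Second network's task: predict which domain a feature came from.
    return 0 if x > threshold else 1

def domain_accuracy(feats_a, feats_b):
    correct = sum(domain_classifier(x) == 0 for x in feats_a)
    correct += sum(domain_classifier(x) == 1 for x in feats_b)
    return correct / (len(feats_a) + len(feats_b))

def extract_invariant(x, mean):
    # First network's goal: map both domains to a shared representation
    # (here, crudely, by removing each domain's mean).
    return x - mean

# Raw features are perfectly separable by domain ...
assert domain_accuracy(domain_a, domain_b) == 1.0
# ... while "invariant" features defeat the domain classifier (chance level).
inv_a = [extract_invariant(x, 1.0) for x in domain_a]
inv_b = [extract_invariant(x, -1.0) for x in domain_b]
assert domain_accuracy(inv_a, inv_b) == 0.5
```

In a real domain-adversarial setup both parts are trained jointly; the point here is only that domain-invariant features drive the domain classifier toward chance accuracy.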
One would have been motivated to make such a combination in order to increase the performance of training models by implementing cross-domain data, as suggested by ZHAO (US 20220198339 A1) (0002). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of ZHAO (US 20220198339 A1), further in view of Wang (US 20220101476 A1). Regarding claim 4, UCHIDA, DIERCKX, KIM and ZHAO: ZHAO teaches the method of claim 3, wherein the first domain corresponds [to synthetic data] and the second domain corresponds [to real-world data.] (See e.g. [0039], In some embodiments, for a situation that manually labeling features in a large amount of data can result in high cost and time-consuming, data in a first domain with features that are labeled can be used as the source domain data, and data in a second domain that is different from the first domain with features that are unlabeled can be used as the target domain data.) UCHIDA, DIERCKX, KIM and ZHAO do not teach wherein the [first] domain corresponds to synthetic data and the [second] domain corresponds to real-world data. Wang teaches wherein the [first] domain corresponds to synthetic data and the [second] domain corresponds to real-world data. (See e.g. [0032], Sharing the cross-domain depth encoder between the real data and synthetic data domains, together with limiting feature size, encourages the encoder to learn a joint feature encoding of two domains and distill the similarities between the two domains.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, ZHAO and Wang before them, to include Wang’s synthetic and real-world data, which would allow UCHIDA, DIERCKX, KIM and ZHAO’s program to utilize a large dataset for training. 
One would have been motivated to make such a combination in order to increase the refinement of image acquisition data using synthetic and real domain adaptation training, as suggested by Wang (US 20220101476 A1) (0001). Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of ZHAO (US 20220198339 A1), further in view of NODA (US 20210209452 A1). Regarding claim 5, UCHIDA, DIERCKX, KIM and ZHAO teach the method of claim 3. UCHIDA, DIERCKX, KIM and ZHAO do not teach training, using one or more ground-truth labels assigned to the first dataset, at least one third neural network of the plurality of neural networks to classify the representation. NODA teaches training, using one or more ground-truth labels assigned to the first dataset, at least one third neural network of the plurality of neural networks to classify the representation. (See e.g. [0015], a ground truth label of the first translated data, the first inference result, and a ground truth label of the first domain data.) (See e.g. [0072], The third neural network 103 receives input of first domain data or first translated data.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, ZHAO and NODA before them, to include NODA’s ground-truth label, which would allow UCHIDA, DIERCKX, KIM, and ZHAO’s program to improve training with evaluation and validation. One would have been motivated to make such a combination in order to improve the generalization performance of the estimation network, as suggested by NODA (US 20210209452 A1) (0069). Regarding claim 13, UCHIDA, DIERCKX, KIM and ZHAO teach the system of claim 12. 
UCHIDA, DIERCKX, KIM and ZHAO do not teach wherein the one or more processing units are further to train, using one or more ground-truth labels assigned to the first dataset, one or more third neural networks to classify the representation. NODA teaches wherein the one or more processing units are further to train, using one or more ground-truth labels assigned to the first dataset, one or more third neural networks to classify the representation. (See e.g. [0015], a ground truth label of the first translated data, the first inference result, and a ground truth label of the first domain data.) (See e.g. [0072], The third neural network 103 receives input of first domain data or first translated data.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM, ZHAO and NODA before them, to include NODA’s ground-truth label, which would allow UCHIDA, DIERCKX, KIM, and ZHAO’s program to improve training with evaluation and validation. One would have been motivated to make such a combination in order to improve the generalization performance of the estimation network, as suggested by NODA (US 20210209452 A1) (0069). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of KWON (US 20200293896 A1). Regarding claim 7, UCHIDA, DIERCKX and KIM teach the method of claim 1. UCHIDA, DIERCKX and KIM do not teach wherein the first gradient is a first order gradient of the one or more cost functions and the second gradient is a second order gradient of the one or more cost functions. KWON teaches wherein the first gradient is a first order gradient of the one or more cost functions and the second gradient is a second order gradient of the one or more cost functions. (See e.g. 
[0078], The gradients are used to update the weights during offline training using a stochastic gradient descent method [order gradient]) (See e.g. [Claim 9], wherein the first gradient of the cost function is determined) (See e.g. [Claim 10], wherein the second gradient of the cost function is determined) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM and KWON before them, to include KWON’s order gradient, which would allow UCHIDA, DIERCKX and KIM’s program to optimize computational costs and enhance analysis. One would have been motivated to make such a combination in order to improve the model by decreasing the error rate, as suggested by KWON (US 20200293896 A1) (0003). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of DENLI (US 20200183047 A1). Regarding claim 8, UCHIDA, DIERCKX and KIM teach the method of claim 1. UCHIDA, DIERCKX and KIM do not teach wherein the one or more neural networks include one or more adversarial neural networks and the method further includes determining convergence of the one or more parameters of the one or more adversarial neural networks to a local Nash Equilibria. DENLI teaches wherein the one or more neural networks include one or more adversarial neural networks and the method further includes determining convergence of the one or more parameters of the one or more adversarial neural networks to a local Nash Equilibria. (See e.g. [0078], Specifically, FIG. 6A is a first example block diagram 600 of a conditional generative-adversarial neural network (CGAN) [adversarial neural networks] schema in which the input to the generative model G) (See e.g. 
[0080], This competition between G [adversarial neural networks] and D networks may converge at a local Nash equilibrium [a local Nash Equilibria] of Game Theory (or GAN convergences when the D and G weights do not change more 1% of its starting weight values) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM and DENLI before them, to include DENLI’s Nash Equilibria, which would allow UCHIDA, DIERCKX and KIM’s program to increase the framework for development of the system. One would have been motivated to make such a combination in order to make predictions on geophysical locations and data along with environmental impacts, as suggested by DENLI (US 20200183047 A1) (0004). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of TAKIMOTO (US 20230169334 A1). Regarding claim 9, UCHIDA, DIERCKX and KIM teach the method of claim 1. UCHIDA, DIERCKX and KIM do not teach wherein the one or more neural networks include a gradient reversal layer. TAKIMOTO teaches wherein the one or more neural networks include a gradient reversal layer. (See e.g. [0034], Here, the time series domain adaptation unit 102 according to Example 1 is achieved with a neural network model containing a gradient reversal layer (GRL) and one or more fully connected neural network layers.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM and TAKIMOTO before them, to include TAKIMOTO’s gradient reversal layer, which would allow UCHIDA, DIERCKX and KIM’s program to improve domain adaptation. 
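For reference, the gradient reversal layer (GRL) named in TAKIMOTO's cited paragraph is commonly implemented as an identity map on the forward pass whose backward pass negates (and optionally scales) the incoming gradient. A minimal, framework-free sketch follows; the class name and λ value are illustrative, not drawn from any cited reference:

```python
class GradientReversal:
    """Identity on the forward pass; negates (and scales) gradients on backward.

    The domain classifier's training signal reaches the feature extractor
    with its sign flipped, so the extractor learns domain-invariant features
    while the classifier learns to discriminate domains.
    """
    def __init__(self, lam=1.0):
        self.lam = lam  # scaling factor for the reversed gradient

    def forward(self, x):
        return x  # pass features through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reverse the gradient

grl = GradientReversal(lam=0.5)
assert grl.forward(2.0) == 2.0
assert grl.backward(0.4) == -0.2
```

In autodiff frameworks the same behavior is expressed as a custom-gradient operation rather than an explicit class, but the forward/backward contract is identical.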
One would have been motivated to make such a combination in order to increase estimation accuracy, as suggested by TAKIMOTO (US 20230169334 A1) (0003). Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over UCHIDA (US 20190294955 A1) in view of DIERCKX (US 20220373989 A1), further in view of KIM (US 20210166346 A1), further in view of Rostami (US 11448753 B2). Regarding claim 18, UCHIDA, DIERCKX and KIM: UCHIDA teaches the processor of claim 16, wherein the updating the one or more values (See e.g. [Claim 7], wherein the training unit is further configured to update the weights based on values obtained by adding first gradients of the weights of [updating one or more values] the first neural network that have been obtained based on backpropagation and the respective second gradients.) UCHIDA, DIERCKX and KIM do not teach transferring knowledge from a labeled source domain to an unlabeled target domain in a representation space learned by the one or more neural networks. Rostami teaches transferring knowledge from a labeled source domain to an unlabeled target domain in a representation space learned by the one or more neural networks. (See e.g. [Claim 10], A system for transferring learned knowledge from an electro-optical (EO) domain to a synthetic-aperture-radar (SAR) domain, the system comprising: … wherein the set of labeled data points in the SAR domain and a set of unlabeled data points in the SAR domain are used to align an EO probability distribution) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of UCHIDA, DIERCKX, KIM and Rostami before them, to include Rostami’s transference of knowledge labels, which would allow UCHIDA, DIERCKX and KIM’s program to reduce training time and build upon pre-existing knowledge. 
One would have been motivated to make such a combination in order to make predictions on unlabeled datapoints using pre-labeled data, as suggested by Rostami (US 11448753 B2) (C3:L10 – 29). Conclusion: THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ALLMAN THOMPSON, whose telephone number is (571) 272-3671. The examiner can normally be reached Monday - Thursday, 6 a.m. - 3 p.m. ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.A.T./Examiner, Art Unit 2125 /KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125

Prosecution Timeline

May 27, 2022: Application Filed
Aug 07, 2025: Non-Final Rejection (§101, §103, §112)
Nov 05, 2025: Response Filed
Jan 23, 2026: Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547932
MACHINE LEARNING-ASSISTED MULTI-DOMAIN PLANNING
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
