Prosecution Insights
Last updated: April 19, 2026
Application No. 17/520,448

NOVEL METHOD OF TRAINING A NEURAL NETWORK

Final Rejection: §101, §102
Filed
Nov 05, 2021
Examiner
JONES, CHARLES JEFFREY
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Nvidia Corporation
OA Round
2 (Final)
Grant Probability: 27% (At Risk)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 93%

Examiner Intelligence

Grants only 27% of cases.

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +65.9% among resolved cases with interview
Typical Timeline: 4y 2m average prosecution; 27 applications currently pending
Career History: 42 total applications across all art units
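The headline figures above follow from simple arithmetic on the career data. A minimal sketch, assuming the dashboard computes the allow rate as granted/resolved and reports "vs TC avg" and the interview lift as differences in percentage points; the 55.3% Tech Center average is back-computed from the panel, not taken from USPTO data, and the rounded inputs here yield +66 points rather than the panel's +65.9%:

```python
# Hypothetical reconstruction of the examiner panel's metrics.
# Assumption: rates are simple ratios; deltas are percentage-point differences.
granted, resolved = 4, 15

allow_rate = round(granted / resolved, 2)      # 4/15 = 0.2667 -> 0.27
tc_average = 0.553                             # implied TC average (27% + 28.3 pts)
with_interview = 0.93                          # allow rate with interview

vs_tc_avg = allow_rate - tc_average            # about -0.283 (-28.3 points)
interview_lift = with_interview - allow_rate   # 0.66 (+66 points)

print(f"Career allow rate: {allow_rate:.0%}")
print(f"vs TC average:     {vs_tc_avg:+.1%}")
print(f"Interview lift:    {interview_lift:+.1%}")
```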

Statute-Specific Performance

§101: 34.5% (-5.5% vs TC avg)
§103: 29.1% (-10.9% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 15 resolved cases

Office Action

§101, §102
DETAILED ACTION

This action is responsive to the application/amendment filed on 09/30/2025. Claims 1-31 are pending in the case. Claims 1, 9, 16, and 23 are independent claims. Claims 1-2, 6, 9, 12, 16-24, 26, and 28 are amended.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/19/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Previous Office Action: Claim Objections and Ineligible Claims

Applicant's amendments have overcome the previous objections concerning claim 21 and the non-statutory subject matter rejection of claims 9-22.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-31 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea or mental process.

Regarding claim 1:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "calculate respective task descriptors based, at least in part, on respective inputs to a neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).
The claim also recites "infer respectively different information for the respective input," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "processor": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).
(b) "one or more circuits": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).
(c) "selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (b), and (c) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f). The additional elements (a), (b), and (c) in claim 1, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
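For context on the mechanism the rejection characterizes as an abstract idea, the quoted limitation pattern (compute a task descriptor from each input, then selectively activate different portions of the network to infer different information) can be sketched as input-conditional routing between sub-networks. This is a hypothetical illustration only: the function names, the two-expert layout, and the thresholding "descriptor" are invented here and do not come from the application's disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_descriptor(x: np.ndarray) -> int:
    # "calculate respective task descriptors based, at least in part, on
    # respective inputs": here, a trivial statistic of the input selects
    # between two tasks (assumption for illustration).
    return int(x.mean() > 0)

# Two "portions" of the network: independent weight matrices (experts).
experts = [rng.standard_normal((4, 3)) for _ in range(2)]

def infer(x: np.ndarray) -> np.ndarray:
    # "selectively activate ... different portions of the neural network":
    # only the portion named by the descriptor is evaluated.
    d = task_descriptor(x)
    return np.tanh(experts[d] @ x)

out_a = infer(np.array([1.0, 2.0, 3.0]))     # routed to expert 1
out_b = infer(np.array([-1.0, -2.0, -3.0]))  # routed to expert 0
```

The design point at issue in the claim is that each input only exercises the sub-network its descriptor selects, so different inputs yield "respectively different information" from different portions of one model.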
Regarding claim 2:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "infer the respectively different information based, at least in part, on one or more features of input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information that are derived from a separate set of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "different portions comprise respective sets of neurons of the neural network": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).
(b) "to be input into the neural network": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional elements (a) and (b) in claim 2, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 3:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "to infer the respectively different information," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid.
The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C). The claim also recites "compute one or more data values … the one or more data values indicating a state of the one or more convolutional layers," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "wherein the neural network is a neural coding network comprising one or more convolutional layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional element (a) in claim 3, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 4:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "wherein the respectively different information comprises a first information computed… based, at least in part, on a first set of image data input… and a second information computed… based, at least in part, on a second set of image data input," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information derived from separate images. See MPEP 2106.04(a)(2)(III)(C).
Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "neural network": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f). The additional element (a) in claim 4, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 5:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "computing at least a set of state data, a set of error data, and a set of corrected state data to be used to infer the respectively different information," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "neural network comprises one or more neural coding blocks": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
The additional element (a) in claim 5, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 6:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "compute a set of data comprising a respective task descriptor to indicate a respective different portions based of the neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "neural network comprises one or more layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional element (a) in claim 6, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 7:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "of one or more computational operations…to generate state data for the one or more layers," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).
Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "the neural network comprises at least one block …to be performed on one or more layers of the neural network": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional element (a) in claim 7, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 8:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim does not contain elements that would warrant a Step 2A Prong 1 analysis.

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "wherein the neural network comprises one or more convolutional layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
The additional element (a) in claim 8, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 9:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "calculate respective task descriptors based, at least in part, on respective inputs to a neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim also recites "infer respectively different information for the respective input," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "comprising non-transitory memory to store instructions that": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).
(b) "as a result of execution by one or more processors": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).
(c) "selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (b), and (c) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component.
See MPEP § 2106.05(f). The additional elements (a), (b), and (c) in claim 9, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 10:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "infer the respectively different information based on a first set of input image data and a second set of input image data, the first set of input image data comprising a first type of information and the second set of input image data comprising a second type of information," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information that are derived from a separate set of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: The claim does not include any additional element, when considered separately and in combination, that amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 11:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "computing at least a set of data representing a predicted state of the neural coding block for each input to the neural coding block," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).
The claim also recites "the predicted state to be used to infer the respectively different information," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information based on a calculation. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "neural network comprises one or more neural coding blocks": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional element (a) in claim 11, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 12:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "compute a set of data comprising a respective task descriptor to indicate a respective different portion based of the neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "neural network comprises one or more layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional element (a) in claim 12, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 13:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "one or more blocks of computational operations… based, at least on part, on a predicted state computed," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim also recites "and an error representing the difference between the predicted state and a correct state for the respectively different information," which is an abstract idea (Mathematical Relationships; see MPEP 2106.04(a)(2)(I)(A)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "to correct one or more states of the neural network": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).
(b) "by the neural network": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional element (b) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f). The additional elements (a) and (b) in claim 13, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 14:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "infer a first output…based, at least in part, on a first input to the neural network, and a second output…based, at least in part, on a second input to the neural network," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting outputs based on inputs. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "using a first portion of the neural network … using a second portion of the neural network": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f). The additional element (a) in claim 14, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 15:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "infer the respectively different information, the respectively different information comprising one or more features of image data input to the neural network," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information from an image. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "wherein the neural network is a neural coding network comprising one or more convolutional layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
The additional element (a) in claim 15, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 16:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "calculate respective task descriptors based, at least in part, on respective inputs to a neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)). The claim also recites "infer respectively different information for the respective input," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "non-transitory machine-readable medium having stored thereon one or more instructions which if performed by one or more processors, cause the one or more processors": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).
(b) "selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to": merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP § 2106.05(f).
The additional elements (a) and (b) in claim 16, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 17:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "select respectively different sets of neurons of the neural network," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user choosing a portion of the neural network to operate. See MPEP 2106.04(a)(2)(III)(C). The claim also recites "infer the respectively different information based, at least in part, on one or more features of input data to be input into the neural network," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information from a set of data. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: The claim does not include any additional element, when considered separately and in combination, that amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 18:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "compute a set of data comprising information to indicate one or more neurons of the neural network," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).
The claim also recites "to infer the respectively different information," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: The claim does not include any additional element, when considered separately and in combination, that amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 19:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "to compute a set of data comprising information to indicate the one or more portions based, at least in part, on each of the respectively different information," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "wherein the neural network comprises one or more layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
The additional element (a) in claim 19, when considered separately and in combination with the other limitations, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 20:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "one or more computational operations to be performed …to generate state data…," which is an abstract idea (Mathematical Calculations; see MPEP 2106.04(a)(2)(I)(C)).

Subject Matter Eligibility Analysis, Step 2A Prong 2:
(a) "...to be performed on one or more layers of the neural network…": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).
(b) "…for the one or more layers": merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). The additional elements (a) and (b) in claim 20, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
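Several of the dependent claims in this action (claims 5, 13, and 21) recite computing a predicted state, an error representing the difference between the predicted state and a correct state, and a corrected state. As a rough sketch, that pattern matches a generic predictive-coding update; the single dense layer, tanh nonlinearity, and step size below are illustrative assumptions and not the application's actual disclosure.

```python
import numpy as np

def coding_block_step(state, weights, target, lr=0.1):
    """One hypothetical neural-coding-block update."""
    predicted = np.tanh(weights @ state)   # set of (predicted) state data
    error = target - predicted             # set of error data
    corrected = predicted + lr * error     # set of corrected state data
    return predicted, error, corrected

state = np.array([0.5, -0.2])
weights = np.eye(2)                        # identity layer for a simple check
target = np.array([0.3, -0.1])             # "correct state"
predicted, error, corrected = coding_block_step(state, weights, target)
```

With an identity weight matrix, the predicted state is just tanh of the input state, which makes the error and corrected-state terms easy to verify by hand.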
Regarding claim 21:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites compute a set of data representing a predicted state for each layer of one or more neural coding blocks of the neural network, which is an abstract idea (Mathematical Calculations (see MPEP 2106.04(a)(2)(I)(C))). The claim also recites the predicted state usable to infer the respectively different information which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting a set of information based on a calculation. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) further comprising instructions that, if performed by the one or more processors, cause the one or more processors (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Additional element (a) in claim 21, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 22:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim does not contain elements that would warrant a Step 2A Prong 1 analysis.
Subject Matter Eligibility Analysis Step 2A Prong 2: (a) wherein the respectively different information comprises a first type of image information inferred by the neural network using a first portion of the neural network… a second type of information inferred by the neural network using a second portion of the neural network (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional element (a) in claim 22, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 23:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites calculate respective task descriptors based, at least in part, on respective inputs to a neural network, which is an abstract idea (Mathematical Calculations (see MPEP 2106.04(a)(2)(I)(C))). The claim also recites infer respectively different information for the respective input which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).
Subject Matter Eligibility Analysis Step 2A Prong 2: (a) selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Additional element (a) in claim 23, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 24:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites selecting the different portions based, at least in part, on one or more features of input data to be input into the neural network which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user choosing a portion of the neural network to operate based on sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) the different portions comprising a set of neurons to (merely recites a generic computer on which to perform the abstract idea, e.g.
"apply it on a computer" (see MPEP 2106.05(f))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Additional element (a) in claim 24, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 25:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites compute state data based on one or more inputs to the neural network, the state data usable to infer the respectively different information, which is an abstract idea (Mathematical Calculations (see MPEP 2106.04(a)(2)(I)(C))).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) the neural network comprises one or more neural coding blocks comprising one or more convolutional layers (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
Additional element (a) in claim 25, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 26:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites compute a set of data comprising information to indicate the different portions based, at least in part, on the respective task descriptor for the respectively different information, which is an abstract idea (Mathematical Calculations (see MPEP 2106.04(a)(2)(I)(C))).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) one or more layers (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional element (a) in claim 26, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
Regarding claim 27:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites wherein the respectively different information comprises a first type of information inferred … and a second type of information inferred which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) by the neural network using a first set of neurons of the neural network… by the neural network using a second set of neurons of the neural network (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Additional element (a) in claim 27, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 28:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites calculating a set of data values to indicate the different portions based, at least in part, on input data to the neural network, which is an abstract idea (Mathematical Calculations (see MPEP 2106.04(a)(2)(I)(C))).
The claim also recites infer the respectively different information based, at least in part, on the input data which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information that are derived from a separate set of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) the one or more different portions comprising one or more neurons of one or more layers of the neural network to be used to (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional element (a) in claim 28, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 29:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites inferring a first output comprising a first of the respectively different information based, at least in part, on a first input comprising a first type of information and inferring a second output comprising a second of the respectively different information based, at least in part, on a second input comprising a second type of information which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid.
The limitations encompass a user predicting sets of information derived from separate sets of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis Step 2B: The claim does not include any additional element that, considered separately and in combination, amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 30:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim recites infer the respectively different information, the respectively different information comprising one or more features of image data input to the neural network which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user predicting sets of information from an image. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) wherein the neural network is a neural coding network comprising one or more convolutional layers (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h))).

Subject Matter Eligibility Analysis Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because the limitation merely specifies a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)).
Additional element (a) in claim 30, considered separately and in combination, does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 31:

Subject Matter Eligibility Analysis Step 2A Prong 1: The claim does not contain elements that would warrant a Step 2A Prong 1 analysis.

Subject Matter Eligibility Analysis Step 2A Prong 2: (a) training a first portion of the neural network based, at least in part, on a first set of data…training a second portion of the neural network based, at least in part, on a second set of data (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f))); (b) the first set of data comprises a first type of information of the respectively different information and the second set of data comprises a second type of information of the respectively different information (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f))).

Subject Matter Eligibility Analysis Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea, because the limitations amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Additional elements (a) and (b) in claim 31, considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
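For orientation before the prior-art analysis: the independent claims recite computing a task descriptor from each input and using it to selectively activate different portions of a neural network. A minimal, illustrative sketch of that pattern follows; the descriptor statistics, the gating rule, and every name in it are hypothetical placeholders for explanation only, taken neither from the application nor from the cited art.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: one shared layer feeding two selectable "portions" (heads).
W_shared = rng.normal(size=(8, 4))
W_heads = rng.normal(size=(2, 4, 3))  # portion 0 and portion 1

def task_descriptor(x):
    # Hypothetical descriptor: simple statistics computed from the input.
    return np.array([x.mean(), x.std()])

def select_portion(d):
    # Hypothetical gating rule: low-mean inputs route to portion 0.
    return 0 if d[0] < 0.0 else 1

def infer(x):
    h = np.maximum(x @ W_shared, 0.0)   # shared ReLU layer
    idx = select_portion(task_descriptor(x))
    return idx, h @ W_heads[idx]        # only the selected portion is evaluated

idx_a, out_a = infer(np.full(8, -1.0))  # descriptor mean < 0 -> portion 0
idx_b, out_b = infer(np.full(8, +1.0))  # descriptor mean > 0 -> portion 1
```

Under the Step 2A characterization above, the descriptor computation is the alleged mathematical calculation and the routing is the alleged "apply it on a computer" step; the sketch only makes the claim language concrete, not the examiner's mapping.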
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-31 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Abdulnabi et al. (Multi-Task CNN Model for Attribute Prediction), hereinafter Abdulnabi.

Regarding claim 1, Abdulnabi teaches a processor (Abdulnabi, Page 1950, Paragraph 4, “The first approach is more or less applicable depending on the available resources (CPU/GPU and Memory)”) comprising: one or more circuits to: calculate respective task descriptors based, at least in part, on respective inputs to a neural network (Abdulnabi, Page 1952, Col. 2, Paragraph 1, “After the forward pass in all of the CNN models, the features generated from the last convolution layers will be fed into our joint MTL loss layer…the weight parameter matrix learned in the loss layer will be decomposed into a latent task matrix and a combination matrix”, where a task descriptor is considered a vector, and the latent matrix L operating directly on CNN features extracted from input images corresponds to calculating a task descriptor based on respective inputs (See also Page 1955, Algorithm 1 and Col.
2, Paragraph 2, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”)) and selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input (Abdulnabi, Page 1952, Figure 2 description, “…Each CNN will predict one binary attribute”, where each CNN predicting a binary attribute is considered selectively activating, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input, as the latent matrix L is part of the CNN-based architecture and is shared across all CNNs to form task-specific classifiers (See also Page 1952, Col. 2, Paragraph 1, “The latent matrix can be seen as a shared layer between all the CNN models; in other words, the latent layer serves as a shared, fully-connected layer”)).

Regarding claim 2, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the different portions comprise respective sets of neurons of the neural network to infer the respectively different information based, at least in part, on one or more features of input data to be input into the neural network (Abdulnabi, Page 1952, Figure 2 description, “the input image (left) with attribute labels information is fed into the model.
Each CNN will predict one binary attribute”, where the information being inferred from the input to the neural network is considered inferring respectively different information based on input data that is input into the neural network).

Regarding claim 3, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the neural network is a neural coding network comprising one or more convolutional layers to compute one or more data values based, at least in part, on the respectively different information, the one or more data values indicating a state of the one or more convolutional layers where the state is to be used to infer the respectively different information (Abdulnabi, Page 1952, Figure 2 description, “the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute”, where CNN-1 to CNN-M are considered CNNs with convolutional layers that compute one or more data values based on the predicted binary attributes (respectively different information), and the use of those values with the shared L to predict the attributes is considered indicating a state that is used to infer respectively different information).

Regarding claim 4, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the respectively different information comprises a first information computed by the neural network based, at least in part, on a first set of image data input to the neural network, and a second information computed by the neural network based, at least in part, on a second set of image data input to the neural network (Abdulnabi, Page 1952, Figure 2 description, “the input image (left) with attribute labels information is fed into the model.
Each CNN will predict one binary attribute”, where the image used is considered a first and second set of image data input and the binary attribute from each CNN is considered respectively different information).

Regarding claim 5, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more neural coding blocks, each neural coding block computing at least a set of state data (Abdulnabi, Page 1955, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”, where the feature maps and weights that CNNs inherently have are considered a set of state data and a forward pass is considered computing), a set of error data (Abdulnabi, Page 1955, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”, where the loss layer is considered a set of error data as it contains an error/loss function), and a set of corrected state data to be used to infer the respectively different information (Abdulnabi, Page 1955, “After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix, where each column will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input”, where the updated weights during training make the neural network better at predicting, which is considered corrected state data to be used to infer respective information (binary attributes)).

Regarding claim 6, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more layers (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows multiple layers Conv1-Conv5 that compute on the input image (a set of data) using the neural network (one or more portions) to infer a binary
attribute (a respective different information)) to compute a set of data comprising a respective task descriptor to indicate respective different portions of the neural network (Abdulnabi, Page 1955, “After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix, where each column will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input”, where the updated weights during training make the neural network better at predicting, which is considered corrected state data to be used to infer respective information (binary attributes)).

Regarding claim 7, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the neural network comprises at least one block of one or more computational operations to be performed on one or more layers of the neural network to generate state data for the one or more layers (Abdulnabi, Page 1955, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”, where training using the forward pass and loss layer is considered computational operations performed on layers of a neural network that generate the state data of the layers of the CNN after the forward pass is completed (See Abdulnabi, Page 1952, Col.
2, Paragraph 1, “After the forward pass in all of the CNN models…the joint loss layer and sharing the visual knowledge, each CNN model will take back its specific parameters through backpropagation in the backward pass”)).

Regarding claim 8, Abdulnabi teaches the processor of claim 1 (and thus the rejection of claim 1 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more convolutional layers (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows convolutional layers Conv1-Conv5).

Regarding claim 9, Abdulnabi teaches a system, comprising non-transitory memory to store instructions that, as a result of execution by one or more processors (Abdulnabi, Page 1950, Paragraph 4, “The first approach is more or less applicable depending on the available resources (CPU/GPU and Memory)”), cause the system to calculate respective task descriptors based, at least in part, on respective inputs to a neural network (Abdulnabi, Page 1952, Col. 2, Paragraph 1, “After the forward pass in all of the CNN models, the features generated from the last convolution layers will be fed into our joint MTL loss layer…the weight parameter matrix learned in the loss layer will be decomposed into a latent task matrix and a combination matrix”, where a task descriptor is considered a vector, and the latent matrix L operating directly on CNN features extracted from input images corresponds to calculating a task descriptor based on respective inputs (See also Page 1955, Algorithm 1 and Col.
2, Paragraph 2, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”)) and selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input (Abdulnabi, Page 1952, Figure 2 description, “…Each CNN will predict one binary attribute”, where each CNN predicting a binary attribute is considered selectively activating, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input, as the latent matrix L is part of the CNN-based architecture and is shared across all CNNs to form task-specific classifiers (See also Page 1952, Col. 2, Paragraph 1, “The latent matrix can be seen as a shared layer between all the CNN models; in other words, the latent layer serves as a shared, fully-connected layer”)).

Regarding claim 10, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network is to infer the respectively different information based on a first set of input image data and a second set of input image data, the first set of input image data comprising a first type of information and the second set of input image data comprising a second type of information (Abdulnabi, Page 1952, Figure 2 description, “the input image (left) with attribute labels information is fed into the model.
Each CNN will predict one binary attribute”, where the image used is considered a first and second set of image data input and the binary attribute from each CNN is considered respectively different information).

Regarding claim 11, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more neural coding blocks, each neural coding block computing at least a set of data representing a predicted state of the neural coding block for each input to the neural coding block (Abdulnabi, Page 1955, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”, where the training of the CNN and updating of weights during back-propagation is considered computing a set of data representing a predicted state of the neural coding block based on an input), the predicted state to be used to infer the respectively different information (Abdulnabi, Page 1955, “After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix, where each column will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input”, where the updated weights during training make the neural network better at predicting, which is considered a predicted state to be used to infer (i.e.
predict) respective information (binary attributes)).

Regarding claim 12, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more layers (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows multiple layers Conv1-Conv5 that compute on the input image (a set of data) using the neural network (one or more portions) to infer a binary attribute (a respective different information)) to compute a set of data comprising a respective task descriptor to indicate respective different portions of the neural network (Abdulnabi, Page 1955, “After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix, where each column will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input”, where the updated weights during training make the neural network better at predicting, which is considered corrected state data to be used to infer respective information (binary attributes)).

Regarding claim 13, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more blocks of computational operations to correct one or more states of the neural network (Abdulnabi, Page 1955, “After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix, where each column will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input”, where the updated weights during training make the neural network better at predicting, which is considered corrected state data used to infer respective information (binary attributes)) based, at least in part, on a predicted state computed by the neural network (Abdulnabi, Page 1955, “During a training epoch, the
forward pass will generate the input for the multi-task loss layer from all the CNN models”, where the output of the forward pass is considered a predicted state computed by the neural network) and an error (Abdulnabi, Page 1955, “During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models”, where the loss layer is considered a set of error data as it contains an error/loss function) representing the difference between the predicted state and a correct state for the respectively different information (Abdulnabi, Page 1954, Col. 1, Equation 2, where the predicted state refers to the model’s predicted output and the correct state refers to the actual label for the attribute, and the objective function compares the correct state, Y_im, to the predicted output, (L s_m)^T X_im, for attribute m and training sample i, by penalizing the model if the predicted outcome is on the wrong side of the margin).

Regarding claim 14, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network is to infer a first output using a first portion of the neural network based, at least in part, on a first input to the neural network, and a second output using a second portion of the neural network based, at least in part, on a second input to the neural network (Abdulnabi, Page 1952, Figure 2 description, “the input image (left) with attribute labels information is fed into the model.
Each CNN will predict one binary attribute" where group1 and groupG are considered a first and second portion of a neural network, and the outputs are a respective first and second output (binary attributes) based on an input image, which is considered a first and second input).

Regarding claim 15, Abdulnabi teaches the system of claim 9 (and thus the rejection of claim 9 is incorporated). Abdulnabi teaches wherein the neural network is a neural coding network comprising one or more convolutional layers to infer the respectively different information (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows convolutional layers Conv1-Conv5), the respectively different information comprising one or more features of image data input to the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where the model predicting a binary attribute of the input image is considered inferring one or more features of the input image).

Regarding claim 16, Abdulnabi teaches a non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors (Abdulnabi, Page 1950, Paragraph 4, "The first approach is more or less applicable depending on the available resources (CPU/GPU and Memory)"), to at least: calculate respective task descriptors based, at least in part, on respective inputs to a neural network (Abdulnabi, Page 1952, Col. 2, Paragraph 1, "After the forward pass in all of the CNN models, the features generated from the last convolution layers will be fed into our joint MTL loss layer…the weight parameter matrix learned in the loss layer will be decomposed into a latent task matrix and a combination matrix" where a task descriptor is considered a vector, and the latent matrix L operating directly on CNN features extracted from input images corresponds to calculating a task descriptor based on respective inputs (see also Page 1955, Algorithm 1 and Col. 2, Paragraph 2, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models")); and selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input (Abdulnabi, Page 1952, Figure 2 description, "…Each CNN will predict one binary attribute" where each CNN predicting a binary attribute is considered selectively activating, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input, as the latent matrix L is part of the CNN-based architecture and is shared across all CNNs to form task-specific classifiers (see also Page 1952, Col. 2, Paragraph 1, "The latent matrix can be seen as a shared layer between all the CNN models; in other words, the latent layer serves as a shared, fully-connected layer")).

Regarding claim 17, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches further comprising instructions that, if performed by the one or more processors, cause the one or more processors to select respectively different sets of neurons of the neural network to infer the respectively different information based, at least in part, on one or more features of input data to be input into the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where using the model is considered selecting neurons for information to be inferred based on the image/data that is input into the neural network, which is considered inferring respectively different information based on input data, and where the model predicting a binary attribute of the input image/data is considered inferring one or more features of the input image/data).

Regarding claim 18, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches further comprising instructions that, if performed by the one or more processors, cause the one or more processors to compute a set of data comprising information to indicate one or more neurons of the neural network to infer the respectively different information.
(Abdulnabi, Page 1955, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models" where a forward pass is considered computing a set of data with one or more neurons, and the output of a forward pass is considered producing inferred information.)

Regarding claim 19, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more layers to compute a set of data comprising information to indicate the one or more portions based, at least in part, on each of the respectively different information (Abdulnabi, Page 1955, "After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix , where each column in will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input" where the updated weights during training make the neural network better at predicting, which is considered more corrected state data to be used to infer respective information (binary attributes)).

Regarding claim 20, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches wherein the neural network comprises at least one block of one or more computational operations to be performed on one or more layers of the neural network to generate state data for the one or more layers (Abdulnabi, Page 1955, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models" where training using the forward pass and loss layer is considered computational operations performed on layers of a neural network that generate the state data of layers of the CNN after the forward pass is completed (see Abdulnabi, Page 1952, Col. 2, Paragraph 1, "After the forward pass in all of the CNN models…the joint loss layer and sharing the visual knowledge, each CNN model will take back its specific parameters through backpropagation in the backward pass")).

Regarding claim 21, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches further comprising instructions that, if performed by the one or more processors, cause the one or more processors to compute a set of data representing a predicted state for each layer of one or more neural coding blocks of the neural network (Abdulnabi, Page 1955, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models" where training the CNN and updating weights during back-propagation is considered computing a set of data representing a predicted state of the neural coding block based on an input), the predicted state usable to infer the respectively different information (Abdulnabi, Page 1955, "After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix , where each column in will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input" where the updated weights during training make the neural network better at predicting, which is considered predicted state data to be used to infer (i.e., predict) respective information (binary attributes)).

Regarding claim 22, Abdulnabi teaches the machine-readable medium of claim 16 (and thus the rejection of claim 16 is incorporated). Abdulnabi teaches wherein the respectively different information comprises a first type of image information inferred by the neural network using a first portion of the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-1 is considered a first portion of a neural network which infers image information of Attribute 1) and a second type of information inferred by the neural network using a second portion of the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-M is considered a second portion of a neural network which infers a second type of information of Attribute M).

Regarding claim 23, Abdulnabi teaches a method comprising: calculate respective task descriptors based, at least in part, on respective inputs to a neural network (Abdulnabi, Page 1952, Col. 2, Paragraph 1, "After the forward pass in all of the CNN models, the features generated from the last convolution layers will be fed into our joint MTL loss layer…the weight parameter matrix learned in the loss layer will be decomposed into a latent task matrix and a combination matrix" where a task descriptor is considered a vector, and the latent matrix L operating directly on CNN features extracted from input images corresponds to calculating a task descriptor based on respective inputs (see also Page 1955, Algorithm 1 and Col.
2, Paragraph 2, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models")); and selectively activate, based, at least in part, on the calculated respective task descriptor, different portions of the neural network to infer respectively different information for the respective input (Abdulnabi, Page 1952, Figure 2 description, "…Each CNN will predict one binary attribute" where each CNN predicting a binary attribute is considered selectively activating, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input, as the latent matrix L is part of the CNN-based architecture and is shared across all CNNs to form task-specific classifiers (see also Page 1952, Col. 2, Paragraph 1, "The latent matrix can be seen as a shared layer between all the CNN models; in other words, the latent layer serves as a shared, fully-connected layer")).

Regarding claim 24, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches further comprising selecting the different portions based, at least in part, on one or more features of input data to be input into the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where using the model is considered selecting neurons for information to be inferred based on the image/data that is input into the neural network, and where the model predicting a binary attribute of the input image/data is considered inferring one or more features of the input image/data), the different portions comprising a set of neurons to infer the respectively different information (Abdulnabi, Page 1955, "During a training epoch, the forward pass will generate the input for the multi-task loss layer from all the CNN models" where a forward pass is considered computing a set of data with one or more neurons, and the output of a forward pass is considered producing inferred information).

Regarding claim 25, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches wherein the neural network comprises one or more neural coding blocks comprising one or more convolutional layers to compute state data based on one or more inputs to the neural network (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows convolutional layers Conv1-Conv5), the state data usable to infer the respectively different information (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-1 to CNN-M are considered CNNs with convolutional layers that compute one or more values based on predicting binary attributes (respectively different information), and the use of those values with the shared L to predict attributes is considered state data used to infer respectively different information).

Regarding claim 26, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches further comprising one or more layers to compute a set of data comprising information to indicate the different portions based, at least in part, on the respective task descriptor for the respectively different information (Abdulnabi, Page 1955, "After optimizing (2) using the proposed algorithm III-D, the output is the overall model weight matrix , where each column in will be dedicated to its specific corresponding CNN model and is taken back in the backward pass alongside the gradients with respect to its input" where the updated weights during training make the neural network better at predicting, which is considered more corrected state data to be used to infer respective information (binary attributes)).

Regarding claim 27, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches wherein the respectively different information comprises a first type of information inferred by the neural network using a first set of neurons of the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-1 is considered a first set of neurons of a neural network which infers image information of Attribute 1) and a second type of information inferred by the neural network using a second set of neurons of the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-M is considered a second set of neurons of a neural network which infers a second type of information of Attribute M).

Regarding claim 28, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches further comprising calculating a set of data values to indicate the different portions based, at least in part, on input data to the neural network, the different portions comprising one or more neurons of one or more layers of the neural network to be used to infer the respectively different information based, at least in part, on the input data (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where CNN-1 to CNN-M are considered CNNs with convolutional layers that compute one or more data values based on predicting binary attributes (respectively different information), and the use of those values with the shared L to predict the attributes is considered indicating a state used to infer respectively different information).

Regarding claim 29, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches further comprising inferring a first output comprising a first of the respectively different information based, at least in part, on a first input comprising a first type of information and inferring a second output comprising a second of the respectively different information based, at least in part, on a second input comprising a second type of information (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model. Each CNN will predict one binary attribute" where the outputs of group1 and groupG are considered a first and second output of a neural network, and the respective first and second outputs are based on inferring different information (different binary attributes) of an input image, which is considered a first and second input).

Regarding claim 30, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches wherein the neural network is a neural coding network comprising one or more convolutional layers to infer the respectively different information (Abdulnabi, Page 1952, Figure 2, where Figure 2 shows convolutional layers Conv1-Conv5), the respectively different information comprising one or more features of image data input to the neural network (Abdulnabi, Page 1952, Figure 2 description, "the input image (left) with attribute labels information is fed into the model.
Each CNN will predict one binary attribute" where the model predicting a binary attribute of the input image is considered inferring one or more features of the input image).

Regarding claim 31, Abdulnabi teaches the method of claim 23 (and thus the rejection of claim 23 is incorporated). Abdulnabi teaches further comprising: training a first portion of the neural network based, at least in part, on a first set of data; training a second portion of the neural network based, at least in part, on a second set of data; and the first set of data comprises a first type of information of the respectively different information and the second set of data comprises a second type of information of the respectively different information (Abdulnabi, Page 1950, Col. 1, Paragraph 4, "we train multi-task CNN models together through MTL, where each CNN model is dedicated to learning one binary attribute" where CNN models being dedicated to learning one binary attribute is considered training a first and second portion of a neural network on a first and second set of data to infer a first and second type of information, respectively).

Response to Arguments

Applicant's arguments filed 09/30/2025 have been fully considered but they are not persuasive. A breakdown of the arguments can be found below.

103: Applicant appears to argue that Abdulnabi does not teach "calculate respective task descriptors based, at least in part, on respective inputs to a neural network; and selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input," and cites that Abdulnabi processes the same image through each CNN in parallel and outputs a complete set of attributes for each input image.
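As context for the claim 31 mapping and the dispute below, the cited training scheme (several linear attribute heads, each dedicated to one binary attribute, updated jointly during one pass over the data) can be sketched as follows. This is a minimal illustration, not Abdulnabi's actual objective: the hinge loss stands in for the margin-based Equation 2, and all dimensions, variable names, and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 3                        # feature dim, number of binary attributes (illustrative)
W = np.zeros((d, m))               # one linear head ("portion" of the network) per attribute

def hinge_grad(x, y, w):
    # Subgradient of max(0, 1 - y * (w @ x)) with respect to w:
    # penalize only when the prediction is on the wrong side of the margin
    return -y * x if y * (w @ x) < 1 else np.zeros_like(x)

X = rng.standard_normal((200, d))          # stand-in for CNN features of 200 inputs
true_w = rng.standard_normal((d, m))
Y = np.sign(X @ true_w)                    # synthetic +1/-1 labels, one column per attribute

for x, labels in zip(X, Y):
    for t in range(m):                     # each head is trained only on its own attribute's labels
        W[:, t] -= 0.01 * hinge_grad(x, labels[t], W[:, t])
```

Each column of `W` is trained on its own label column, which is the sense in which a "first portion" and "second portion" are trained on a first and second set of data.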
Examiner respectfully disagrees, as Examiner finds the current claim language reads on the prior art: the current claims, under the broadest reasonable interpretation, do not positively recite using multiple, simultaneously different inputs. More specifically, "calculating respective task descriptors based, at least in part, on respective inputs to a neural network" is interpreted in light of paragraph [0059], "…In at least one embodiment, a task identifier is a vector," and Abdulnabi discloses calculating a vector in the form of a latent matrix based on a respective input (Abdulnabi, Page 1952, Col. 2, Paragraph 1, "After the forward pass in all of the CNN models, the features generated from the last convolution layers will be fed into our joint MTL loss layer…the weight parameter matrix learned in the loss layer will be decomposed into a latent task matrix and a combination matrix" where a task descriptor is considered a vector, and the latent matrix L operating directly on CNN features extracted from input images corresponds to calculating a task descriptor based on respective inputs). Further, Abdulnabi discloses "selectively activate, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input" in Abdulnabi, Page 1952, Col. 2, Paragraph 1, "The latent matrix can be seen as a shared layer between all the CNN models; in other words, the latent layer serves as a shared, fully-connected layer": the latent matrix L is part of the CNN-based architecture and is shared across all CNNs to form task-specific classifiers, and each CNN using the latent matrix L to predict a different binary attribute corresponds to selectively activating, based, at least in part, on the calculated respective task descriptors, different portions of the neural network to infer respectively different information for the respective input (see Abdulnabi, Page 1952, Figure 2 description, "…Each CNN will predict one binary attribute").

101: Applicant appears to argue that the amended language of "selectively activating, based on calculated respective task descriptors, different portions of a neural network" is not something that can be performed in the human mind. Examiner agrees with Applicant; however, "selectively activating, based on calculated respective task descriptors, different portions of a neural network" was not rejected as a mental process, and was instead rejected under the "apply it on a computer" rationale (see MPEP 2106.05(f)). Applicant appears to argue that a practical application is addressed in paragraph [0002] of the specification, namely preserving older knowledge in a neural network to avoid performance degradation. Examiner respectfully disagrees, as the Examiner understands the described practical implementation to be the neural networks being used as tools to perform the abstract idea ("apply it on a computer" (see MPEP 2106.05(f))), which does not meet the MPEP requirement for practical application.
Further, claims that require a computer may still recite a mental process (see MPEP 2106.04(a)(2).III.C), as the claims appear to recite a generic computer component to transfer/receive information and perform abstract ideas ("apply it on a computer" (see MPEP 2106.05(f))), and Applicant's example does not explain how the additional elements reflect the improvement, or how the improvement is affected by any claimed additional elements. At best, Applicant's example describes an improvement provided by the claimed abstract idea of calculating task descriptors based on respective inputs and inferring respectively different information. The claims as presented do not appear to result in or highlight an improvement in neural networks or hardware processors and, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims (see In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993)). Examiner notes MPEP 2106.05(a), which provides the requirements for how an improvement to the functioning of a computer or to any other technology or technical field is evaluated.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES JEFFREY JONES JR whose telephone number is (703) 756-1414. The examiner can normally be reached Monday - Friday, 8:00 - 5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.J.J./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122
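For reference, the Abdulnabi mechanism relied on throughout the rejection (an overall weight matrix decomposed into a shared latent task matrix and a per-task combination matrix, with one column dedicated to each CNN/attribute) can be sketched as follows. The dimensions, variable names, and thresholding are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 64, 8, 5                # feature dim, latent tasks, attributes (illustrative)

L = rng.standard_normal((d, k))   # shared latent task matrix (the "shared layer")
S = rng.standard_normal((k, m))   # combination matrix: one column per CNN/attribute
W = L @ S                         # overall weight matrix; column t is task t's classifier

def predict_attribute(features, task_id):
    # Applying only column task_id is the "portion" used for that task
    return 1 if features @ W[:, task_id] >= 0 else -1

x = rng.standard_normal(d)        # stand-in for CNN features of one input image
attributes = [predict_attribute(x, t) for t in range(m)]
```

Under the examiner's reading, the shared factor `L` plays the role of the calculated task descriptor, while indexing a column of `W` corresponds to activating a different portion of the network per task; under Applicant's reading, all columns are applied to every input, which is the crux of the 103 dispute.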

Prosecution Timeline

Nov 05, 2021
Application Filed
Apr 18, 2025
Non-Final Rejection — §101, §102
Sep 30, 2025
Response Filed
Jan 05, 2026
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582959
DATA GENERATION DEVICE AND METHOD, AND LEARNING DEVICE AND METHOD
2y 5m to grant · Granted Mar 24, 2026
Patent 12380333
METHOD OF CONSTRUCTING NETWORK MODEL FOR DEEP LEARNING, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview (+65.9%): 93%
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
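The "With Interview" figure follows from the base grant probability plus the interview lift, assuming the lift is additive in percentage points (which is how the page's numbers line up):

```python
base_grant = 27.0        # % career allow rate for this examiner
interview_lift = 65.9    # percentage points, per the examiner stats above
with_interview = round(base_grant + interview_lift)
print(with_interview)    # 93, matching the "With Interview" projection
```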
