Prosecution Insights
Last updated: April 19, 2026
Application No. 17/374,664

NEURAL NETWORK EVALUATION

Status: Non-Final OA (§101, §103)
Filed: Jul 13, 2021
Examiner: TRAN, DAVID HOANG
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)
Grant Probability: 14% (At Risk)
Projected OA Rounds: 5-6
Estimated Time to Grant: 4y 2m
Grant Probability With Interview: 38%

Examiner Intelligence

Career Allow Rate: 14% (2 granted / 14 resolved; -40.7% vs TC avg). This examiner grants only 14% of cases.
Interview Lift: +23.2% among resolved cases with interview (a strong lift)
Typical Timeline: 4y 2m average prosecution
Currently Pending: 35 applications
Career History: 49 total applications across all art units

Statute-Specific Performance

§101: 30.4% (-9.6% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 14 resolved cases.
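The headline figures above follow from simple ratios. A minimal sketch of the arithmetic (the variable names are mine, and the Tech Center average is back-derived from the stated -40.7% delta, not reported directly):

```python
# Career allow rate: 2 granted out of 14 resolved cases.
granted, resolved = 2, 14
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")   # 14.3%, shown in the dashboard as 14%

# The stated -40.7% delta implies a Tech Center average of roughly:
tc_avg = allow_rate + 0.407
print(f"{tc_avg:.1%}")       # 55.0%
```

The per-statute deltas in the table above work the same way: each examiner rate minus the estimated Tech Center average for that statute.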

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on Receipt Date 03/25/2026 has been entered.

Response to Arguments

Applicant's arguments filed 03/25/2026 on pages 8-12 of the Remarks, regarding the rejection of claims 1-31 under 35 U.S.C. 101 as being directed to an abstract idea without significantly more, have been fully considered but they are not persuasive. Beginning on page 10, applicant asserts that under Step 2A Prong One the amended claim 1 includes elements that cannot be practically performed in the human mind, such as “binding inputs to the identified one or more portions to variables to be used by the one or more modified portions.” However, Examiner respectfully disagrees. The steps of identifying and replacing portions of a neural network using a user-provided description, and then binding inputs to those portions, can be performed mentally, perhaps with the aid of pen and paper. MPEP 2106.04(a)(2)(III)(C) addresses mental processes performed on a generic computer; see also MPEP 2106.04(d) and 2106.05(f). These sections set forth that a claim may recite a mental process even with the use of a generic computer. Specifically, the recited elements amount to mere instructions to apply the exception using a compiler, circuitry, and neural networks (e.g., by using these elements as tools).
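Whatever its eligibility, the disputed operation is easiest to see concretely. Below is a minimal sketch of pattern-based graph rewriting of the kind the claim recites: identify a portion of a neural network from a user-provided description, replace it with a modified portion, and bind the matched portion's inputs to the replacement's variables. All names here (`Node`, `find_pattern`, `rewrite`) are hypothetical illustrations, not the application's or Brady's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                  # operation name, e.g. "conv", "relu"
    inputs: list = field(default_factory=list)

def find_pattern(graph, pattern_ops):
    """Identify a run of consecutive ops matching the user-provided description."""
    ops = [n.op for n in graph]
    for i in range(len(ops) - len(pattern_ops) + 1):
        if ops[i:i + len(pattern_ops)] == pattern_ops:
            return i
    return None

def rewrite(graph, pattern_ops, replacement_op):
    """Replace the matched portion; bind its inputs to the replacement's variables."""
    i = find_pattern(graph, pattern_ops)
    if i is None:
        return graph
    bound_inputs = graph[i].inputs           # binding: the matched portion's inputs
    fused = Node(replacement_op, bound_inputs)  # become the modified portion's variables
    return graph[:i] + [fused] + graph[i + len(pattern_ops):]

g = [Node("conv", ["x"]), Node("batchnorm"), Node("relu"), Node("pool")]
g2 = rewrite(g, ["conv", "batchnorm", "relu"], "fused_conv")
print([n.op for n in g2])   # ['fused_conv', 'pool']
```

The eligibility dispute is essentially over whether steps like these, when performed by a compiler on processor circuitry, are more than a mental process applied on a generic computer.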
Beginning on page 10, applicant also asserts that under Step 2A Prong Two the claim limitations reciting “binding inputs to the identified one or more portions to variables to be used by the one or more modified portions” have a practical application, and that the claim should therefore not be considered directed to an abstract idea. MPEP 2106.04(d)(1) explains that a claim reciting a judicial exception is not directed to the judicial exception if it also recites additional elements demonstrating that the claim as a whole integrates the exception into a practical application. MPEP 2106.05(a) explains that the judicial exception alone cannot provide the improvement; the improvement must be provided by one or more additional elements, and improving an abstract idea is not sufficient to integrate the exception into a practical application. See the updated rejection below.

Applicant's arguments on pages 11-13 regarding the rejection of claims 1-31 under 35 U.S.C. 103 have been fully considered but are not persuasive. Beginning on page 11, applicant asserts that Brady is silent on binding inputs to variables based on user-provided instructions. However, Brady teaches binding inputs to the identified one or more modified portions to variables to be used by the one or more modified portions, wherein the binding of inputs is based, at least in part, on user-provided instructions mapping the inputs to the identified one or more modified portions to the variables. (Brady [0060]: “The operator model 1005 may identify each of the operations (e.g., 1105-1135) and tensors (e.g., 1140, 1145, 1150, 1155, 1160, 1165) within this data flow. The tensors represent an anticipated result of at least one of the operations of the neural network. Accordingly, tensors may be associated with corresponding operations (e.g., operations (e.g., 1110) that will generate the corresponding tensor (e.g., 1150) as a result).
In some implementations, an operator model (e.g., 1005) may be generated by mapping each of the nodes in the neural network graph 110 to a respective operation (e.g., 1105-1135) and defining a tensor for each edge in the neural network graph 110.”; and [0071]: “the compilation descriptor 115 may include a listing of compilation passes (e.g., selected by a user engineer or by a system) or may name a particular pre-defined collection, or package, of compilation passes, which the compiler may 105 recognize to determine which sub-set of supported compilation passes to perform in connection with a particular compilation project, among other example implementations. The compilation descriptor 115 may also define an order or dependencies of one or more compilation passes and the conditions for performing one or more the compilation passes,”; Note: the association between a tensor (inputs) and the operation (modified portion) is the binding of inputs to variables, and the compilation passes are user-provided instructions. See paragraph [0072] of Brady for an illustrative example of the compilation descriptor.)

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1, Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to a processor, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations “[One or more processors comprising circuitry to: cause a compiler to] generate a modified version of a program [to perform one or more neural networks by at least:]”; “identifying one or more portions of the one or more neural networks based, at least in part, on a user-provided description of the one or more portions; and”; “replacing the identified one or more portions with one or more modified portions modified based, at least in part, on user-provided instructions for replacing the one or more portions and”; and “binding inputs to the identified one or more modified portions to variables to be used by the one or more modified portions, wherein the binding of inputs is based, at least in part, on user-provided instructions mapping the inputs to the identified one or more modified portions to the variables”, as drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., generating, identifying, replacing, binding). The above limitations in the context of this claim encompass, inter alia, generating a modified version of a program, identifying portions of the neural network, replacing portions of a neural network, and binding inputs (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “One or more processors comprising circuitry to: cause a compiler to [generate a modified version of a program to] perform one or more neural networks by at least:”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a processor, circuitry, and neural network (e.g., by using these elements as tools).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “One or more processors comprising circuitry to: cause a compiler to [generate a modified version of a program to] perform one or more neural networks by at least:”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a processor, circuitry, and neural network (e.g., by using these elements as tools). The claim is not patent eligible.

Regarding Claim 2, Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 2 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “[the circuitry to load the user-provided description of the one or more portions and] identify at least one portion of the one or more neural networks that matches the user-provided description.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., identifying). The above limitation in the context of this claim encompasses, inter alia, identifying one portion of the neural network (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “the circuitry to [load the user-provided description of the one or more portions and identify at least one portion of the one or more neural networks that matches the description.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception.
See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). The limitation “the circuitry to load the user-provided description of the one or more portions and [identify at least one portion of the one or more neural networks that matches the description.]”, as drafted, also amounts to insignificant extra-solution activity, which does not integrate a judicial exception into a practical application. For example, the additional element of "loading the user-provided description" amounts to mere data gathering, an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the circuitry to [load the user-provided description of the one or more portions and identify at least one portion of the one or more neural networks that matches the description.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional.
See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.

Regarding Claim 3, Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 3 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “wherein the user-provided description of the one or more portions comprises one or more statements in a programming language.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., identifying). The above limitation in the context of this claim encompasses, inter alia, identifying portions of a neural network based on a user-provided description (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 4, Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 4 is directed to a processor, i.e., a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation “[the circuitry to load the user-provided description of the one or more portions and] modify performance of the one or more neural networks by replacing at least one portion, of the portions of the one or more neural networks, with an optimized version of the at least one portion.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., modifying). The above limitation in the context of this claim encompasses, inter alia, replacing a portion of a neural network, which a user can perhaps illustrate (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “the circuitry to [load the user-provided description of the one or more portions and modify performance of the one or more neural networks by replacing at least one portion, of the portions of the one or more neural networks, with an optimized version of the at least one portion.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). The limitation “[the circuitry to] load the user-provided description of the one or more portions and [modify performance of the one or more neural networks by replacing at least one portion, of the portions of the one or more neural networks, with an optimized version of the at least one portion.]”, as drafted, also amounts to insignificant extra-solution activity, which does not integrate a judicial exception into a practical application.
For example, the additional element of "loading the user-provided description" amounts to mere data gathering, an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the circuitry to [load the user-provided description of the one or more portions and modify performance of the one or more neural networks by replacing at least one portion, of the portions of the one or more neural networks, with an optimized version of the at least one portion.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.

Regarding Claim 5, Claim 5 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 5 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “wherein the user-provided instructions for replacing the one or more portions comprises instructions for optimizing performance of a portion of the one or more neural networks that matches the user-provided description of the one or more portions.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., optimizing). The above limitation in the context of this claim encompasses, inter alia, optimizing performance of a portion of the one or more neural networks (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 6, Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 6 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “wherein the user-provided description of the one or more portions describes a pattern of nodes in a graph representative of the one or more neural networks, and a rule indicative of a replacement for an instance of said pattern in the graph.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., describing).
The above limitation in the context of this claim encompasses, inter alia, describing a pattern of nodes (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 7, Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 7 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “[the circuitry] to locate a portion of a graph comprising a first ordered series of operations that correspond to a second ordered series of operations indicated by the user-provided description of the one or more portions.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., locating). The above limitation in the context of this claim encompasses, inter alia, locating a portion of a graph (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “the circuitry [to locate a portion of a graph comprising a first ordered series of operations that correspond to a second ordered series of operations indicated by the user-provided description of the one or more portions.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the circuitry [to locate a portion of a graph comprising a first ordered series of operations that correspond to a second ordered series of operations indicated by the user-provided description of the one or more portions.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). The claim is not patent eligible.

Regarding Claim 8, Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 8 is directed to a processor, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “[the circuitry] to locate a portion of a graph whose inputs and outputs correspond to inputs and outputs indicated by the user-provided description of the one or more portions.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., locating). The above limitation in the context of this claim encompasses, inter alia, locating a portion of a graph (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitation “the circuitry [to locate a portion of a graph whose inputs and outputs correspond to inputs and outputs indicated by the user-provided description of the one or more portions.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the circuitry [to locate a portion of a graph whose inputs and outputs correspond to inputs and outputs indicated by the user-provided description of the one or more portions.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using circuitry (e.g., by using this element as a tool). The claim is not patent eligible.

Regarding Claim 9, Claim 9 recites a system for performing steps similar to those of claim 1 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 10, Claim 10 recites a system for performing steps similar to those of claim 2 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 11, Claim 11 recites a system for performing steps similar to those of claim 4 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 12, Claim 12 recites a system for performing steps similar to those of claim 5 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 13, Claim 13 recites a system for performing steps similar to those of claim 7 and is rejected with the same rationale, mutatis mutandis.
Regarding Claim 14, Claim 14 recites a system for performing steps similar to those of claim 8 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 15, Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 15 is directed to a system, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “[the one or more processors to] translate the user-provided description of the one or more portions into one or more graphs indicative of a pattern.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., translating). The above limitation in the context of this claim encompasses, inter alia, translating the user-provided description (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “the one or more processors to [translate the user-provided description of the one or more portions into one or more graphs indicative of a pattern.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a processor (e.g., by using this element as a tool).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the one or more processors to [translate the user-provided description of the one or more portions into one or more graphs indicative of a pattern.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Specifically, they amount to mere instructions to apply the exception using a processor (e.g., by using this element as a tool). The claim is not patent eligible.

Regarding Claim 16, Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 16 is directed to a system, i.e., a machine, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “[the one or more processors to] search for the pattern in the one or more neural networks.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., searching). The above limitation in the context of this claim encompasses, inter alia, searching for the pattern in the one or more neural networks (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “the one or more processors to [search for the pattern in the one or more neural networks.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a processor (e.g., by using this element as a tool).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “the one or more processors to [search for the pattern in the one or more neural networks.]”, as drafted, recites additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a processor (e.g., by using this element as a tool).
The claim is not patent eligible.

Regarding Claim 17, Claim 17 recites a non-transitory machine-readable medium for performing steps similar to those of claim 1 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 18, Claim 18 recites a non-transitory machine-readable medium for performing steps similar to those of claim 2 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 19, Claim 19 recites a non-transitory machine-readable medium for performing steps similar to those of claim 4 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 20, Claim 20 recites a non-transitory machine-readable medium for performing steps similar to those of claim 5 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 21, Claim 21 recites a non-transitory machine-readable medium for performing steps similar to those of claim 7 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 22, Claim 22 recites a non-transitory machine-readable medium for performing steps similar to those of claim 8 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 23, Claim 23 recites a non-transitory machine-readable medium for performing steps similar to those of claim 15 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 24, Claim 24 recites a non-transitory machine-readable medium for performing steps similar to those of claim 16 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 25, Claim 25 recites a method for performing steps similar to those of claim 1 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 26, Claim 26 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 26 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation “identifying at least one portion of the one or more neural networks based, at least in part, on a statement, in a programming language, that defines a pattern and one or more bindings associated with a rule for replacing an instance of the pattern.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., identifying). The above limitation in the context of this claim encompasses, inter alia, identifying one portion of the neural network (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 27, Claim 27 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 27 is directed to a method, i.e., a process, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “replacing at least one portion of the one or more neural networks with an optimized version of the at least one portion.”, as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., replacing). The above limitation in the context of this claim encompasses, inter alia, replacing a portion of a neural network, which a user can perhaps illustrate (corresponding to mental processes which can be done mentally or by pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The claim is not patent eligible.

Regarding Claim 28: Claim 28 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 28 is directed to a method, i.e., a process, one of the statutory categories.

Step 2A Prong One Analysis: The limitation “wherein the one or more neural networks are modified by at least replacing at least one portion of the one or more neural networks with another portion,” as drafted and under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., replacing). In the context of this claim, this limitation encompasses, inter alia, replacing a portion of a neural network (corresponding to mental processes which can be done mentally or with pen and paper).

Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding Claim 29: Claim 29 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 29 is directed to a method, i.e., a process, one of the statutory categories.

Step 2A Prong One Analysis: Please see the corresponding analysis of Claim 1.

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitation “loading user-provided instructions for modifying performance of the one or more portions of the one or more neural networks,” as drafted, amounts to insignificant extra-solution activity, which does not integrate a judicial exception into a practical application.
For example, the additional element of "loading user-provided instructions" amounts to mere data gathering and data storage, which are insignificant extra-solution activities that do not integrate a judicial exception into a practical application. See MPEP 2106.05(g).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional element describes a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). The claim is not patent eligible.

Regarding Claim 30: Claim 30 recites a method for performing steps similar to those of claim 7 and is rejected with the same rationale, mutatis mutandis.

Regarding Claim 31: Claim 31 recites a method for performing steps similar to those of claim 8 and is rejected with the same rationale, mutatis mutandis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-31 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US20210081691A1), hereinafter Chen, in view of Brady et al. (US20190391796A1), hereinafter Brady.

Claim 1 is rejected over Chen and Brady. Regarding claim 1, Chen teaches: one or more processors comprising circuitry to: (Chen [0172]: “Yet another implementation of the method described in this section can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.”)

identifying one or more portions of the one or more neural networks based, at least in part, on a user-provided description of the one or more portions; and (Chen [0200]: “In one implementation, the operation unit graph can be a deep neural network.”; and Fig. 5 and [0079]: “Architectural hints 202 (user-provided description) are specified by users such as application developers and system architects using high-level languages such as JSON, C, C++, Java, Python, or Spatial”; [0069] and “At action 522, the fusion algorithm 500 traverses, in parallel, upward from the node_pattern_output, and from, node_matched_output, and checks if all corresponding nodes match, until every node in the pattern of operation units has been visited. If all nodes match, then a “matched subgraph” is found.
If the matched subgraph is not found, then the fusion algorithm 500 goes back to action 512.”)

replacing the identified one or more portions with one or more modified portions based, at least in part, on user-provided instructions for replacing the one or more portions (Chen [0085]: “At action 542, the fusion algorithm 500 replaces (modifies) the matched subgraph with the fused node (fusing the node is modifying the performance of a portion) as specified by the architectural hints 202 (user-specified description).”; and [0072]-[0073]: “Pattern graph 300 is one example of the architectural hints 202. Pattern graph 300 calls for fusing 322 (user-provided instruction) three operation units (Conv2DBNRelu): (1) a two-dimensional (2D) convolution operation unit (Conv2D), (2) a batch normalization operation unit (BatchNorm), and (3) a rectified linear unit (ReLU) operation unit (Relu). Pattern graph 300 specifies these three operation units as nodes 302 and specifies dataflows among these three operation units as edges 312. Pattern graph 300 also calls for fusing 332 (user-provided instruction) two operation units (user-provided instruction) (Conv2DBN): (1) the 2D convolution operation unit and (2) the batch normalization operation unit. Pattern graph 300 also calls for fusing 342 (user-provided instruction) two operation units (Conv2DRelu): (1) the 2D convolution operation unit and (2) the ReLU operation unit. Pattern graph 300 also calls for fusing 352 two operation units (Addmm): (1) a multiplication operation unit (Mm) and (2) an addition operation unit (Add).”; Note: See Figure 3 of Chen to see that Pattern Graph 300 is the user-provided description and each instruction for fusing nodes such as 322, 332, 342 and 352 are user-provided instructions for fusing and replacing the portions of a neural network.)
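The fusion mechanism quoted from Chen above lends itself to a compact illustration. The sketch below is hypothetical (the rule table, function name, and greedy longest-first strategy are invented here for illustration and are not Chen's code); it shows how user-provided rules like those in pattern graph 300 can replace a matched sequence of operation units with a fused unit:

```python
# Hypothetical sketch of rule-driven operator fusion. The rule table mirrors
# the fusions Chen's pattern graph 300 calls for (Conv2DBNRelu, Conv2DBN,
# Conv2DRelu, Addmm); the function and data layout are assumptions, not
# Chen's implementation.

FUSION_RULES = [
    {"pattern": ["Conv2D", "BatchNorm", "Relu"], "fused": "Conv2DBNRelu"},
    {"pattern": ["Conv2D", "BatchNorm"], "fused": "Conv2DBN"},
    {"pattern": ["Conv2D", "Relu"], "fused": "Conv2DRelu"},
    {"pattern": ["Mm", "Add"], "fused": "Addmm"},
]

def fuse_linear_chain(ops, rules):
    """Greedily replace matched op sequences in a linear operator chain.

    Rules are tried in order, so longer patterns should be listed first
    (e.g. Conv2D+BatchNorm+Relu before Conv2D+BatchNorm).
    """
    out, i = [], 0
    while i < len(ops):
        for rule in rules:
            k = len(rule["pattern"])
            if ops[i:i + k] == rule["pattern"]:
                out.append(rule["fused"])  # replace matched subsequence
                i += k
                break
        else:
            out.append(ops[i])  # no rule matched; keep the op as-is
            i += 1
    return out

print(fuse_linear_chain(["Conv2D", "BatchNorm", "Relu", "Mm", "Add"], FUSION_RULES))
# ['Conv2DBNRelu', 'Addmm']
```

A linear chain is of course a simplification: Chen's algorithm matches a subgraph (nodes and edges) rather than a flat sequence, but the replace-on-match structure is the same.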
Chen does not appear to explicitly teach cause a compiler to generate a modified version of a program to perform one or more neural networks by at least binding inputs to the identified one or more modified portions to variables to be used by the one or more modified portions, wherein the binding of inputs is based, at least in part, on user-provided instructions mapping the inputs to the identified one or more modified portions to the variables.

However, Brady teaches cause a compiler to generate a modified version of a program to perform one or more neural networks by at least: (Brady [0073]: “Adaptation passes 1236 may be compilation passes, which identify opportunities (independent of the target hardware) to modify the neural network graph itself and potentially simplify and optimize operation and data flows associated with the neural network, such as through fusion compilation passes (e.g., to combine two operations into a single operation) or replacement compilation passes (e.g., replace operations with functionally equivalent and more efficient or adaptable replacement operations), among other examples.”)

binding inputs to the identified one or more modified portions to variables to be used by the one or more modified portions, wherein the binding of inputs is based, at least in part, on user-provided instructions mapping the inputs to the identified one or more modified portions to the variables. (Brady [0060]: “The operator model 1005 may identify each of the operations (e.g., 1105-1135) and tensors (e.g., 1140, 1145, 1150, 1155, 1160, 1165) within this data flow. The tensors represent an anticipated result of at least one of the operations of the neural network. Accordingly, tensors may be associated with corresponding operations (e.g., operations (e.g., 1110) that will generate the corresponding tensor (e.g., 1150) as a result).
In some implementations, an operator model (e.g., 1005) may be generated by mapping each of the nodes in the neural network graph 110 to a respective operation (e.g., 1105-1135) and defining a tensor for each edge in the neural network graph 110.”; and [0071]: “the compilation descriptor 115 may include a listing of compilation passes (e.g., selected by a user engineer or by a system) or may name a particular pre-defined collection, or package, of compilation passes, which the compiler may 105 recognize to determine which sub-set of supported compilation passes to perform in connection with a particular compilation project, among other example implementations. The compilation descriptor 115 may also define an order or dependencies of one or more compilation passes and the conditions for performing one or more the compilation passes,”; Note: The association between a tensor (inputs) and the operation (modified portion) is the binding of inputs to variables. The compilation passes selected by a user engineer or system are user-provided instructions. See paragraph [0072] of Brady for the illustrative example of the compilation descriptor.)

It would have been obvious before the effective filing date to combine the fusing of nodes through the architectural hints of Chen with the compiler used in the optimal fusing of nodes or replacement of Brady to optimize data flows associated with the neural network (Brady, [0073]). Chen and Brady are analogous art because they concern fusing operations of neural networks to improve neural network performance.

Claim 2 is rejected over Chen and Brady with the incorporation of claim 1.
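Brady's operator model and replacement passes can likewise be sketched. Everything below is a hypothetical illustration (the `Op` class, `replacement_pass` function, and operation names are assumptions, not Brady's API); it shows a replacement compilation pass swapping an operation for a functionally equivalent one while the input/output tensor bindings carry over unchanged:

```python
# Hypothetical sketch of a replacement compilation pass in the style Brady
# describes: each operation is bound to the tensors it consumes and the
# tensor it produces, and a user-provided mapping selects replacements.
from dataclasses import dataclass

@dataclass
class Op:
    name: str     # node identifier in the graph
    kind: str     # operation type
    inputs: list  # names of tensors consumed (bound inputs)
    output: str   # name of the tensor produced

def replacement_pass(ops, replacements):
    """Replace op kinds per a user-provided mapping; tensor bindings
    (inputs/output) are preserved on the replacement operation."""
    return [Op(op.name, replacements.get(op.kind, op.kind), op.inputs, op.output)
            for op in ops]

model = [Op("n0", "Softmax", ["t0"], "t1"),
         Op("n1", "Gemm", ["t1", "w0"], "t2")]
optimized = replacement_pass(model, {"Softmax": "FastSoftmax"})
print([op.kind for op in optimized])  # ['FastSoftmax', 'Gemm']
```

The point of the sketch is the independence Brady highlights: the replacement decision comes from a descriptor-like mapping supplied to the compiler, while the dataflow (which tensor feeds which operation) is untouched.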
Regarding claim 2, Chen teaches the circuitry to load the user-provided description of the one or more portions (Chen [0077]: “At action 502, the fusion algorithm 500 constructs a “pattern of operation units” based on the user-specified architecture hints 202 (load the user-provided description).”) and identify at least one portion of the one or more neural networks that matches the user-provided description. (Chen [Fig. 5 and 0079]: “At action 522, the fusion algorithm 500 traverses, in parallel, upward from the node_pattern_output, and from, node_matched_output, and checks if all corresponding nodes match, until every node in the pattern of operation units has been visited. If all nodes match, then a “matched subgraph” is found. If the matched subgraph is not found, then the fusion algorithm 500 goes back to action 512.”)

Claim 3 is rejected over Chen and Brady with the incorporation of claim 1. Regarding claim 3, Chen teaches wherein the user-provided description of the one or more portions comprises one or more statements in a programming language. (Chen [0021]: “FIG. 3 is a pattern graph written in JSON (JavaScript Object Notation), and is an example of user-specified architectural hints (user-provided description).”)

Claim 4 is rejected over Chen and Brady with the incorporation of claim 1. Regarding claim 4, Chen teaches the circuitry to load the user-provided description of the one or more portions and modify performance of the one or more neural networks by replacing at least one portion, of the portions of the one or more neural networks, with an optimized version of the at least one portion. (Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph with the fused node (optimized version of the at least one portion) as specified by the architectural hints 202 (user-specified description).”)

Claim 5 is rejected over Chen and Brady with the incorporation of claim 1.
Regarding claim 5, Chen teaches wherein the user-provided instructions for replacing the one or more portions comprises instructions for optimizing performance of a portion of the one or more neural networks that matches the user-provided description of the one or more portions. (Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph with the fused node (fusing the node is modifying performance of a portion) as specified by the architectural hints 202 (user-specified description).”; [0072]-[0073]: “Pattern graph 300 is one example of the architectural hints 202. Pattern graph 300 calls for fusing 322 (user-provided instruction) three operation units (Conv2DBNRelu): (1) a two-dimensional (2D) convolution operation unit (Conv2D), (2) a batch normalization operation unit (BatchNorm), and (3) a rectified linear unit (ReLU) operation unit (Relu). Pattern graph 300 specifies these three operation units as nodes 302 and specifies dataflows among these three operation units as edges 312. Pattern graph 300 also calls for fusing 332 (user-provided instruction) two operation units (user-provided instruction) (Conv2DBN): (1) the 2D convolution operation unit and (2) the batch normalization operation unit. Pattern graph 300 also calls for fusing 342 (user-provided instruction) two operation units (Conv2DRelu): (1) the 2D convolution operation unit and (2) the ReLU operation unit. Pattern graph 300 also calls for fusing 352 two operation units (Addmm): (1) a multiplication operation unit (Mm) and (2) an addition operation unit (Add).”; and [0093]: “The visualization can be used to convey how efficiently the fused operation unit graph 224 is executed by the reconfigurable data processor 100.”; Note: See Figure 3 of Chen to see that Pattern Graph 300 is the user-provided description and each instruction for fusing nodes such as 322, 332, 342 and 352 are user-provided instructions for fusing and replacing the portions of a neural network.)
Claim 6 is rejected over Chen and Brady with the incorporation of claim 1. Regarding claim 6, Chen teaches wherein the user-provided description of the one or more portions describes a pattern of nodes in a graph representative of the one or more neural networks, and a rule indicative of a replacement for an instance of said pattern in the graph. (Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph (pattern) with the fused node as specified by the architectural hints 202 (rule). In one implementation, the fuser 214 fuses operation units of the second nodes and the second edges in the operation unit graph 204 into a consolidated operation units block, thereby producing the fused operation unit graph 224.”)

Claim 7 is rejected over Chen and Brady with the incorporation of claim 1. Regarding claim 7, Chen teaches the circuitry to locate a portion of a graph comprising a first ordered series of operations that correspond to a second ordered series of operations indicated by the user-provided description of the one or more portions (Chen [0080]: “In one implementation, action 522 is performed by a detector 714, which in turn comprises a scanner 702 and a matcher 712. Sample code 724 embodying the action 522 is also provided in FIG. 7 to find 700 pattern matches (the matched subgraph). Scanner 702 scans the unfused operation unit graph 204 to detect instances of the patterns of the first operation units (e.g., 322, 332, 342, 252, 422) specified by the architectural hints 202 (second ordered series of operations). Matcher 712 matches second nodes and second edges in the operation unit graph 204 with the first nodes and the first edges in the architectural hints 202, and detects the pattern matches (the matched subgraph) (first ordered series of operations).”)

Claim 8 is rejected over Chen and Brady with the incorporation of claim 1.
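The scanner/matcher detection quoted from Chen [0080], together with the upward traversal of action 522, can be sketched as a simple check that walks a candidate subgraph from its output node toward its inputs. Everything below is a hypothetical illustration (the names and the linear-chain restriction are assumptions, not Chen's sample code 724):

```python
# Hypothetical sketch of upward-traversal pattern matching: anchor on a
# candidate output node, then walk the graph toward its inputs, requiring
# each node's op kind to match the pattern (listed output-first).
def matches_upward(graph_preds, graph_op, node, pattern):
    """graph_preds: node -> list of predecessor nodes;
    graph_op: node -> op kind;
    pattern: op kinds from the output node upward (linear chain only)."""
    current = node
    for want in pattern:
        if current is None or graph_op[current] != want:
            return False  # a corresponding node failed to match
        preds = graph_preds.get(current, [])
        current = preds[0] if preds else None  # step upward
    return True  # every node in the pattern was visited and matched

# Tiny graph: conv -> bn -> relu (predecessors point upward).
preds = {"relu": ["bn"], "bn": ["conv"], "conv": []}
kinds = {"relu": "Relu", "bn": "BatchNorm", "conv": "Conv2D"}
print(matches_upward(preds, kinds, "relu", ["Relu", "BatchNorm", "Conv2D"]))  # True
```

Chen's detector generalizes this to arbitrary subgraphs by traversing all predecessor branches in parallel and comparing edges as well as nodes; the linear version above keeps only the anchor-then-traverse-upward shape.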
Regarding claim 8, Chen teaches the circuitry to locate a portion of a graph whose inputs and outputs correspond to inputs and outputs indicated by the user-provided description of the one or more portions. (Chen [0078]: “At action 512, the fusion algorithm 500 finds a node in the unfused operation unit graph 204 that matches the output node (e.g., addition output node 622) of the pattern of operation units. This matched node in the unfused operation unit graph 204 is called “node_matched_output””; and [0080]: “In one implementation, action 522 is performed by a detector 714, which in turn comprises a scanner 702 and a matcher 712. Sample code 724 embodying the action 522 is also provided in FIG. 7 to find 700 pattern matches (the matched subgraph). Scanner 702 scans the unfused operation unit graph 204 to detect instances of the patterns of the first operation units (e.g., 322, 332, 342, 252, 422) specified by the architectural hints 202. Matcher 712 matches second nodes and second edges in the operation unit graph 204 with the first nodes and the first edges in the architectural hints 202, and detects the pattern matches (the matched subgraph).”; Note: A matched pattern will have a matched input of a graph and input indicated by the user-provided description.)

Claim 9 is rejected over Chen and Brady. Regarding claim 9, Chen teaches a system, comprising: (Chen [0172]: “Yet another implementation of the method described in this section can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.”)

The remainder of claim 9 is claim 1 in the form of a system comprising a processor and is rejected for the same reasons as claim 1 stated above.

Dependent claim 10 is claim 2 in the form of a system comprising a processor and is rejected for the same reasons as claim 2 stated above.
For the rejection of the limitations specifically pertaining to the system of claim 9, see the rejection of claim 9 above.

Dependent claim 11 is claim 4 in the form of a system comprising a processor and is rejected for the same reasons as claim 4 stated above. For the rejection of the limitations specifically pertaining to the system of claim 9, see the rejection of claim 9 above.

Dependent claim 12 is claim 5 in the form of a system comprising a processor and is rejected for the same reasons as claim 5 stated above. For the rejection of the limitations specifically pertaining to the system of claim 9, see the rejection of claim 9 above.

Dependent claim 13 is claim 7 in the form of a system comprising a processor and is rejected for the same reasons as claim 7 stated above. For the rejection of the limitations specifically pertaining to the system of claim 9, see the rejection of claim 9 above.

Dependent claim 14 is claim 8 in the form of a system comprising a processor and is rejected for the same reasons as claim 8 stated above. For the rejection of the limitations specifically pertaining to the system of claim 9, see the rejection of claim 9 above.

Claim 15 is rejected over Chen and Brady with the incorporation of claim 9. Regarding claim 15, Chen teaches the one or more processors to translate the user-provided description of the one or more portions into one or more graphs indicative of a pattern. (Chen [0171]: “In one implementation, the architectural hints (user-provided description) are expressed as lists of nodes and edges that translate into a pattern graph.”)

Claim 16 is rejected over Chen and Brady with the incorporation of claim 9. Regarding claim 16, Chen teaches the one or more processors to search for the pattern in the one or more neural networks.
(Chen [0081]: “In one implementation, action 522 comprises detecting the pattern matches by matching the first output node specified by the architectural hints 202 with a second output node in the operation unit graph 204, and beginning with the second output node in the operation unit graph 204, traversing the operation unit graph 204 to determine that the second nodes and the second edges in the operation unit graph 204 match the first nodes and the first edges in the architectural hints 202. In one implementation, the traversal is an upward traversal.”)

Claim 17 is rejected over Chen and Brady. Regarding claim 17, Chen teaches a non-transitory machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: (Chen [0172]: “Other implementations of the method described in this section can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above.”)

The remainder of claim 17 is claim 1 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 1 stated above.

Dependent claim 18 is claim 2 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 2 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 19 is claim 4 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 4 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 20 is claim 5 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 5 stated above.
For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 21 is claim 7 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 7 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 22 is claim 8 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 8 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 23 is claim 15 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 15 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Dependent claim 24 is claim 16 in the form of a non-transitory machine-readable medium and is rejected for the same reasons as claim 16 stated above. For the rejection of the limitations specifically pertaining to the non-transitory machine-readable medium of claim 17, see the rejection of claim 17 above.

Claim 25 is rejected over Chen and Brady. Claim 25 is claim 1 in the form of a method and is rejected for the same reasons as claim 1 stated above.

Claim 26 is rejected over Chen and Brady with the incorporation of claim 25. Regarding claim 26, Chen teaches identifying at least one portion of the one or more neural networks based (Chen [0079]: “At action 522, the fusion algorithm 500 traverses, in parallel, upward from the node_pattern_output, and from, node_matched_output, and checks if all corresponding nodes match, until every node in the pattern of operation units has been visited.
If all nodes match, then a “matched subgraph” is found. If the matched subgraph is not found, then the fusion algorithm 500 goes back to action 512.”), at least in part, on a statement, in a programming language (Chen [0069]: “Architectural hints 202 (user-provided description) are specified by users such as application developers and system architects using high-level languages such as JSON, C, C++, Java, Python, or Spatial.”), that defines a pattern (Chen [0077]: “At action 502, the fusion algorithm 500 constructs a “pattern of operation units” based on the user-specified architecture hints 202 (load the user-provided description).”) and one or more bindings associated with a rule for replacing an instance of the pattern. (Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph (pattern) with the fused node as specified by the architectural hints 202 (rule). In one implementation, the fuser 214 fuses operation units of the second nodes and the second edges in the operation unit graph 204 into a consolidated operation units block, thereby producing the fused operation unit graph 224.”)

Claim 27 is rejected over Chen and Brady with the incorporation of claim 25. Regarding claim 27, Chen teaches replacing at least one portion of the one or more neural networks with an optimized version of the at least one portion. (Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph with the fused node (optimized version of the at least one portion) as specified by the architectural hints 202 (user-specified description).”)

Claim 28 is rejected over Chen and Brady with the incorporation of claim 25. Regarding claim 28, Chen teaches wherein the one or more neural networks are modified by at least replacing at least one portion of the one or more neural networks with another portion.
(Chen [0085]: “At action 542, the fusion algorithm 500 replaces the matched subgraph with the fused node (another portion) as specified by the architectural hints 202 (user-specified description).”)

Claim 29 is rejected over Chen and Brady with the incorporation of claim 25. Regarding claim 29, Chen teaches loading user-provided instructions (Chen [0077]: “At action 502, the fusion algorithm 500 constructs a “pattern of operation units” based on the user-specified architecture hints 202 (load the user-provided description).”) for modifying performance (Chen [0002]: “The present technology relates to efficiently executing operation unit graphs on reconfigurable architectures, and can be particularly applied to efficient execution of deep neural networks on coarse-grain reconfigurable architectures and other distributed execution systems.”) of the one or more portions of the one or more neural networks. (Chen [0200]: “In one implementation, the operation unit graph can be a deep neural network.”)

Dependent claim 30 is claim 7 in the form of a method and is rejected for the same reasons as claim 7 stated above. For the rejection of the limitations specifically pertaining to the method of claim 25, see the rejection of claim 25 above.

Dependent claim 31 is claim 8 in the form of a method and is rejected for the same reasons as claim 8 stated above. For the rejection of the limitations specifically pertaining to the method of claim 25, see the rejection of claim 25 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN whose telephone number is (703) 756-1525. The examiner can normally be reached M-F 9:30 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID H TRAN/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Jul 13, 2021
Application Filed
Jul 11, 2024
Non-Final Rejection — §101, §103
Jul 24, 2024
Interview Requested
Aug 08, 2024
Applicant Interview (Telephonic)
Aug 08, 2024
Examiner Interview Summary
Jan 17, 2025
Response Filed
Jan 21, 2025
Final Rejection — §101, §103
Apr 25, 2025
Interview Requested
Apr 29, 2025
Examiner Interview Summary
Apr 29, 2025
Applicant Interview (Telephonic)
May 30, 2025
Request for Continued Examination
Jun 02, 2025
Non-Final Rejection — §101, §103
Jun 02, 2025
Response after Non-Final Action
Jul 21, 2025
Interview Requested
Jul 30, 2025
Applicant Interview (Telephonic)
Jul 30, 2025
Examiner Interview Summary
Sep 10, 2025
Response Filed
Sep 20, 2025
Final Rejection — §101, §103
Oct 20, 2025
Interview Requested
Mar 25, 2026
Request for Continued Examination
Mar 27, 2026
Non-Final Rejection — §101, §103
Mar 27, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579404
PROCESSOR FOR NEURAL NETWORK, PROCESSING METHOD FOR NEURAL NETWORK, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant
Granted Mar 17, 2026
Study what changed to get past this examiner. Based on this examiner's single most recent grant.


Prosecution Projections

5-6
Expected OA Rounds
14%
Grant Probability
38%
With Interview (+23.2%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
