Prosecution Insights
Last updated: April 19, 2026
Application No. 17/550,530

LEARNING METHOD OF NEURAL NETWORK AND NEURAL PROCESSOR

Final Rejection: §101, §103
Filed: Dec 14, 2021
Examiner: PAULA, CESAR B
Art Unit: 2145
Tech Center: 2100 (Computer Architecture & Software)
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 7m
Grant Probability With Interview: 41%

Examiner Intelligence

Career Allowance Rate: 32% (55 granted / 169 resolved; -22.5% vs TC avg)
Interview Lift: +8.3% for resolved cases with interview (moderate, roughly +8%)
Average Prosecution: 4y 7m
Currently Pending: 25
Total Applications: 194 (across all art units)

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 169 resolved cases.

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on December 14, 2021 and September 06, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 1

Step 1: The claim recites [a] method of training a neural network; therefore, it is directed to the statutory category of process.

Step 2A Prong 1: The claim recites, inter alia: [D]etermining intermediate neurons, to perform second learning, from among intermediate neurons of the first intermediate neuron layer, wherein the intermediate neurons are determined based on a number of spikes of output signals of the intermediate neurons of the first intermediate neuron layer: This limitation could encompass a human mentally determining which intermediate neurons will be used to perform second learning based on the number of spikes, by viewing the graph and counting the spikes with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are:

A learning method of a neural network which includes a first intermediate neuron layer and a second intermediate neuron layer, the method comprising: Stating that the learning method includes a first intermediate neuron layer and a second intermediate neuron layer is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erforming first learning, which is based on a first synaptic weight layer, with respect to input subjects and the first intermediate neuron layer: Stating that the first learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erforming the second learning, which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B:

A learning method of a neural network which includes a first intermediate neuron layer and a second intermediate neuron layer, the method comprising: Stating that the learning method includes a first intermediate neuron layer and a second intermediate neuron layer is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

[P]erforming first learning, which is based on a first synaptic weight layer, with respect to input subjects and the first intermediate neuron layer: Stating that the first learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).
[P]erforming the second learning, which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 2

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same mental process as claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm: Stating that the learning is performed by an algorithm merely indicates the field of use or technological environment in which the judicial exception is performed. MPEP § 2106.05(h).

Step 2B: [W]herein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm: Stating that the first and second learning are performed by an algorithm merely indicates the field of use or technological environment in which the judicial exception is performed. MPEP § 2106.05(h); FairWarning v. Iatric Sys., 839 F.3d 1089, 1094-95, 120 USPQ2d 1293, 1295 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia: [I]nitializing synaptic weight layers included in the neural network: This limitation could encompass a human mentally initializing the synaptic weight layers included in the neural network with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are:

[P]erforming at least one epoch with respect to the input subjects: Stating that at least one epoch is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B: [P]erforming at least one epoch with respect to the input subjects: Stating that at least one epoch is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

[D]etermining the intermediate neurons to perform the second learning includes: This limitation could encompass a human mentally determining which intermediate neurons will be used to perform second learning based on the number of spikes, by viewing the graph and counting the spikes with a pen and paper.

[D]etermining the number of input subjects, which allow the first intermediate neuron to output a spike output signal including spikes, the number of which is the threshold value or more, from among the input subjects: This limitation could encompass a human mentally determining the number of input subjects by counting the input subjects and writing them down with a pen and paper.

[D]etermining whether to perform the second learning with respect to the first intermediate neuron, based on the determined number of input subjects: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are:

[S]toring an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer: This limitation recites the insignificant extra-solution activity of storing and retrieving information in memory. MPEP § 2106.05(d).

Step 2B: [S]toring an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer: This limitation recites the insignificant extra-solution activity of storing and retrieving information in memory. MPEP § 2106.05(d). Upon reevaluation, the storing limitation is well-understood, routine, and conventional (“WURC”) because it is directed to storing and retrieving information in memory. MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

The claim does not contain significantly more than the judicial exception.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

[W]herein the determining whether to perform the second learning based on the determined number of input subjects includes: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.
[A]llowing the second learning not to be performed with respect to the first intermediate neuron in response to the determined number of input subjects being one: This limitation could encompass a human mentally determining not to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

[A]llowing the second learning to be performed with respect to the first intermediate neuron in response to the determined number of input subjects being two or more: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

[W]herein performing the second learning includes: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

[D]etermining input subjects, whose learning is not completed in the first learning, from among the input subjects, based on the intermediate neurons determined to perform the second learning: This limitation could encompass a human mentally determining the input subjects by counting them to determine which subjects have not been completed and writing them down with a pen and paper.
[D]etermining the number of intermediate neurons included in the second intermediate neuron layer, based on the number of input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally determining the number of intermediate neurons by counting the input subjects to determine which input subjects have not been completed and writing them down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim 7

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia: [W]herein the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally determining the number of intermediate neurons by counting the input subjects to determine which input subjects have not been completed and writing them down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: The claim does not contain significantly more than the judicial exception.

Claim 8

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

[W]herein the performing of the second learning includes: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.
[I]nitializing synaptic weight values of the intermediate neurons of the first synaptic weight layer, which are determined to perform the second learning with respect to the input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally initializing the synaptic weight layers included in the neural network with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[P]erforming the second learning with respect to the partially initialized first synaptic weight layer, the input subjects whose learning is not completed in the first learning, and the intermediate neurons of the second intermediate neuron layer: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

Step 2B: [P]erforming the second learning with respect to the partially initialized first synaptic weight layer, the input subjects whose learning is not completed in the first learning, and the intermediate neurons of the second intermediate neuron layer: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

The claim does not contain significantly more than the judicial exception.

Claim 9

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia: [D]etermining intermediate neurons to perform a third learning from among the intermediate neurons of the second intermediate neuron layer based on the number of spikes of each of spike output signals of the intermediate neurons of the second intermediate neuron layer.
This limitation could encompass a human mentally determining which intermediate neurons will be used to perform the third learning based on the number of spikes, by viewing the graph and counting the spikes with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim 10

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

[A]llowing the third learning not to be performed, when each of the intermediate neurons of the second intermediate neuron layer correspond to only one of the input subjects: This limitation could encompass a human mentally determining not to perform the third learning when each of the intermediate neurons of the second intermediate neuron layer corresponds to only one of the input subjects, by counting the input subjects and writing them down with a pen and paper.

[D]etermining the second synaptic weight layer as a weight layer associated with output neurons: This limitation could encompass a human mentally determining the second synaptic weight layer as a weight layer associated with output neurons by viewing the graph, mapping which weight layer connects to the output neurons, and writing it down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim 11

Step 1: The claim recites [a] neural processor; therefore, it is directed to the statutory category of machine.
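For orientation only, the selection step that claims 1 and 11 share (choosing which first-layer intermediate neurons undergo second learning from their output spike counts) can be sketched in plain code. The function name, the inputs, and the threshold are illustrative assumptions, not taken from the application:

```python
def select_for_second_learning(spike_counts, threshold):
    """Return indices of intermediate neurons whose output spike
    count meets or exceeds the threshold (hypothetical sketch)."""
    return [i for i, n in enumerate(spike_counts) if n >= threshold]

# With a threshold of 4 spikes, neurons 1, 2, and 4 are selected
# for the second learning pass.
print(select_for_second_learning([2, 5, 7, 1, 4], threshold=4))  # [1, 2, 4]
```

The claims leave the threshold and the counting window unspecified; this sketch only fixes the shape of the selection.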
Step 2A Prong 1: The claim recites, inter alia: [D]etermine intermediate neurons to perform a second learning, from among intermediate neurons of the first intermediate neuron layer, wherein the intermediate neurons are determined based on a number of spikes of spike output signals of the intermediate neurons of the first intermediate neuron layer: This limitation could encompass a human mentally determining which intermediate neurons will be used to perform second learning based on the number of spikes, by viewing the graph and counting the spikes with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

A neural processor that is configured to train a neural network by: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erform a first learning, which is based on a first synaptic weight layer, with respect to input subjects and a first intermediate neuron layer of the neural network which includes the first intermediate neuron layer and a second intermediate neuron layer: Stating that the first learning is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erform the second learning, which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning: Stating that the second learning is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B:

A neural processor which is configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).
[P]erform first learning, which is based on a first synaptic weight layer, with respect to input subjects and a first intermediate neuron layer of a neural network including the first intermediate neuron layer and a second intermediate neuron layer: Stating that the first learning is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

[P]erforming the second learning, which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning: Stating that the second learning is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 12

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites the same mental process as claim 11.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm: Stating that the first and second learning are performed by an algorithm merely indicates the field of use or technological environment in which the judicial exception is performed. MPEP § 2106.05(h).

Step 2B: [W]herein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm: Stating that the first and second learning are performed by an algorithm merely indicates the field of use or technological environment in which the judicial exception is performed. MPEP § 2106.05(h); FairWarning v. Iatric Sys., 839 F.3d 1089, 1094-95, 120 USPQ2d 1293, 1295 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 13

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia: [I]nitialize synaptic weight layers included in the neural network: This limitation could encompass a human mentally initializing the synaptic weight layers included in the neural network with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erform at least one epoch with respect to the input subjects: Stating that at least one epoch is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B: [W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

[P]erform at least one epoch with respect to the input subjects: Stating that at least one epoch is performed by a processor is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 14

Step 1: A machine, as above.
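Claims 2 and 12 recite that both learning stages use spike-timing-dependent plasticity (STDP). The application does not give the update rule, so the following is only the standard textbook pair-based form, with made-up parameter values, for readers unfamiliar with STDP:

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update (textbook form, not from the
    application): potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise, with the magnitude decaying
    exponentially in the spike-time gap."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: strengthen
    return -a_minus * math.exp(dt / tau)      # post before pre: weaken

# A pre spike 5 ms before a post spike strengthens the synapse;
# the reverse ordering weakens it.
print(stdp_delta_w(10.0, 15.0) > 0)  # True
print(stdp_delta_w(15.0, 10.0) < 0)  # True
```

The amplitudes and time constant here are arbitrary illustrative values; real parameters would come from the application or the cited art.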
Step 2A Prong 1: The claim recites, inter alia:

[D]etermine the number of input subjects, which allow the first intermediate neuron to output a spike output signal including spikes, the number of which is the threshold value or more, from among the input subjects: This limitation could encompass a human mentally determining the number of input subjects by counting the input subjects and writing them down with a pen and paper.

[D]etermine whether to perform the second learning with respect to the first intermediate neuron based on the determined number of input subjects: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[S]tore an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer: This limitation recites the insignificant extra-solution activity of mere data gathering. MPEP § 2106.05(d).

Step 2B: [S]tore an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer: This limitation recites the insignificant extra-solution activity of mere data gathering. MPEP § 2106.05(d). Upon reevaluation, the storing limitation is well-understood, routine, and conventional (“WURC”) because it is directed to storing and retrieving information in memory. MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

The claim does not contain significantly more than the judicial exception.

Claim 15

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia:

[A]llow the second learning not to be performed with respect to the first intermediate neuron when the determined number of input subjects being one: This limitation could encompass a human mentally determining not to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

[A]llow the second learning to be performed with respect to the first intermediate neuron when the determined number of input subjects is two or more: This limitation could encompass a human mentally determining whether to perform the second learning with respect to the first intermediate neuron, based on the number of input subjects thus determined, by counting the input subjects and writing them down with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B: [W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 16

Step 1: A machine, as above.
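Claims 6-7 (and their processor counterparts, claims 16-17) size the second intermediate neuron layer from the input subjects whose learning is not completed in the first learning. One illustrative reading, with hypothetical names and a made-up completion criterion (a subject is "not completed" if its selected neuron is shared with another subject), is:

```python
from collections import Counter

def incomplete_subjects(subject_neuron_map):
    """Hypothetical sketch: a subject mapped to a neuron that also
    serves another subject is treated as not completed in the first
    learning. The application's actual criterion may differ."""
    counts = Counter(subject_neuron_map.values())
    return [s for s, n in subject_neuron_map.items() if counts[n] > 1]

def second_layer_size(n_incomplete):
    """Claim 7 requires the second layer to have at least as many
    intermediate neurons as incomplete subjects; this returns the
    minimal size satisfying 'equal to or more'."""
    return n_incomplete

remaining = incomplete_subjects({"A": 0, "B": 1, "C": 1})
print(remaining)                          # ['B', 'C'] share neuron 1
print(second_layer_size(len(remaining)))  # 2
```

The point of the sketch is only the counting relationship between incomplete subjects and second-layer size, which is what the claims actually fix.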
Step 2A Prong 1: The claim recites, inter alia:

[D]etermine input subjects, whose learning is not completed in the first learning, from among the input subjects, based on the intermediate neurons determined to perform the second learning: This limitation could encompass a human mentally determining the input subjects by viewing the graph and counting the input subjects to determine which subjects have not been completed, and writing them down with a pen and paper.

[D]etermine the number of intermediate neurons included in the second intermediate neuron layer, based on the number of input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally determining the number of intermediate neurons by viewing the graph and counting the input subjects to determine which subjects have not been completed, and writing them down with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

Step 2B: [W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 17

Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia: [W]herein the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally determining the number of intermediate neurons by counting the input subjects to determine which input subjects have not been completed and writing them down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim 18

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia: [P]artially initialize synaptic weight values of the intermediate neurons of the first synaptic weight layer, which are determined to perform the second learning with respect to the input subjects whose learning is not completed in the first learning: This limitation could encompass a human mentally initializing the synaptic weight layers included in the neural network with a pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are:

[W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).

[P]erform the second learning with respect to the partially initialized first synaptic weight layer, the input subjects whose learning is not completed in the first learning, and the intermediate neurons of the second intermediate neuron layer: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f).
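The partial initialization recited in claims 8 and 18 (resetting only the first-layer weights of the neurons selected for second learning) can be illustrated as follows. The representation of the weight layer as a list of per-neuron rows and the reset value of zero are assumptions for the sketch, not details from the application:

```python
def partially_initialize(weights, selected_neurons, init_value=0.0):
    """Sketch of the partial initialization in claims 8 and 18: reset
    only the weight rows of the neurons selected for second learning,
    leaving the rest of the layer untouched. (Illustrative; the
    application does not specify the initialization value.)"""
    return [
        [init_value] * len(row) if i in selected_neurons else row
        for i, row in enumerate(weights)
    ]

w = [[0.5, 0.2], [0.9, 0.4], [0.1, 0.8]]
# Only neuron 1's row is reset; neurons 0 and 2 keep their weights.
print(partially_initialize(w, {1}))
# [[0.5, 0.2], [0.0, 0.0], [0.1, 0.8]]
```

Returning a new list rather than mutating in place is a convenience of the sketch; either would satisfy the claim language.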
Step 2B: [W]herein the neural processor is further configured to: This limitation recites mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

[P]erform the second learning with respect to the partially initialized first synaptic weight layer, the input subjects whose learning is not completed in the first learning, and the intermediate neurons of the second intermediate neuron layer: Stating that the second learning is performed is mere instruction to apply the exception using a generic computer. MPEP § 2106.05(f); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016).

The claim does not contain significantly more than the judicial exception.

Claim 19

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia:

[D]etermine intermediate neurons to perform third learning, from among intermediate neurons of the second intermediate neuron layer, based on the number of spikes of each of spike output signals of the intermediate neurons of the second intermediate neuron layer: This limitation could encompass a human mentally determining which intermediate neurons will be used to perform the third learning based on the number of spikes, by viewing the graph and counting the spikes with a pen and paper.

[A]llow the third learning not to be performed, in response to each of the intermediate neurons of the second intermediate neuron layer corresponding to only one of the input subjects: This limitation could encompass a human mentally determining not to perform the third learning when each of the intermediate neurons of the second intermediate neuron layer corresponds to only one of the input subjects, by counting the input subjects and writing them down with a pen and paper.

[D]etermine the second synaptic weight layer as a weight layer associated with output neurons:
This limitation could encompass a human mentally determining the second synaptic weight layer as a weight layer associated with output neurons by viewing the graph, mapping which weight layer connects to the output neurons, and writing it down with a pen and paper.

Step 2A Prong 2: There are no additional elements; therefore, this judicial exception is not integrated into a practical application.

Step 2B: There are no additional elements; therefore, the claim does not contain significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 9-10, 11, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over KIM et al. (US 20200356853) (“KIM”) in view of LEE et al. (US 10515305) (“LEE”).

Regarding claim 1, KIM teaches: A learning method of training a neural network (KIM, P[0008] According to another aspect of the present disclosure, a learning method of a neural network system includes a plurality of learning iterations on a plurality of layers.) which includes a first intermediate neuron layer and a second intermediate neuron layer, the method comprising (KIM P[0049] For example, the input layer 1100 may include two neurons, each of the hidden layers 1220 and 1240 may include three neurons [neurons in a hidden layer are intermediate neurons], and the output layer 1300 may include two neurons, as shown in FIG.
3.): performing first learning, which is based on a first synaptic weight layer, with respect to input subjects and the first intermediate neuron layer (KIM, P[0007] The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting [resulting implies the first learning has been performed] from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration. P[0048] “FIG. 3 illustrates a neural network 1000, according to an example embodiment. The neural network 1000 may include an input layer 1100, hidden layers [intermediate neuron layers] 1220 and 1240, and an output layer 1300. The neural network 1000 may perform an operation based on input data I1 and I2 and generate output data O1 and O2 based on an operation result.”); and

performing the second learning (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.”), which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning (KIM P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]).
KIM fails to teach: determining intermediate neurons to perform a second learning from among intermediate neurons of the first intermediate neuron layer, based on a number of spikes of spike output signals of the intermediate neurons of the first intermediate neuron layer.

However, LEE teaches: determining intermediate neurons to perform a second learning from among intermediate neurons of the first intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.), wherein the intermediate neurons are determined based on a number of spikes of spike output signals of the intermediate neurons of the first intermediate neuron layer (LEE page 17, column 7, lines 16-67 Neurons of the current layer 233 may output a spike signal (for example, spike signals 310 and 330 of FIG. 3) in a predetermined condition based on first synaptic signals received from neurons [intermediate neurons] of the previous layer 231. Also, the neurons of the current layer 233 may output spike signals based on second synaptic signals received from the other neurons in the current layer 233 in addition to the first synaptic signals…. [Examiner's note: Once the neuron outputs spike signals, those signals are used in the next neuron of the following layer; thus, a neuron is determined based on the spikes output.]).

KIM and LEE are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM with the learning method based on the number of spikes of each spike output signal from LEE.
Doing so would optimize the neurons in the neural network (LEE: Page 21, column 16, lines 38-42).

Regarding claim 3, KIM in view of LEE teach all of the limitations of claim 1 as shown above. KIM also teaches: The method of claim 1, wherein performing the first learning includes: (KIM, P[0007] The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting [resulting implies the first learning has been performed] from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.) and performing at least one epoch with respect to the input subjects (KIM P[0053] “A learning iteration may be referred to as an epoch.” P[0009] “According to a further aspect of the present disclosure, a transfer learning method of a neural network processor includes a plurality of learning iterations on a plurality of layers.” P[0035] “For example, the neural network processor 100 may generate an information signal by performing a neural network operation on input data, and the neural network operation may include a convolution operation.”).

LEE teaches: initializing synaptic weight layers included in the neural network (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of the neurons.);

KIM and LEE are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM with the synaptic weight layer initialization from LEE. Doing so allows for the regulation of parameters (LEE: page 21, column 15, lines 18-21).
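For context, the spike-count-based neuron selection and weight-layer initialization mapped to claims 1 and 3 above can be sketched in a few lines. This is an illustrative sketch only; the function names and the threshold rule are assumptions for illustration and are not taken from the application, KIM, or LEE:

```python
# Illustrative sketch only: selecting intermediate neurons for a second
# learning pass based on their spike-output counts, and re-initializing
# the synaptic weight rows of the selected neurons. The threshold rule
# and all names here are assumptions, not from the claims or references.
import random

def select_for_second_learning(spike_counts, threshold):
    """Return indices of intermediate neurons whose number of output
    spikes meets or exceeds the threshold."""
    return [i for i, count in enumerate(spike_counts) if count >= threshold]

def initialize_weight_rows(weight_layer, neuron_indices, rng=None):
    """Re-initialize the synaptic weight rows of the selected neurons
    with small random values, leaving the other rows untouched."""
    rng = rng or random.Random(0)
    for i in neuron_indices:
        weight_layer[i] = [rng.uniform(0.0, 1.0) for _ in weight_layer[i]]
    return weight_layer
```

For example, with spike counts [3, 7, 1, 9] and a threshold of 5, neurons 1 and 3 would be selected for the second learning pass.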
Regarding claim 9, KIM in view of LEE teach all of the limitations of claim 1 as shown above. KIM also teaches: determining intermediate neurons, to perform a third learning (KIM, P[0007] The neural network processor is also configured to perform a third learning iteration subsequent to the second learning).

LEE teaches: determining intermediate neurons (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.), to perform a third learning, from among the intermediate neurons of the second intermediate neuron layer based on the number of spikes of each of spike output signals of the intermediate neurons of the second intermediate neuron layer (LEE page 17, column 7, lines 16-22 “Neurons of the current layer 233 may output a spike signal (for example, spike signals 310 and 330 of FIG. 3) in a predetermined condition based on first synaptic signals received from neurons of the previous layer 231. Also, the neurons of the current layer 233 may output spike signals based on second synaptic signals received from the other neurons in the current layer 233 in addition to the first synaptic signals.” [Examiner's note: Once the neuron outputs spike signals, those signals are used in the next neuron of the following layer; thus, a neuron is determined based on the spikes output.]).

The same motivation for claim 1 applies equally to claim 9.

Regarding claim 10, KIM in view of LEE teach all of the limitations of claim 9 as shown above.
KIM also teaches: The method of claim 9, further comprising: allowing the third learning not to be performed (KIM, P[0007] “The neural network processor is also configured to perform a third learning iteration subsequent to the second learning” P[0009] “The transfer learning method further includes performing a third learning iteration on the plurality of layers except the at least one layer for which interruption of the learning has been determined among the plurality of layers.”), and determining the second synaptic weight layer as a weight layer associated with output neurons (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory. P[0048] FIG. 3 illustrates a neural network 1000, according to an example embodiment. The neural network 1000 may include an input layer 1100, hidden layers 1220 and 1240, and an output layer 1300. The neural network 1000 may perform an operation based on input data I1 and I2 and generate output data O1 and O2 based on an operation result.).

LEE teaches: The method of claim 9, further comprising: allowing the third learning not to be performed when each of the intermediate neurons of the second intermediate neuron layer corresponds to only one of the input subjects (LEE Page 16, column 6, lines 36-39 “The neural network may include an input layer, a hidden layer and an output layer. The input layer may receive input data.” Page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.);

The same motivation for claim 1 applies equally to claim 10.
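The claim 10 condition under which the third learning is not performed can be expressed as a simple check. A minimal sketch, assuming a mapping from each second-layer intermediate neuron to the set of input subjects it responds to (the mapping and function names are hypothetical, not from the application, KIM, or LEE):

```python
# Illustrative sketch only: the claim 10 skip condition. When every
# intermediate neuron of the second layer corresponds to exactly one
# input subject, no third learning pass is performed.

def third_learning_needed(neuron_to_subjects):
    """Return True unless each second-layer intermediate neuron
    corresponds to only one input subject."""
    return any(len(subjects) != 1 for subjects in neuron_to_subjects.values())
```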
Regarding claim 11, KIM teaches: A neural processor that is configured to train a neural network, by: perform a first learning (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor. P[0007] The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting [resulting implies the first learning has been performed] from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.): which is based on a first synaptic weight layer with respect to input subjects and a first intermediate neuron layer of the neural network, which includes the first intermediate neuron layer and a second intermediate neuron layer (KIM, P[0007] The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting [resulting implies the first learning has been performed] from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration. P[0048] “FIG. 3 illustrates a neural network 1000, according to an example embodiment. The neural network 1000 may include an input layer 1100, hidden layers [intermediate neuron layers] 1220 and 1240, and an output layer 1300.
The neural network 1000 may perform an operation based on input data I1 and I2 and generate output data O1 and O2 based on an operation result.”); and perform the second learning (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.”), which is based on a second synaptic weight layer, with respect to the intermediate neurons determined to perform the second learning (KIM P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.”).

KIM fails to teach: determine intermediate neurons to perform a second learning from among intermediate neurons of the first intermediate neuron layer, wherein the intermediate neurons are determined based on a number of spikes of spike output signals of the intermediate neurons of the first intermediate neuron layer.

However, LEE teaches: determine intermediate neurons to perform a second learning from among intermediate neurons of the first intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.), wherein the intermediate neurons are determined based on a number of spikes of spike output signals of the intermediate neurons of the first intermediate neuron layer (LEE page 17, column 7, lines 16-67 Neurons of the current layer 233 may output a spike signal (for example, spike signals 310 and 330 of FIG.
3) in a predetermined condition based on first synaptic signals received from neurons [intermediate neurons] of the previous layer 231. Also, the neurons of the current layer 233 may output spike signals based on second synaptic signals received from the other neurons in the current layer 233 in addition to the first synaptic signals…. [Examiner's note: Once the neuron outputs spike signals, those signals are used in the next neuron of the following layer; thus, a neuron is determined based on the spikes output.]).

KIM and LEE are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM with the processor and learning method based on the number of spikes of each spike output signal from LEE. Doing so would optimize the neurons in the neural network (LEE: Page 21, column 16, lines 38-42).

Regarding claim 13, KIM in view of LEE teach all of the limitations of claim 11 as shown above. KIM also teaches: The neural processor of claim 11, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): and perform at least one epoch with respect to the input subjects (KIM P[0053] “A learning iteration may be referred to as an epoch.” P[0009] “According to a further aspect of the present disclosure, a transfer learning method of a neural network processor includes a plurality of learning iterations on a plurality of layers.” P[0035] “For example, the neural network processor 100 may generate an information signal by performing a neural network operation on input data, and the neural network operation may include a convolution operation.”).
LEE teaches: initialize synaptic weight layers included in the neural network (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of the neurons.);

The same motivation for claim 3 applies equally to claim 13.

Regarding claim 19, KIM in view of LEE teach all of the limitations of claim 11 as shown above. KIM also teaches: The neural processor of claim 11, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): determine intermediate neurons to perform a third learning (KIM, P[0007] The neural network processor is also configured to perform a third learning iteration subsequent to the second learning): allow the third learning not to be performed (KIM, P[0007] “The neural network processor is also configured to perform a third learning iteration subsequent to the second learning” P[0009] “The transfer learning method further includes performing a third learning iteration on the plurality of layers except the at least one layer for which interruption of the learning has been determined among the plurality of layers.”); and determine the second synaptic weight layer as a weight layer associated with output neurons (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory. P[0048] FIG. 3 illustrates a neural network 1000, according to an example embodiment. The neural network 1000 may include an input layer 1100, hidden layers 1220 and 1240, and an output layer 1300. The neural network 1000 may perform an operation based on input data I1 and I2 and generate output data O1 and O2 based on an operation result.).
LEE teaches: determine intermediate neurons to perform a third learning (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.), from among intermediate neurons of the second intermediate neuron layer, based on the number of spikes of each of spike output signals of the intermediate neurons of the second intermediate neuron layer (LEE page 17, column 7, lines 16-22 “Neurons of the current layer 233 may output a spike signal (for example, spike signals 310 and 330 of FIG. 3) in a predetermined condition based on first synaptic signals received from neurons of the previous layer 231. Also, the neurons of the current layer 233 may output spike signals based on second synaptic signals received from the other neurons in the current layer 233 in addition to the first synaptic signals.” [Examiner's note: Once the neuron outputs spike signals, those signals are used in the next neuron of the following layer; thus, a neuron is determined based on the spikes output.]); allow the third learning not to be performed, in response to each of the intermediate neurons of the second intermediate neuron layer corresponding to only one of the input subjects (LEE Page 16, column 6, lines 36-39 “The neural network may include an input layer, a hidden layer and an output layer. The input layer may receive input data.” Page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data.
For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.);

The same motivation for claim 11 applies equally to claim 19.

Claims 2, 4, 6-8, 12, 14, 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over KIM et al. (US 20200356853) (“KIM”), in view of LEE et al. (US 10515305) (“LEE”), and further in view of NPL HOYOUNG (Spike Counts Based Low Complexity NN architecture With Binary Synapse) (“HOYOUNG”).

Regarding claim 2, KIM in view of LEE teach all of the limitations of claim 1 as shown above. KIM also teaches: The method of claim 1, wherein the first learning and the second learning (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.”).

KIM and LEE fail to teach: wherein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm. However, HOYOUNG teaches: wherein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm (NPL HOYOUNG, Page 13, Conclusion: “The SNN architecture supports STDP based unsupervised learning with one fully-connected layer with 400 excitatory neurons.”).

KIM, LEE, and HOYOUNG are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM and LEE with the learning method performed by the STDP algorithm from HOYOUNG. Doing so would reduce energy consumption for the neural network (HOYOUNG: Page 13, Conclusion).

Regarding claim 4, KIM in view of LEE teach all of the limitations of claim 1 as shown above.
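The spike-timing-dependent plasticity (STDP) limitation recited in claim 2 above can be illustrated with a generic pair-based weight update. This is a textbook-style sketch under assumed parameters (a_plus, a_minus, and tau are illustrative values, not drawn from the application or from HOYOUNG):

```python
# Minimal pair-based STDP weight update (illustrative sketch only).
# A synapse is potentiated when the presynaptic spike precedes the
# postsynaptic spike, and depressed otherwise; the magnitude decays
# exponentially with the timing difference.
import math

def stdp_update(weight, t_pre, t_post,
                a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Return the synaptic weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: long-term potentiation
        weight += a_plus * math.exp(-dt / tau)
    else:         # post before pre: long-term depression
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, w_min), w_max)
```

A pairing in which the presynaptic spike arrives shortly before the postsynaptic spike raises the weight; the reverse ordering lowers it, with both effects clipped to the [w_min, w_max] range.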
KIM also teaches: determining whether to perform the second learning with respect to the first intermediate neuron (KIM P[0038] “The neural network processor 100 may perform a subsequent learning (second learning) iteration only in layers excluding the layer for which interruption of the learning is determined.”),

LEE teaches: The method of claim 1, wherein determining the intermediate neurons to perform the second learning includes (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.): from among the intermediate neurons of the first intermediate neuron layer; (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.)
KIM and LEE fail to teach: storing an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more from among the intermediate neurons of the first intermediate neuron layer; determining the number of input subjects, which allow the first intermediate neuron to output a spike output signal including spikes, the number of which is the threshold value or more, from among the input subjects; and determining whether to perform the second learning with respect to the first intermediate neuron, based on the determined number of input subjects.

However, HOYOUNG teaches: storing an index of a first intermediate neuron (NPL HOYOUNG Page 5, lines 4-8 “In the hardware implementation, finding only the most active excitatory neuron needs only tracking the current maximum number of post-synaptic spike counts and the index of the excitatory neuron.” Page 10, lines 3-6 “If the new value is larger than the previous one, the new Vmem - Vth and the index of the excitatory neuron is stored in the global controller.”), in which the number of spikes of a spike output signal is a threshold value or more from among the intermediate neurons of the first intermediate neuron layer (NPL HOYOUNG abstract “For the energy efficient inferencing operations, we propose an accumulation based computing scheme, where the number of input spikes for each input axon is accumulated without instant membrane updates until the pre-defined [threshold] number of spikes are reached.”), determining the number of input subjects (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”), which allow the first intermediate neuron to output a spike output signal including spikes (NPL HOYOUNG page 6, column 2, lines 20-25 “In the proposed accumulation based computing scheme, as the accumulated number of input spikes are
used per one membrane voltage updates, multiple excitatory neurons generate output spikes at a time.”), the number of which is the threshold value or more, from among the input subjects (NPL HOYOUNG abstract “For the energy efficient inferencing operations, we propose an accumulation based computing scheme, where the number of input spikes for each input axon is accumulated without instant membrane updates until the pre-defined number of spikes are reached.”); and determining whether to perform the second learning with respect to the first intermediate neuron, based on the determined number of input subjects (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”).

KIM, LEE, and HOYOUNG are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM and LEE with HOYOUNG's storing of the index of a neuron whose spike-output count is a threshold value or more from among the intermediate neurons of the first intermediate neuron layer, determining the number of input subjects that allow the first intermediate neuron to output a spike output signal whose spike count is the threshold value or more, and determining whether to perform the second learning based on the determined number of input subjects. Doing so would reduce energy consumption (HOYOUNG: Page 13, Conclusion).

Regarding claim 6, KIM in view of LEE teach all of the limitations of claim 1 as shown above.
KIM also teaches: The method of claim 1, wherein performing the second learning includes (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory.): determining input subjects, whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”), the number of input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”).

LEE teaches: The method of claim 1, wherein performing the second learning includes: (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.) the intermediate neurons determined to perform the second learning (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data.
For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.); and determining the number of intermediate neurons included in the second intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.).

KIM and LEE fail to teach: determining input subjects, whose learning is not completed in the first learning, from among the input subjects, based on the intermediate neurons determined to perform the second learning, and determining the number of intermediate neurons included in the second intermediate neuron layer, based on the number of input subjects whose learning is not completed in the first learning. However, HOYOUNG teaches: determining input subjects, whose learning is not completed in the first learning, from among the input subjects based on the intermediate neurons determined to perform the second learning (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”).

KIM, LEE, and HOYOUNG are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM and LEE with the determination of input subjects from HOYOUNG. Doing so would reduce energy consumption for the neural network (HOYOUNG: Page 13, Conclusion).

Regarding claim 7, KIM in view of LEE in view of HOYOUNG teach all of the limitations of claim 6 as shown above.
KIM also teaches: the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”). LEE teaches: The method of claim 6, wherein the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects whose learning is not completed in the first learning (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.). KIM and LEE are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM with the number of neurons in the second intermediate neuron layer is equal to or more than the number of inputs from LEE. Doing so allows for the regulation of parameters. (LEE: page 21, column 15, lines 18-21) Regarding claim 8, KIM in view of LEE in view of HOYOUNG teach all of the limitations of claim 6 as shown above. 
KIM also teaches: The method of claim 6, wherein performing the second learning includes (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory.), (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]) and performing the second learning (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory.) the input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”) LEE teaches: initializing synaptic weight values of the intermediate neurons of the first synaptic weight layer which are determined to perform the second learning with respect to the input subjects whose learning is not completed in the first learning; (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of
the neurons.), and performing the second learning with respect to the partially initialized first synaptic weight layer (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of the neurons.), the input subjects whose learning is not completed in the first learning, and the intermediate neurons of the second intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.). KIM and LEE are considered analogous because they relate to machine learning methods. It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM with the neurons determined to perform second learning, initializing the synaptic weight values from the intermediate neurons of the first synaptic weight layer, and performing the second learning with respect to the initialized synaptic weight layer and intermediate neurons of the second intermediate neuron layer from LEE. Doing so would optimize the neurons in the neural network. (LEE: Page 21, column 16, lines 38-42) Regarding claim 12, KIM in view of LEE teach all of the limitations of claim 11 as shown above.
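The claim 8 limitations just mapped describe partially re-initializing the first synaptic weight layer and re-training only the selected rows. A minimal sketch follows, assuming a toy Hebbian-style update and invented names; nothing here is taken from KIM or LEE.

```python
# Hypothetical sketch of partial re-initialization plus a second learning
# pass restricted to the selected neurons (illustrative only).
import random

def partial_reinit(weights, rows, rng):
    """Re-initialize only the weight rows of neurons selected for second
    learning; the remaining rows keep their first-learning values."""
    out = [row[:] for row in weights]
    for r in rows:
        out[r] = [rng.random() for _ in out[r]]
    return out

def second_learning_step(weights, rows, inputs, lr=0.1):
    """Toy Hebbian-style update applied only to the selected rows."""
    out = [row[:] for row in weights]
    for r in rows:
        out[r] = [w + lr * x for w, x in zip(out[r], inputs)]
    return out

rng = random.Random(0)
w1 = [[0.5, 0.5, 0.5] for _ in range(4)]     # weights after first learning
w2 = partial_reinit(w1, [1, 3], rng)         # rows 1 and 3 start over
w3 = second_learning_step(w2, [1, 3], [1.0, 0.0, 1.0])
print(w3[0])   # [0.5, 0.5, 0.5] -- untouched row keeps its learned values
```

The point of the sketch is the "partially" in the claim: rows outside the selected set pass through both steps unchanged.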
KIM also teaches: The neural processor of claim 11 (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.), wherein the first learning and the second learning (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.”) KIM and LEE fail to teach: wherein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm. However, HOYOUNG teaches: wherein the first learning and the second learning are performed by a spike-timing-dependent plasticity (STDP) algorithm (NPL HOYOUNG page 13, Conclusion: “The SNN architecture supports STDP based unsupervised learning with one fully-connected layer with 400 excitatory neurons.”). The same motivation for claim 2 equally applies to claim 12. Regarding claim 14, KIM in view of LEE teach all of the limitations of claim 11 as shown above. KIM also teaches: The neural processor of claim 11, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): and determine whether to perform the second learning with respect to the first intermediate neuron (KIM P[0038] “The neural network processor 100 may perform a subsequent learning (second learning) iteration only in layers excluding the layer for which interruption of the learning is determined.”) LEE teaches: from among the intermediate neurons of the first intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data.
For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.); KIM and LEE fail to teach: store an index of a first intermediate neuron, in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer; determine the number of input subjects, which allow the first intermediate neuron to output a spike output signal including spikes, the number of which is the threshold value or more, from among the input subjects; and determine whether to perform the second learning with respect to the first intermediate neuron based on the determined number of input subjects. However, HOYOUNG teaches: store an index of a first intermediate neuron (NPL HOYOUNG Page 5, lines 4-8 “In the hardware implementation, finding only the most active excitatory neuron needs only tracking the current maximum number of post-synaptic spike counts and the index of the excitatory neuron.” Page 10, lines 3-6 “If the new value is larger than the previous one, the new Vmem - Vth and the index of the excitatory neuron is stored in the global controller.”), in which the number of spikes of a spike output signal is a threshold value or more, from among the intermediate neurons of the first intermediate neuron layer (NPL HOYOUNG abstract “For the energy efficient inferencing operations, we propose an accumulation based computing scheme, where the number of input spikes for each input axon is accumulated without instant membrane updates until the pre-defined number of spikes are reached.”); determine the number of input subjects (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”), which allow the first intermediate neuron to output a spike output signal including spikes (NPL HOYOUNG page 6,
column 2, lines 20-25 “In the proposed accumulation based computing scheme, as the accumulated number of input spikes are used per one membrane voltage updates, multiple excitatory neurons generate output spikes at a time.”), the number of which is the threshold value or more, from among the input subjects (NPL HOYOUNG abstract “For the energy efficient inferencing operations, we propose an accumulation based computing scheme, where the number of input spikes for each input axon is accumulated without instant membrane updates until the pre-defined number of spikes are reached.”); and determine whether to perform the second learning with respect to the first intermediate neuron based on the number of input subjects thus determined (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”). The same motivation for claim 4 equally applies to claim 14. Regarding claim 16, KIM in view of LEE teach all of the limitations of claim 11 as shown above.
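The accumulation-based counting and index storage that the rejection maps from HOYOUNG onto claim 14 can be sketched as two small helpers. This is an illustrative reading only; the names and data are hypothetical, not HOYOUNG's actual implementation.

```python
# Illustrative sketch (hypothetical names): per-axon spike accumulation and
# threshold-based index storage, loosely in the spirit of HOYOUNG's scheme.

def accumulate_input_spikes(spike_events, num_axons):
    """Count input spikes per axon without per-spike membrane updates."""
    counters = [0] * num_axons
    for axon in spike_events:
        counters[axon] += 1
    return counters

def stored_indices(output_spike_counts, threshold):
    """Indices of first-layer neurons whose output spike count is at or
    above the threshold -- the 'stored index' of the claim language."""
    return [i for i, c in enumerate(output_spike_counts) if c >= threshold]

events = [0, 2, 2, 1, 2, 0]                        # axon ids of incoming spikes
print(accumulate_input_spikes(events, 3))          # [2, 1, 3]
print(stored_indices([4, 9, 2, 7], threshold=7))   # [1, 3]
```

Under the claim 14 reading, the count of input subjects driving a stored neuron would then feed the decision of whether to run second learning for it.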
KIM also teaches: The neural processor of claim 11, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): determine input subjects, whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”); the number of input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”). LEE teaches: based on the intermediate neurons determined to perform the second learning (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. 
For example, the training apparatus may determine a parameter of neurons included in the neural network, to train the neural network.); and determine the number of intermediate neurons included in the second intermediate neuron layer based on the number of input subjects whose learning is not completed in the first learning (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.). KIM and LEE fail to teach: determine input subjects, whose learning is not completed in the first learning from among the input subjects, based on the intermediate neurons determined to perform the second learning. However, HOYOUNG teaches: determine input subjects, whose learning is not completed in the first learning from among the input subjects, based on the intermediate neurons determined to perform the second learning (NPL HOYOUNG page 6, column 1, lines 13-16 “In the proposed accumulation based computing approach, the counter that was used in training mode can also be used to count the number of input spikes.”). The same motivation for claim 6 equally applies to claim 16. Regarding claim 17, KIM in view of LEE in view of HOYOUNG teach all of the limitations of claim 16 as shown above.
KIM also teaches: The neural processor of claim 16 (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.), wherein the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”). LEE teaches: wherein the number of intermediate neurons included in the second intermediate neuron layer is equal to or more than the number of input subjects (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.) The same motivation for claim 7 equally applies to claim 17. Regarding claim 18, KIM in view of LEE in view of HOYOUNG teach all of the limitations of claim 16 as shown above. 
KIM also teaches: The neural processor of claim 16, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): partially initialize synaptic weight values of the intermediate neurons of the first synaptic weight layer, which are determined to perform the second learning (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]) with respect to the input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”); and perform the second learning (KIM, P[0009] storing second weight values resulting from a second learning iteration in the memory.) 
with respect to the partially initialized first synaptic weight layer, the input subjects whose learning is not completed in the first learning (KIM P[0062] “In an embodiment, the transfer learning controller 140 may determine a layer in which learning [first learning] will be interrupted [Interruption means that the learning has not been completed] in a subsequent learning iteration based on a result of comparing the distribution of weight values resulting from a current learning iteration with the distribution of weight values resulting from a previous learning iteration.”). LEE teaches: partially initialize synaptic weight values of the intermediate neurons of the first synaptic weight layer (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of the neurons.); and perform the second learning with respect to the partially initialized first synaptic weight layer (LEE page 15, column 4, lines 6-9 The training may include initializing at least one among a membrane potential threshold and synaptic weights of neurons included in a layer of the neural network based on a number of synapses corresponding to each of the neurons.), and the intermediate neurons of the second intermediate neuron layer (LEE page 19, column 12, lines 37-42 In operation 730, the training apparatus trains the neural network based on the output value and a desired value corresponding to the training data. For example, the training apparatus may determine a parameter [number] of neurons included in the neural network, to train the neural network.). The same motivation for claim 8 equally applies to claim 18. Claims 5 and 15 are rejected under 35 U.S.C.
103 as being unpatentable over KIM et al (US 20200356853) (“KIM”), in view of LEE et al (US 10515305) (“LEE”), and further in view of NPL HOYOUNG (Spike Counts Based Low Complexity NN architecture With Binary Synapse) (“HOYOUNG”), and further in view of KAPLAN et al (US 9117175) ("KAPLAN"). Regarding claim 5, KIM in view of LEE, in view of HOYOUNG teach all of the limitations of claim 4 as shown above. KIM also teaches: The method of claim 4, wherein the determining whether to perform the second learning in response to the determined number of input subjects being one (KIM P[0038] “The neural network processor 100 may perform a subsequent learning (second learning) iteration only in layers excluding the layer for which interruption of the learning is determined.”): allowing the second learning not to be performed with respect to the first intermediate neuron in response to the determined number of input subjects being two or more (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]), and allowing the second learning to be performed with respect to the first intermediate neuron (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a
first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]), KIM and LEE fail to teach: allowing the second learning not to be performed with respect to the first intermediate neuron, in response to the determined number of input subjects being one; and allowing the second learning to be performed with respect to the first intermediate neuron, in response to the determined number of input subjects being two or more. However, KAPLAN teaches: allowing the second learning not to be performed with respect to the first intermediate neuron, in response to the determined number of input subjects being one; and allowing the second learning to be performed with respect to the first intermediate neuron, in response to the determined number of input subjects being two or more (KAPLAN Page 97, column 44, lines 29-38 "Ganglite 5341 may be a learning neurite. Ganglite 5311 may be a neurite with a threshold frequency for learning, or may be a high-rate learning association ganglite (see FIG. 30a). 5312, and 5342 may be neurites with threshold frequencies for learning on one input, or may be high-rate learning association ganglites with learning on one input only (see FIG. 30c). Each of these ganglites strengthens synapses if both [“2”] its inputs fire rapidly enough to stimulate its learning process [Learning is performed when there are “2” inputs], but may not learn if only a single [“1”] input is firing [Learning is not performed when there is “1” input].") KIM, LEE, and KAPLAN are considered analogous because they relate to machine learning methods.
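The gating rule the rejection maps onto KAPLAN's ganglites (no learning for a single input, learning for two or more) reduces to one comparison per neuron. The sketch below is hypothetical, not KAPLAN's circuit; the names and counts are invented for illustration.

```python
# Hypothetical sketch of the claim 5/15 gating rule: second learning is
# skipped for a neuron driven by one input subject and performed for two
# or more (illustrative only; not KAPLAN's implementation).

def gate_second_learning(inputs_per_neuron):
    """Map each first-intermediate neuron to a perform/skip decision
    based on its determined number of input subjects."""
    return {n: count >= 2 for n, count in inputs_per_neuron.items()}

decisions = gate_second_learning({"n0": 1, "n1": 2, "n2": 5})
print(decisions)   # {'n0': False, 'n1': True, 'n2': True}
```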
It would have been obvious to a person skilled in the art before the effective filing date of the claimed invention to combine KIM, LEE, and HOYOUNG with the number of inputs determined to perform or not perform training from KAPLAN. Doing so would reduce power consumption. (KAPLAN: page 82, column 13, lines 35-38) Regarding claim 15, KIM in view of LEE, in view of HOYOUNG teach all of the limitations of claim 14 as shown above. KIM also teaches: The neural processor of claim 14, wherein the neural processor is further configured to (KIM P[0002] The present disclosure relates to a neural network system, a learning method thereof, and a transfer learning method of a neural network processor.): allow the second learning not to be performed with respect to the first intermediate neuron when the determined number of input subjects is one (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]) and allow the second learning to be performed with respect to the first intermediate neuron when the determined number of input subjects is two or more (KIM P[0009] “The second learning iteration is subsequent to the first learning iteration.” P[0007] “The determination of the at least one layer in which the learning is interrupted is based on a result of comparing, for each of the plurality of layers, a distribution of first weight values resulting from a first learning iteration with a
distribution of second weight values resulting from a second learning iteration subsequent to the first learning iteration.” P[0009] “The second learning iteration is subsequent to the first learning iteration.” [Examiner notes: The use of a layer implies the neurons have been determined to perform learning.]). KIM and LEE fail to teach: allow the second learning not to be performed with respect to the first intermediate neuron when the determined number of input subjects is one; and allow the second learning to be performed with respect to the first intermediate neuron when the determined number of input subjects is two or more. However, KAPLAN teaches: allow the second learning not to be performed with respect to the first intermediate neuron when the determined number of input subjects is one; and allow the second learning to be performed with respect to the first intermediate neuron when the determined number of input subjects is two or more. (KAPLAN Page 97, column 44, lines 29-38 "Ganglite 5341 may be a learning neurite. Ganglite 5311 may be a neurite with a threshold frequency for learning, or may be a high-rate learning association ganglite (see FIG. 30a). 5312, and 5342 may be neurites with threshold frequencies for learning on one input, or may be high-rate learning association ganglites with learning on one input only (see FIG. 30c). Each of these ganglites strengthens synapses if both [“2”] its inputs fire rapidly enough to stimulate its learning process [Learning is performed when there are “2” inputs], but may not learn if only a single [“1”] input is firing [Learning is not performed when there is “1” input]."). The same motivation from claim 5 equally applies to claim 15. Response to Arguments Applicant's arguments filed 8/28/2025 have been fully considered but they are not persuasive. Applicant states that the claims have been amended and therefore the 35 USC 101 abstract idea needs to be withdrawn.
As indicated above, the claims are still directed to an abstract idea without significantly more. Additionally, Applicant argues that Kim does not teach the claims as amended, and in particular that “while Lee discloses that neurons may output spike signals, there is no disclosure directed to performing a learning that is based on intermediate neurons determined from a number of spikes of spike output signals of intermediate neurons of an intermediate neuron layer, as claimed. With respect to such features, Lee merely discloses that neurons may output spike signals, but does not consider using a number of spikes of the spike signals when determining the intermediate neurons to use when performing a learning for a neural network, as claimed.” (pages 9-10). The examiner disagrees, because Lee teaches that the neurons of a current layer output spikes or spike signals based on synaptic signals received from the previous layer. The spikes are in turn input into the next layer, where neurons are determined or activated based on the spikes (col. 12, lines 22-48; col. 7, lines 16-67). Claims 2-10 depend on claim 1 and are rejected at least for the same reasons. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CESAR PAULA whose telephone number is (571)272-4128. The examiner can normally be reached Monday - Friday, 6:30 am - 4:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Wiley, can be reached at (571)272-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Dec 14, 2021
Application Filed
May 12, 2025
Non-Final Rejection — §101, §103
Aug 28, 2025
Response Filed
Mar 04, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596934
PREDICTION-MODEL-BUILDING METHOD, STATE PREDICTION METHOD AND DEVICES THEREOF
2y 5m to grant · Granted Apr 07, 2026
Patent 12585982
MODEL MANAGEMENT USING CONTAINERS
2y 5m to grant · Granted Mar 24, 2026
Patent 12585859
SYSTEM AND METHOD FOR IMPROVING THE CLARITY OF OVERLAPPING OBJECTS
2y 5m to grant · Granted Mar 24, 2026
Patent 12579439
Kernelized Classifiers in Neural Networks
2y 5m to grant · Granted Mar 17, 2026
Patent 12554971
METHOD OF PREDICTING CHARACTERISTICS OF SEMICONDUCTOR DEVICE AND COMPUTING DEVICE PERFORMING THE SAME
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
32%
Grant Probability
41%
With Interview (+8.3%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
