DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-4, 6, and 8-11 are presented for examination.
Response to Amendment
Applicant’s amendment appears to have obviated the objections to the specification, drawings, and claims, as well as the interpretation of the claims under 35 USC § 112(f) and the associated rejections under 35 USC §§ 112(a)-(b). Therefore, those objections, that interpretation, and those rejections are withdrawn.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-4, 6, and 8-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 1
Step 1: The claim recites a system comprising a memory and a processor; therefore, the claim is directed to the statutory category of machines.
Step 2A Prong 1: The claim recites, inter alia, “train[ing] a time based spiking neural network to be performed by supervised learning using a cost function, the cost function including a regularization term relating to a firing time of a neuron in the spiking neural network.” This limitation is directed to a mathematical concept in light of the extensive discussion of the mathematical operations involved in computing the cost function in the specification.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the training is performed using “at least one memory configured to store instructions; and at least one processor configured to execute the instructions”. These limitations are mere instructions to apply the exception using a generic computer programmed with generic classes of computer software. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The analysis at this step is identical to that of Step 2A, Prong 2. Considered as an ordered combination, the claim is directed to a mathematical method of performing neural network learning, and nothing in the claim provides significantly more. As such, the claim is not patent eligible.
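For reference, a cost function of the general type characterized in the Step 2A, Prong 1 analysis above may be sketched as follows. The notation is illustrative only and is not drawn from the claims or the specification:

```latex
% Illustrative only: a generic supervised-learning cost function with a
% firing-time regularization term (symbols assumed, not taken from the claims).
% E is a loss over network outputs y(w) and labels, \lambda a weighting
% coefficient, and t_i^{\mathrm{fire}} the firing time of neuron i.
L(w) \;=\; E\bigl(y(w),\, y^{\mathrm{label}}\bigr)
        \;+\; \lambda \sum_{i} R\bigl(t_i^{\mathrm{fire}}\bigr)
```

Under this sketch, the second summand plays the role of the claimed "regularization term relating to a firing time of a neuron."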
Claim 2
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “train[ing] the spiking neural network using the cost function that includes: a loss function that uses a negative log-likelihood of a Softmax function; and the regularization term, the negative log-likelihood of the Softmax function being obtained by dividing a time index value obtained by inputting a value obtained by inputting time information of an output spike that has been multiplied by a negative coefficient into an exponential function, by a sum of the time index values of all neurons in an output layer.” These limitations recite the mathematical concept of calculating a negative log-likelihood of a Softmax function in the claimed manner.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
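The computation recited in claim 2, as characterized in the Step 2A, Prong 1 analysis above, may be restated compactly as follows. The notation is assumed for illustration: the exponential term corresponds to the claimed "time index value," and the magnitude of the negative coefficient is written as β:

```latex
% Illustrative restatement of the claimed negative log-likelihood of a
% Softmax over spike times (symbols assumed): t_i is the time information
% of the output spike of neuron i, \beta > 0 the magnitude of the negative
% coefficient, and c the index of the correct output neuron.
p_i \;=\; \frac{e^{-\beta t_i}}{\displaystyle\sum_{j \in \mathrm{output\ layer}} e^{-\beta t_j}},
\qquad
\mathcal{L} \;=\; -\log p_c
```

Each numerator and the denominator sum are "time index values" in the claimed sense; the division yields the Softmax value whose negative logarithm is the loss.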
Claim 3
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “train[ing] the spiking neural network using the regularization term based on a difference between time information of an output spike and a reference time, the reference time being a constant.” This limitation is directed to the mathematical concept of performing learning using a regularization term based on calculating a difference between an output spike and a reference time.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Claim 4
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “train[ing] the spiking neural network using the regularization term based on a square error of the difference.” This limitation recites the mathematical concept of calculating a squared error of a difference between an output spike time and a reference time.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the learning is to be performed using “at least one processor [that] is configured to execute the instructions”. However, as noted above, this limitation is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
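The regularization recited in claims 3-4, as characterized above, may be restated as follows. The notation is assumed for illustration only:

```latex
% Illustrative form of the regularization of claims 3-4 (symbols assumed):
% t_{out} is the time information of the output spike and t_{ref} the
% constant reference time; claim 4 further specifies the square of the
% difference recited in claim 3.
R \;=\; \bigl(t_{\mathrm{out}} - t_{\mathrm{ref}}\bigr)^{2}
```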
Claim 6
Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis at this step is identical to that of claim 1, with the exception that claim 6 does not recite a memory or a processor.
Step 2B: The claim does not contain significantly more than the judicial exception. The analysis at this step is identical to that of claim 1, with the exception that claim 6 does not recite a memory or a processor.
Claim 8
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, that “training the time-based spiking neural network comprises adjusting at least one of the at least one connection strength.” This limitation recites a mathematical concept of training the network by adjusting connection strengths.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the time-based spiking neural network comprises two or more neurons and at least one connection strength between at least one first neuron and at least one second neuron of the two or more neurons”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer programmed with generic classes of computer algorithm. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the time-based spiking neural network comprises two or more neurons and at least one connection strength between at least one first neuron and at least one second neuron of the two or more neurons”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer programmed with generic classes of computer algorithm. MPEP § 2106.05(f).
Claim 9
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 8.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the time-based spiking neural network is implemented by at least one complementary metal-oxide-semiconductor.” However, this amounts to a mere instruction to apply the judicial exception using a generic computer containing a generic class of semiconductor technology. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the time-based spiking neural network is implemented by at least one complementary metal-oxide-semiconductor.” However, this amounts to a mere instruction to apply the judicial exception using a generic computer containing a generic class of semiconductor technology. MPEP § 2106.05(f).
Claims 10-11
Step 1: A process, as above.
Step 2A Prong 1: The claims recite the same judicial exceptions as in claims 8-9, respectively.
Step 2A Prong 2: These judicial exceptions are not integrated into a practical application. The analysis at this step mirrors that of claims 8-9, respectively.
Step 2B: The claims do not contain significantly more than the judicial exceptions. The analysis at this step mirrors that of claims 8-9, respectively.
Claim Rejections - 35 USC § 103
Claims 1, 3-4, 6, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al., “Training Deep Spiking Neural Networks Using Backpropagation,” in 10 Frontiers in Neuroscience 508 (2016) (“Lee”) in view of Hunzinger et al. (US 20130204819) (“Hunzinger”).
Regarding claim 1, Lee discloses “[a] spiking neural network system comprising:
at least one memory configured to store instructions (Lee sec. 2.1.2 indicates that a winner-take-all circuit has all connections the same strength, which reduces memory and computational costs [suggesting that there is a memory to store the instructions that execute the method]); and
at least one processor configured to execute the instructions (Lee, penultimate paragraph of article, discloses that the method is executed on hardware such as neuromorphic processors and ARM processors) to:
train a time based spiking neural network to be performed by supervised learning using a cost function, the cost function including a regularization term relating to … a neuron in the spiking neural network (novel supervised learning method for spiking neural networks that closely follows the backpropagation algorithm for deep ANNs but is used to train general forms of deep SNNs directly from spike signals is disclosed – Lee, last paragraph before sec. 1.1; objective function [cost function] for each training sample is given by one half of the square of the difference between a label vector and an output vector plus an exponential regularization term for neuron i in layer l – id. at secs. 2.4, first paragraph, and 2.3.1, first paragraph).”
Lee appears not to disclose explicitly the further limitations of the claim. However, Hunzinger discloses a “term relating to a firing time of a neuron (learning using a spiking neural network may include means for determining an actual time difference [term] between the emission of an output spike from a neuron model and a reference time – Hunzinger, paragraph 15) ….”
Hunzinger and the instant application both relate to spiking neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to train the network using neuron firing times, as disclosed by Hunzinger, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to encode information in the differences between firing times, thereby increasing the biological consistency of the network. See Hunzinger, paragraph 85.
Claim 6 is a method claim corresponding to system claim 1 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 3, the rejection of claim 1 is incorporated. Lee further discloses that “the at least one processor is configured to execute the instructions to train the spiking neural network using the regularization term based on a difference between … information of an output spike and a reference …, the reference … being a constant (objective function for each training sample is given by one half of the square of the difference between an output vector defined in terms of the number of output spikes generated by the i-th neuron of the output layer and a label vector [constant reference] – Lee, sec. 2.4, first paragraph).”
Lee appears not to disclose explicitly the further limitations of the claim. However, Hunzinger discloses a “difference between time information of an output spike and a reference time (learning using a spiking neural network may include means for determining an actual time difference between the emission of an output spike from a neuron model and a reference time – Hunzinger, paragraph 15) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to perform learning based on the difference between an output spike time and a reference time, as disclosed by Hunzinger, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to encode information in the differences between firing times, thereby increasing the biological consistency of the network. See Hunzinger, paragraph 85.
Regarding claim 4, Lee, as modified by Hunzinger, discloses that “the at least one processor is configured to execute the instructions to train the spiking neural network using the regularization term based on a square error of the difference (objective function for each training sample is given by one half of the square of the difference between an output vector defined in terms of the number of output spikes generated by the i-th neuron of the output layer and a label vector [constant reference] – Lee, sec. 2.4, first paragraph).”
Regarding claim 8, Lee, as modified by Hunzinger, discloses that “the time-based spiking neural network comprises two or more neurons and at least one connection strength between at least one first neuron and at least one second neuron of the two or more neurons (membrane potential of ith active neuron in a circuit can be written in terms of the strength of the lateral inhibition from the jth active neuron to the ith active neuron – Lee, sec. 2.2.2, second paragraph); and …
training the time-based spiking neural network comprises adjusting at least one of the at least one connection strength (strength of lateral inhibitory connections is varied [adjusted] in circuits to find their optimum value – Lee, sec. 3.1, last paragraph).”
Claim 10 is a method claim corresponding to system claim 8 and is rejected for the same reasons as given in the rejection of that claim.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Stromatius et al., “An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data,” in 11 Frontiers in Neuroscience 350 (2017) (“Stromatius”) and further in view of Hunzinger.
Regarding claim 2, the rejection of claim 1 is incorporated. Lee further discloses that “the at least one processor is configured to execute the instructions to train the spiking neural network using the cost function that includes: … the regularization term (novel supervised learning method for spiking neural networks that closely follows the backpropagation algorithm for deep ANNs but is used to train general forms of deep SNNs directly from spike signals is disclosed – Lee, last paragraph before sec. 1.1; objective function [cost function] for each training sample is given by one half of the square of the difference between a label vector and an output vector plus an exponential regularization term for neuron i in layer l – id. at secs. 2.4, first paragraph, and 2.3.1, first paragraph; see also penultimate paragraph of article (disclosing that the method is executed on hardware [learning processing means])) ….”
Lee appears not to disclose explicitly the further limitations of the claim. However, Stromatius discloses that “the learning processing means causes the learning to be performed using the cost function that includes: a loss function that uses a negative log-likelihood of a Softmax function; … the negative log-likelihood of the Softmax function being obtained by dividing a[n] … index value obtained by inputting a value obtained by inputting … information of an output spike that has been multiplied by a negative coefficient into an exponential function, by a sum of the … index values of all neurons in an output layer (cost function to be minimized by a mini-batch stochastic gradient descent algorithm is a negative log-likelihood loss, which is obtained by multiplying a coefficient -1/D by the sums of logs of softmax activation functions Yj that are obtained by dividing an exponential function of a weight matrix representing class K and a given input vector xi provided by converting spike counts [information of output spikes] of a flattening layer [this exponential function multiplied by -1/D corresponds to the claimed index value and the flattening layer is the output layer] by the sum of the exponential functions for all input vectors provided by the flattening layer – Stromatius, sec. 2.5.2, first three paragraphs, particularly equations (4)-(5)).”
Stromatius and the instant application both relate to spiking neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to use a negative log-likelihood of a softmax function as the loss function, as disclosed by Stromatius, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the accuracy of the network by ensuring that it can be effectively trained using well-known cost functions that are simple to implement. See Stromatius, sec. 2.5.2.
Neither Lee nor Stromatius appears to disclose explicitly the further limitations of the claim. However, Hunzinger discloses a “time index value [and] … time information (learning using a spiking neural network may include means for determining an actual time difference between the emission of an output spike from a neuron model [time index value] and a reference time [time information] – Hunzinger, paragraph 15) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Lee and Stromatius to employ time information in the learning, as disclosed by Hunzinger, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to encode information in the differences between firing times, thereby increasing the biological consistency of the network. See Hunzinger, paragraph 85.
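The Stromatius cost function mapped above (sec. 2.5.2, equations (4)-(5)) may be restated as follows. The notation is assumed for illustration and follows the examiner's characterization rather than the reference verbatim:

```latex
% Illustrative restatement of the Stromatius mini-batch negative
% log-likelihood as characterized above (symbols assumed): D is the
% mini-batch size, x_i an input vector derived from the spike counts of
% the flattening layer, y_i its class label, and w_k the weight vector
% for class k.
Y_k(x) \;=\; \frac{e^{\,w_k \cdot x}}{\displaystyle\sum_{m} e^{\,w_m \cdot x}},
\qquad
\ell \;=\; -\frac{1}{D} \sum_{i=1}^{D} \log Y_{y_i}(x_i)
```

In the mapping above, the exponential terms correspond to the claimed index values and the flattening layer to the claimed output layer.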
Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Hunzinger and further in view of Ambrogio et al. (US 20200372335) (“Ambrogio”).
Regarding claim 9, neither Lee nor Hunzinger appears to disclose explicitly the further limitations of the claim. However, Ambrogio discloses that “the time-based spiking neural network is implemented by at least one complementary metal-oxide-semiconductor (spike-based computation may be provided using CMOS electronic neurons interacting with each other through nanoscale memory synapses – Ambrogio, paragraph 45).”
Ambrogio and the instant application both relate to spiking neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Lee and Hunzinger to implement the SNN using a CMOS, as disclosed by Ambrogio, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to be implemented using widely available technology, thereby eliminating the need to develop specialized hardware. See Ambrogio, paragraph 45.
Claim 11 is a method claim corresponding to system claim 9 and is rejected for the same reasons as given in the rejection of that claim.
Response to Arguments
Applicant's arguments filed September 4, 2025 (“Remarks”) have been fully considered but, except insofar as they are rendered moot by the introduction of a new ground of rejection, they are not persuasive.
Applicant first argues that the claims as amended are now eligible because they integrate any judicial exception recited into a practical application, namely providing a spiking neural network with increased learning stability, reduced power consumption, and reduced processing time through the use of regularization. Remarks at 10-12. However, even assuming arguendo that the specification discloses such benefits, which Examiner does not concede, the purported benefit is not to technology as such, but to the abstract idea of training through regularization. As noted in the rejection itself, the training is itself a mathematical concept (compare claim 2 of Example 47) and cannot be considered to provide an inventive concept. MPEP § 2106.05(I).
Applicant then argues that the claims as amended are distinguishable over the combination of Lee and Hunzinger because Lee allegedly teaches away from using a regularization term relating to a firing time of a neuron. Specifically, Applicant appears to argue that modifying Lee such that the regularization term is related to the firing time would render Lee unsatisfactory for its intended purpose by disturbing the conditions related to the initialization of the weights. Remarks at 12-15.
However, Applicant provides no evidence that merely modifying Lee so that the regularization term is derived based on a firing time of the neuron would render Lee unsatisfactory for its intended purpose. Indeed, Lee is silent as to the origin of the regularization term other than to say that it represents a weight decay during training, and merely specifying that the regularization term is derived based on neuron firing times would not change the regularization term itself, but merely its origin.
Moreover, Applicant appears to be suggesting that the test for obviousness is whether the features taught by the secondary reference can be physically incorporated into the primary reference. This, however, is not the test. The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). Here, as shown in the rejection itself, Hunzinger suggests that incorporating a firing time of a neuron into the training of the network would increase its biological plausibility. Applicant does not contest this reasoning.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571)272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN C VAUGHN/ Primary Examiner, Art Unit 2125