Prosecution Insights
Last updated: April 18, 2026
Application No. 18/449,188

METHOD AND SYSTEM OF TRAINING SPIKING NEURAL NETWORK BASED CONVERSION AWARE TRAINING

Status: Non-Final OA (§101, §102, §112)
Filed: Aug 14, 2023
Examiner: COULSON, JESSE CHEN
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Korea University Research And Business Foundation
OA Round: 1 (Non-Final)

Grant Probability: 25% (At Risk); 99% with interview
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 25% (grants only 25% of cases; 1 granted / 4 resolved; -30.0% vs TC avg)
Interview Lift: +100.0% among resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 33 applications currently pending
Career History: 37 total applications across all art units
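The derived figures above can be reproduced with simple arithmetic. The sketch below is an assumed reconstruction of the dashboard's methodology (the page does not document its formulas); in particular, reading "-30.0% vs TC avg" as a percentage-point delta is an assumption.

```python
# Illustrative reconstruction of the examiner statistics shown above.
# The formulas are assumptions; the dashboard does not state its methodology.

granted, resolved = 1, 4               # "1 granted / 4 resolved"
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")  # matches the 25% shown

# Reading "-30.0% vs TC avg" as a percentage-point delta implies a
# Tech Center 2100 average allow rate of roughly 55%:
delta_vs_tc = -0.30
implied_tc_avg = career_allow_rate - delta_vs_tc
print(f"Implied TC 2100 average: {implied_tc_avg:.0%}")
```

Under that reading, the statute-specific deltas below imply the same TC-average baseline shifted per rejection type.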

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 4 resolved cases.

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed on 8/14/2023. Claims 1-12 are pending and have been examined.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 8/14/2023 and 1/21/2026 are in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. They have been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Objections

Claims 1 and 7-12 are objected to because of the following informalities:

Claim 1: “A method of training spiking neural network based a conversion aware training” should be “A method of training a spiking neural network based on a conversion aware training”.
Claim 7: “A spiking neural network training based a conversion aware training comprising” should be “A spiking neural network training based on a conversion aware training comprising”.
Claims 8-12: “The spiking neural network training system based the conversion aware training” should be “The spiking neural network training system based on the conversion aware training”.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: an ANN generator, a conversion aware training unit, and an SNN generator in claim 7. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claim 7: The claim limitations “an ANN generator configured to generate an analog artificial neural network (ANN) model and to input variable data”, “a conversion aware training unit configured to simulate a spiking neural network (SNN) model”, and “an SNN generator configured to generate the SNN model by correcting parameters and weights of layers based on a result of the simulation” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. There is not sufficient structure for the ANN generator, the conversion aware training unit, or the SNN generator: the written description describes what the generating and simulating are based on but gives no specifics on how they are implemented. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Regarding Claims 8-12: Claims 8-12 are rejected as being dependent on a rejected base claim without curing any of the deficiencies.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: The claim recites a method, which is one of the four statutory categories of patentable subject matter.

Step 2A, prong 1: The claim recites an abstract idea. Specifically, the limitation “an ANN generation operation of generating an analog artificial neural network (ANN) model and inputting variable data” amounts to a mental process, as it can be performed in a human mind. The claim recites an additional abstract idea, “a conversion aware training operation of simulating a spiking neural network (SNN) model by using one or more activation functions with respect to the analog ANN model”, which is a mathematical concept. The claim recites a further abstract idea, “an SNN generation operation of generating the SNN model by correcting parameters and weights of layers based on a result of the simulation”, which is also a mathematical concept.
Step 2A, prong 2: There are no additional elements that integrate the abstract idea into a practical application or amount to significantly more.

Step 2B: There are no additional elements that integrate the abstract idea into a practical application or amount to significantly more. Therefore, the claim is ineligible.

Regarding Claim 2: Claim 2 incorporates the rejection of Claim 1. The claim further recites a description of the abstract idea of the conversion aware training operation and is ineligible for the same reasons as set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 3: Claim 3 incorporates the rejection of Claim 1. The claim further recites a description of the abstract idea of the activation function and is ineligible for the same reasons as set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 4: Claim 4, which incorporates the rejection of Claim 1, recites a further abstract idea, “wherein the activation function includes a TTFS function as in the following equation…”, which is a mathematical operation. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 5: Claim 5 incorporates the rejection of Claim 1. The claim further recites a description of the abstract idea of the conversion aware training operation and is ineligible for the same reasons as set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 6: Claim 6 incorporates the rejection of Claim 1.
The claim further recites a description of the abstract idea of the SNN generation operation and is ineligible for the same reasons as set forth for Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 7:

Step 1: The claim recites a system, which is one of the four statutory categories of patentable subject matter.

Step 2A, prong 1: The claim recites an abstract idea. Specifically, the limitation “generate an analog artificial neural network (ANN) model and to input variable data” amounts to a mental process, as it can be performed in a human mind. The claim recites an additional abstract idea, “simulate a spiking neural network (SNN) model by using one or more activation functions with respect to the analog ANN model”, which is a mathematical concept. The claim recites a further abstract idea, “generate the SNN model by correcting parameters and weights of layers based on a result of the simulation”, which is also a mathematical concept.

Step 2A, prong 2: The additional elements of an ANN generator, a conversion aware training unit, and an SNN generator are each generic computer components amounting to mere instructions to apply the abstract idea, and therefore do not integrate the abstract idea into a practical application (MPEP 2106.05(f)).

Step 2B: The additional elements of an ANN generator, a conversion aware training unit, and an SNN generator are each generic computer components amounting to mere instructions to apply the abstract idea, and therefore do not amount to significantly more (MPEP 2106.05(f)). Therefore, the claim is ineligible.

Regarding Claim 8: Claim 8 incorporates the rejection of Claim 7. The claim further recites a description of the abstract idea of simulating a spiking neural network model by using one or more activation functions and is ineligible for the same reasons as set forth for Claim 7. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 9: Claim 9 incorporates the rejection of Claim 7. The claim further recites a description of the abstract idea of the activation function and is ineligible for the same reasons as set forth for Claim 7. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 10: Claim 10, which incorporates the rejection of Claim 7, recites a further abstract idea, “wherein the activation function includes a TTFS function as in the following equation…”, which is a mathematical operation. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 11: Claim 11 incorporates the rejection of Claim 7. The claim further recites a description of the abstract idea of simulating a spiking neural network model by using one or more activation functions and is ineligible for the same reasons as set forth for Claim 7.
The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 12: Claim 12 incorporates the rejection of Claim 7. The claim further recites a description of the abstract idea of generating the SNN model and is ineligible for the same reasons as set forth for Claim 7. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rueckauer et al., “Conversion of analog to spiking neural networks using sparse temporal coding”, hereinafter “Rueckauer”.

Regarding Claim 1, Rueckauer teaches:

A method of training spiking neural network based a conversion aware training (p. 1, col. 2, paragraph 2, “we propose in this paper an ANN-to-SNN conversion mechanism, where the analog activation values of the ANN neurons are represented by the inverse time-to-first-spike (TTFS) in the SNN neurons”), the method comprising:

an ANN generation operation of generating an analog artificial neural network (ANN) model and inputting variable data (MNIST data is input into the Lenet-5 ANN; p. 3, col. 2, paragraph 1, “We tested the three versions of the time-to-first-spike approach (“TTFS base”, “TTFS dyn thresh”, “TTFS clamped”) on the MNIST handwritten digit recognition data set and using the classic Lenet-5 [24] model”);

a conversion aware training operation of simulating a spiking neural network (SNN) model by using one or more activation functions with respect to the analog ANN model (p. 2, col. 1, paragraph 3, “In this work, we require only the first spike of each neuron and can prevent additional spikes e.g., by making the refractory period very long… In a simulation with time step dt, this explicit equation translates to a rate of increase… Successful ANN-to-SNN conversion implies that the SNN spike rate ri is approximately equal to the corresponding activation ai of the original ANN. Then the output spike time in the SNN is inversely proportional to the ANN activation: t(0)i = 1/ai”; one or more activation functions of ai and t(0)i); and

an SNN generation operation of generating the SNN model by correcting parameters and weights of layers based on a result of the simulation (weights and parameters of layers are refined under new activation constraints that adjust (correct) the parameters and weights so that, when simulated as an SNN, long-latency spikes are avoided; p. 3, col. 1, paragraph 3, “The original ANN is refined with a modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero: reluclamp(x) = 0 if x ≤ β, x else. (7) This way, the network learns to perform well without relying on low activations, and neurons in the converted SNN do not have to wait for long-latency spikes”).

Regarding Claim 2, Rueckauer teaches the method of Claim 1. Rueckauer further teaches: wherein the conversion aware training operation includes using the activation functions with respect to one or more layers of the analog ANN model (the ANN model has a plurality of ReLUs in its layers; p. 2, col. 2, note 1, “We assume the ANN uses the common rectifying linear unit activation function relu(x) = max(0, x).”; p. 3, col. 1, paragraph 3, “The original ANN is refined with a modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero”).

Regarding Claim 3, Rueckauer teaches the method of Claim 1. Rueckauer further teaches: wherein the activation function includes at least one or more of a ReLU function, a Clip function (reluclamp comprises a ReLU and a clip function; p. 3, col. 1, paragraph 3, “The original ANN is refined with a modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero”), and a Time to First Spike (TTFS) function (p. 2, col. 2, paragraph 4, “Then the output spike time in the SNN is inversely proportional to the ANN activation: t(0)i = 1/ai”).

Regarding Claim 5, Rueckauer teaches the method of Claim 1. Rueckauer further teaches: wherein the conversion aware training operation includes using the activation functions with respect to the analog ANN model in order of a ReLU function, a Clip function (reluclamp comprises a ReLU and a clip function; p. 3, col. 1, paragraph 3, “modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero”), and a TTFS function (p. 2, col. 2, paragraph 4, “Then the output spike time in the SNN is inversely proportional to the ANN activation: t(0)i = 1/ai”).

Regarding Claim 6, Rueckauer teaches the method of Claim 1. Rueckauer further teaches: wherein the SNN generation operation includes generating the SNN model by converting the parameters and the weights with respect to layers which use at least one of the activation functions (p. 3, col. 1, paragraph 3, “The original ANN is refined with a modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero: reluclamp(x) = 0 if x ≤ β, x else. (7) This way, the network learns to perform well without relying on low activations, and neurons in the converted SNN do not have to wait for long-latency spikes”).

Regarding Claim 7, Rueckauer teaches:

A spiking neural network training system based a conversion aware training comprising (p. 1, col. 2, paragraph 2, “we propose in this paper an ANN-to-SNN conversion mechanism, where the analog activation values of the ANN neurons are represented by the inverse time-to-first-spike (TTFS) in the SNN neurons”):

an ANN generator configured to generate an analog artificial neural network (ANN) model and inputting variable data (MNIST data is input into the Lenet-5 ANN; p. 3, col. 2, paragraph 1, “We tested the three versions of the time-to-first-spike approach (“TTFS base”, “TTFS dyn thresh”, “TTFS clamped”) on the MNIST handwritten digit recognition data set and using the classic Lenet-5 [24] model”);

a conversion aware training unit configured to simulate a spiking neural network (SNN) model by using one or more activation functions with respect to the analog ANN model (p. 2, col. 1, paragraph 3, “In this work, we require only the first spike of each neuron and can prevent additional spikes e.g., by making the refractory period very long… In a simulation with time step dt, this explicit equation translates to a rate of increase… Successful ANN-to-SNN conversion implies that the SNN spike rate ri is approximately equal to the corresponding activation ai of the original ANN. Then the output spike time in the SNN is inversely proportional to the ANN activation: t(0)i = 1/ai”; one or more activation functions of ai and t(0)i); and

an SNN generator configured to generate the SNN model by correcting parameters and weights of layers based on a result of the simulation (weights and parameters of layers are refined under new activation constraints that adjust (correct) the parameters and weights so that, when simulated as an SNN, long-latency spikes are avoided; p. 3, col. 1, paragraph 3, “The original ANN is refined with a modified ReLU activation function, where the activation values below some threshold β > 0 are clamped to zero: reluclamp(x) = 0 if x ≤ β, x else. (7) This way, the network learns to perform well without relying on low activations, and neurons in the converted SNN do not have to wait for long-latency spikes”).

Regarding Claim 8, the rejection of Claim 7 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 2.

Regarding Claim 9, the rejection of Claim 7 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 3.

Regarding Claim 11, the rejection of Claim 7 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 5.

Regarding Claim 12, the rejection of Claim 7 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 6.

Conclusion

Regarding Claims 4 and 10, a complete prior art search was performed and no prior art was uncovered that would anticipate or fairly suggest the features in these claims. These claims are not rejected under prior art.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON, whose telephone number is (571) 272-4716. The examiner can normally be reached Monday-Friday, 8:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE C COULSON/
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
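The activation-function relationships quoted from Rueckauer in the §102 rejection (the clamped ReLU of Eq. (7) and the time-to-first-spike mapping t(0)i = 1/ai) can be sketched as follows. This is an illustrative reconstruction for reference only, not code from the application or from Rueckauer; the function names, the threshold value, and the handling of zero activations are assumptions.

```python
import numpy as np

# Sketch of the ANN-to-SNN conversion relationships quoted from Rueckauer.
# BETA and all names are illustrative assumptions.

BETA = 0.1  # clamp threshold beta > 0 from Rueckauer's Eq. (7)

def relu_clamp(x, beta=BETA):
    """Modified ReLU: activations at or below beta are clamped to zero,
    so the trained network does not rely on low (late-spiking) activations."""
    return np.where(x > beta, x, 0.0)

def ttfs_spike_time(a):
    """Time-to-first-spike coding: the first spike time is inversely
    proportional to the ANN activation, t(0)i = 1/ai.
    A zero activation is treated here as 'no spike' (infinite time)."""
    a = np.asarray(a, dtype=float)
    return np.where(a > 0, 1.0 / np.maximum(a, 1e-12), np.inf)

acts = relu_clamp(np.array([-0.5, 0.05, 0.2, 1.0]))
print(acts)                   # the low activation 0.05 is clamped to zero
print(ttfs_spike_time(acts))  # larger activation -> earlier first spike
```

Under this reading, "correcting parameters and weights" corresponds to retraining the ANN with relu_clamp before conversion, so that the resulting SNN never has to wait for the long-latency spikes that low activations would produce.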

Prosecution Timeline

Aug 14, 2023: Application Filed
Feb 02, 2026: Non-Final Rejection (§101, §102, §112)
Apr 07, 2026: Response Filed


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 25% (99% with interview, +100.0% lift)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
