Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-15 are pending in this application. Claims 1, 3, 10, 12, and 15 are amended by applicant’s amendment filed 29 December 2025.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sumbul et al. (U.S. 2019/0042909, hereinafter “Sumbul”).
Regarding Claim 1, Sumbul teaches an image recognition device using a brain-inspired spiking neural network (fig. 1; ¶ [0002], [0021], and [0028]—computer vision and image processing involve image recognition), comprising:
an input unit configured to receive an input image made up of at least one pixel (fig. 2; ¶ [0028]); and
a spiking neural network unit configured to recognize the input image, the spiking neural network unit including a plurality of neurons each corresponding to a respective one of the pixels of the image (¶ [0035] – [0036]—a spiking deep convolutional neural network includes one neuron corresponding to each pixel of an input image) to generate spike signals when a membrane potential state value exceeds a preset threshold, and synapses connecting the plurality of neurons (¶ [0031] – [0033]—the spiking neurons store membrane potentials and generate spikes when a membrane potential state value exceeds a preset threshold).
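The threshold-crossing behavior mapped above (a neuron accumulates a membrane potential state value and emits a spike when it exceeds a preset threshold) can be illustrated with a minimal leaky integrate-and-fire sketch. This is a generic textbook model offered only for illustration; the decay constant, threshold, and input values are assumptions, not taken from Sumbul.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All constants (decay, threshold, input current) are illustrative assumptions.

def lif_step(potential, input_current, decay=0.9, threshold=1.0):
    """Advance one time step; return (new_potential, spiked)."""
    potential = potential * decay + input_current  # leaky integration
    if potential >= threshold:                     # membrane potential exceeds threshold
        return 0.0, True                           # reset and emit a spike
    return potential, False

# Drive one neuron with a constant input and record its spike train.
v, spikes = 0.0, []
for t in range(10):
    v, fired = lif_step(v, 0.3)
    spikes.append(fired)
```

With these assumed constants the neuron integrates for a few steps, fires, resets, and repeats, producing a regular spike train.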
Regarding Claim 10, Sumbul teaches an image recognition method in an image recognition device using a brain-inspired spiking neural network (fig. 1; ¶ [0002], [0021], and [0028]—computer vision and image processing involve image recognition), the method comprising:
receiving, by an input unit, an input image made up of at least one pixel (fig. 2; ¶ [0028]); and
recognizing, by a spiking neural network unit, the input image, the spiking neural network unit including a plurality of neurons each corresponding to a respective one of the pixels of the image (¶ [0035] – [0036]—a spiking deep convolutional neural network includes one neuron corresponding to each pixel of an input image) to generate spike signals when a membrane potential state value exceeds a preset threshold, and synapses connecting the plurality of neurons (¶ [0031] – [0033]—the spiking neurons store membrane potentials and generate spikes when a membrane potential state value exceeds a preset threshold).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 6, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sumbul, as applied to claims 1 and 10, above, in view of Saunders, Daniel J., Hava T. Siegelmann, and Robert Kozma (“STDP learning of image patches with convolutional spiking neural networks,” 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018; hereinafter “Saunders”) and further in view of Richert (U.S. 2014/0219497).
Regarding Claims 2 and 11, Sumbul does not specifically teach:
an encoding unit configured to perform neural coding to provide to the plurality of neurons based on a luminance of the pixels in the image,
wherein the encoding unit performs firing rate coding to determine a firing rate of the spike signals according to the luminance, and spike timing coding to determine a spike timing according to the luminance.
However, Saunders teaches an encoding unit configured to perform neural coding to provide to the plurality of neurons based on a luminance of pixels in the image, wherein the encoding unit performs firing rate coding to determine a firing rate of the spike signals according to the luminance (section III. C—an encoding unit determines a firing rate as proportional to pixel intensity {i.e. luminance}).
These claimed elements were known in Sumbul and Saunders and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art at the time of filing of the applicant’s invention to combine the firing rate coding of Saunders with the neurons of Sumbul to yield the predictable result of an encoding unit configured to perform neural coding to provide to the plurality of neurons based on a luminance of the pixels in the image, wherein the encoding unit performs firing rate coding to determine a firing rate of the spike signals according to the luminance. One would be motivated to make this combination for the purpose of enabling fast convergence in training with respectable classification accuracy (Saunders, section I, last paragraph).
Sumbul/Saunders does not specifically teach the encoding unit to perform spike timing coding to determine a spike timing according to the luminance. However, Richert teaches an encoding unit that performs spike timing coding to determine a spike timing according to the luminance (¶ [0063] – [0064]—spike latency {i.e. timing} is determined as being inversely proportional to luminance of a pixel relative to an average luminance).
All of the claimed elements were known in Sumbul/Saunders and Richert and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art at the time of filing of the applicant’s invention to combine the spike timing determination of Richert with the encoder of Sumbul/Saunders to yield the predictable result of wherein the encoding unit performs firing rate coding to determine a firing rate of the spike signals according to the luminance, and spike timing coding to determine a spike timing according to the luminance. One would be motivated to make this combination for the purpose of improving the temporal and spatial response of spiking neural networks for encoding features (Richert, ¶ [0007] – [0008]).
Regarding Claim 6, Sumbul/Saunders/Richert teaches wherein the firing rate coding calculates a ratio of a luminance value of a present pixel to a mean luminance value of the pixels in the image as the firing rate and performs neural coding to increase the firing rate with the increasing luminance (Saunders, section III. C determines a firing rate as proportional to pixel intensity/luminance. Richert, ¶ [0063] – [0064] calculates a ratio of luminance of a pixel to an average luminance, so, combined with Saunders, this renders the present limitation obvious), and
wherein the spike timing coding performs neural coding to make the spike timing earlier with the increasing luminance by subtracting a percentage of the luminance value of the present pixel to a maximum luminance value from a preset reference spike timing (Richert, ¶ [0063] – [0064]—spike latency may be made earlier {i.e. the shortest latency} for the brightest pixels, i.e. with increasing luminance).
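The two coding schemes recited in claim 6 (a firing rate computed as the ratio of the present pixel's luminance to the mean luminance, and a spike timing made earlier with increasing luminance by subtracting a percentage of the pixel's luminance relative to the maximum from a preset reference timing) can be sketched as follows. The reference timing, percentage scale, and sample pixel values are illustrative assumptions, not taken from Saunders or Richert.

```python
# Sketch of the two neural coding schemes discussed above; the reference
# timing, scale factor, and pixel values are illustrative assumptions.

def rate_code(pixel, pixels):
    """Firing rate as the ratio of this pixel's luminance to the mean."""
    mean = sum(pixels) / len(pixels)
    return pixel / mean                          # brighter pixel -> higher rate

def timing_code(pixel, max_lum, t_ref=100.0):
    """Spike timing made earlier with increasing luminance by subtracting
    the pixel's percentage of the maximum luminance from a preset
    reference timing."""
    return t_ref - 100.0 * (pixel / max_lum)     # brightest pixel -> earliest spike

pixels = [50, 100, 150, 200]                     # assumed luminance values
rates = [rate_code(p, pixels) for p in pixels]
times = [timing_code(p, max(pixels)) for p in pixels]
```

Under these assumptions the brightest pixel receives both the highest firing rate and the earliest spike timing, matching the directional relationships recited in the claim.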
Regarding Claim 15, Sumbul/Saunders/Richert teaches a computer-readable program stored in a computer-readable recording medium configured to perform the image recognition method using a brain- inspired spiking neural network defined in claim 10 (Richert, ¶ [0168] and [0171]).
Claims 3-4, 7-9, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sumbul in view of Saunders and further in view of Richert, as applied to claims 2 and 11, and further in view of Wang, Jinling, et al. (“An online supervised learning method for spiking neural networks with adaptive structure,” Neurocomputing 144 (2014): 526-536; hereinafter “Wang”).
Regarding Claims 3 and 12, Sumbul/Saunders/Richert teaches wherein the spiking neural network unit includes:
an input layer configured to receive the input neural code by assigning one neuron to each pixel of the image (Sumbul, ¶ [0036]);
a hidden layer configured to receive the signals from some of the plurality of neurons of the input layer, the hidden layer including an excitatory hidden layer containing excitatory neurons and an inhibitory hidden layer containing inhibitory neurons (Saunders, section III. D. and fig. 3—the hidden layer includes an excitatory layer and an inhibitory layer).
Sumbul/Saunders/Richert does not explicitly teach an output layer configured to receive the signals from some of the plurality of neurons of the excitatory hidden layer, the output layer including an excitatory output layer containing excitatory neurons and an inhibitory output layer containing inhibitory neurons. However, Wang teaches an output layer configured to receive the signals from some of the plurality of neurons of the excitatory hidden layer, the output layer including an excitatory output layer containing excitatory neurons and an inhibitory output layer containing inhibitory neurons (section 2.3.1 and fig. 2—the output layer includes excitatory and inhibitory neurons, which together comprise an excitatory output layer and an inhibitory output layer).
All of the claimed elements were known in Sumbul/Saunders/Richert and Wang and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art at the time of filing of the applicant’s invention to combine the output layer of Wang with the input layer and hidden layer of Sumbul/Saunders/Richert to yield the predictable result of an output layer configured to receive the signals from some of the plurality of neurons of the excitatory hidden layer, the output layer including an excitatory output layer containing excitatory neurons and an inhibitory output layer containing inhibitory neurons. One would be motivated to make this combination for the purpose of improving the training and classification performance of the SNN (Wang, p. 527, last paragraph).
Regarding Claim 4, Sumbul/Saunders/Richert/Wang teaches wherein the input layer is connected to each of the excitatory hidden layer and the inhibitory hidden layer via excitatory synapses (Saunders, section III. D. and fig. 3),
wherein the excitatory hidden layer and the inhibitory hidden layer are interconnected via the excitatory synapses and inhibitory synapses on a same layer (Saunders, section III. D. and fig. 3),
wherein the excitatory hidden layer is connected to each of the excitatory output layer and the inhibitory output layer via the excitatory synapses (Wang, section 2.3.1 – 2.3.2 and fig. 2—the synapses described in section 2.3.2 with respect to weights learning are excitatory synapses, and section 2.3.1 describes the inhibitory synapses. The excitatory hidden layer is clearly connected to all neurons of the output layer, including excitatory and inhibitory neurons/layers), and
wherein the excitatory output layer and the inhibitory output layer are interconnected via the excitatory synapses and the inhibitory synapses on a same layer (Wang, section 2.3.1—the lateral inhibitory connections in the output layer are synapses that connect the excitatory and inhibitory neurons on the same output layer).
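The synapse topology mapped for claim 4 (input connected to both hidden sublayers via excitatory synapses, excitatory and inhibitory sublayers interconnected on the same layer, and the excitatory hidden layer connected to both output sublayers via excitatory synapses) can be summarized as a connection table. The layer names and the sign convention (+1 excitatory, -1 inhibitory) are illustrative assumptions, not notation from the references.

```python
# Connection table summarizing the recited synapse topology; layer names
# and the sign convention (+1 excitatory, -1 inhibitory) are assumptions.
connections = {
    ("input",      "exc_hidden"): +1,  # input -> excitatory hidden (excitatory)
    ("input",      "inh_hidden"): +1,  # input -> inhibitory hidden (excitatory)
    ("exc_hidden", "inh_hidden"): +1,  # same-layer interconnection, excitatory direction
    ("inh_hidden", "exc_hidden"): -1,  # same-layer interconnection, inhibitory direction
    ("exc_hidden", "exc_output"): +1,  # excitatory hidden -> excitatory output
    ("exc_hidden", "inh_output"): +1,  # excitatory hidden -> inhibitory output
    ("exc_output", "inh_output"): +1,  # same-layer interconnection, excitatory direction
    ("inh_output", "exc_output"): -1,  # same-layer interconnection, inhibitory direction
}

def fan_in(layer):
    """Presynaptic layers feeding `layer`, with synapse sign."""
    return {src: sign for (src, dst), sign in connections.items() if dst == layer}
```

For example, `fan_in("exc_output")` shows the excitatory output sublayer driven excitatorily by the excitatory hidden layer and inhibited laterally by the inhibitory output sublayer.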
Regarding Claims 7 and 13, Sumbul/Saunders/Richert/Wang teaches: a learning unit configured to modify synaptic weights by applying a Spike Timing-Dependent plasticity (STDP) learning rule to the synapses between the excitatory neurons to allow the neurons of the output layer to selectively generate the spike signals according to the image (Saunders, section III. B and Wang, section 2.3.3).
Regarding Claim 8, Sumbul/Saunders/Richert/Wang teaches wherein the learning unit includes a supervised learning unit configured to determine a target neuron of the output layer according to the image, and induce synaptic potentiation or depression by the STDP learning rule through a rise or fall in membrane potential of the target neuron to allow the target neuron to generate the spike signals (Wang, section 2.3.2—supervised learning is applied to the output layer using STDP).
Regarding Claim 9, Sumbul/Saunders/Richert/Wang teaches wherein the learning unit includes an unsupervised learning unit configured to modify the synaptic weights according to the STDP learning rule based on the output spike signals from the output layer according to the image (Wang, section I, last two paragraphs and section 2.3.2—a self-organized unsupervised competitive Hebbian learning method is applied in addition to the supervised learning).
Regarding Claim 14, Sumbul/Saunders/Richert/Wang teaches wherein the learning step comprises:
supervised learning to determine a target neuron of the output layer according to the image, and induce synaptic potentiation or depression by the STDP learning rule through a rise or fall in membrane potential of the target neuron to allow the target neuron to generate the spike signals (Wang, section 2.3.2—supervised learning is applied to the output layer using STDP), or
unsupervised learning to modify the synaptic weights according to the STDP learning rule based on the output spike signals from the output layer according to the image (Wang, section I, last two paragraphs and section 2.3.2—a self-organized unsupervised competitive Hebbian learning method is applied in addition to the supervised learning).
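The Spike Timing-Dependent Plasticity (STDP) rule cited throughout claims 7-9 and 13-14 can be illustrated with a minimal pair-based update: potentiate when the presynaptic spike precedes the postsynaptic spike, depress otherwise. This is a generic textbook formulation, not the specific implementation of Saunders or Wang, and the learning rates and time constant are assumptions.

```python
import math

# Minimal pair-based STDP weight update; learning rates and time constant
# are illustrative assumptions, not taken from the cited references.

def stdp(weight, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Potentiate when the presynaptic spike precedes the postsynaptic
    spike (t_pre < t_post); depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:        # pre before post -> synaptic potentiation
        weight += a_plus * math.exp(-dt / tau)
    else:             # post before (or with) pre -> synaptic depression
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, 0.0), 1.0)   # clip the weight to [0, 1]

w_pot = stdp(0.5, t_pre=10.0, t_post=15.0)   # causal pair: weight grows
w_dep = stdp(0.5, t_pre=15.0, t_post=10.0)   # anti-causal pair: weight shrinks
```

The exponential window means closely paired spikes change the weight more than widely separated ones, which is what lets the rule selectively strengthen synapses onto a target neuron in the supervised variant above.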
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sumbul in view of Saunders, Richert, and Wang, as applied to claim 4, above, and further in view of Jang, Hyun Jae, et al. (“Distinct roles of parvalbumin and somatostatin interneurons in gating the synchronization of spike times in the neocortex,” Science Advances 6.17 (2020): eaay5333; hereinafter “Jang”).
Regarding Claim 5, Sumbul/Saunders/Richert/Wang does not specifically teach wherein the inhibitory neurons include parvalbumin (PV) expressing inhibitory neurons and somatostatin (SST) expressing inhibitory neurons. However, Jang teaches wherein inhibitory neurons include parvalbumin (PV) expressing inhibitory neurons and somatostatin (SST) expressing inhibitory neurons (p. 1, Introduction. The “Simulation of computational network model” section on pp. 12-13 describes an implementation using a spiking neural network).
All of the claimed elements were known in Sumbul/Saunders/Richert/Wang and Jang and could have been combined by known methods with no change in their respective functions. It therefore would have been obvious to a person of ordinary skill in the art at the time of filing of the applicant’s invention to combine the PV and SST inhibitory neurons of Jang with the inhibitory neurons of Sumbul/Saunders/Richert/Wang to yield the predictable result of wherein the inhibitory neurons include parvalbumin (PV) expressing inhibitory neurons and somatostatin (SST) expressing inhibitory neurons. One would be motivated to make this combination for the purpose of improving the synchronization of spike timing and firing rates by accounting for the different contributions of PV and SST inhibition (Jang, pp. 8-9, “Discussion” section).
Response to Arguments
The amendments to the claims are accepted as overcoming the rejections under 35 U.S.C. 112(b) set forth in the first Office action.
Applicant’s arguments with respect to claims 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Although Saunders does not explicitly teach the amendments to independent claims 1 and 10, new prior art reference Sumbul teaches these limitations, as detailed above. Saunders can be combined with Sumbul to teach limitations of some of the dependent claims because both references teach spiking convolutional neural networks, thus making for an obvious combination.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. This art includes:
Hersam et al. (U.S. 2021/0098611) teaches a neural network implemented on a memristor array with one neuron per pixel of an input image to perform image classification.
Chelian et al. (U.S. Patent 9,412,051) teaches a neuromorphic image processor with a one-to-one correspondence between neurons and input pixels of an image.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAL W SCHNEE, whose telephone number is (571) 270-1918. The examiner can normally be reached M-F 7:30 a.m. - 6:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAL SCHNEE/ Primary Examiner, Art Unit 2129