Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-20 are pending in this Office action. This action is responsive to Applicant’s application filed 10/09/2023.
Information Disclosure Statement
3. The references listed in the information disclosure statements (IDS) filed 12/05/2023, 05/20/2025, 09/09/2025, and 11/20/2025 have been considered. A copy of the signed or initialed IDS is attached.
Claim Objections
4. Claims 1-2, 5-6, 14, 17, and 20 are objected to because of the following informalities: the claim limitation “distributed service in the plurality of distributed services such that the data set is distributed among the plurality of distributed services” includes the clause “such that,” which renders the claim scope unclear because the scope is not limited by the “such that” language (see MPEP 2173.05(d)).
Appropriate correction is required.
MPEP 2173.05 (d) “such as,” “such that,” Clauses
Description of examples or preferences is properly set forth in the specification rather than the claims. If stated in the claims, examples and preferences may lead to confusion over the intended scope of a claim. In those instances where it is not clear whether the claimed narrower range is a limitation, a rejection under 35 U.S.C. 112, second paragraph should be made. The examiner should analyze whether the metes and bounds of the claim are clearly set forth. Examples of claim language which have been held to be indefinite because the intended scope of the claim was unclear are:
(A) "R is halogen, for example, chlorine";
(B) "material such as rock wool or asbestos" Ex parte Hall, 83 USPQ 38 (Bd. App. 1949);
(C) "lighter hydrocarbons, such, for example, as the vapors or gas produced" Ex parte Hasche, 86 USPQ 481 (Bd. App. 1949); and
(D) "normal operating conditions such as while in the container of a proportioner" Ex parte Steigerwald, 131 USPQ 74 (Bd. App. 1961).
The above examples of claim language which have been held to be indefinite are fact specific and should not be applied as per se rules. See MPEP § 2173.02 for guidance regarding when it is appropriate to make a rejection under 35 U.S.C. 112, second paragraph.
5. Claims 5, 6, and 10 are objected to because of the following informalities:
The claim limitation “and/or” in claims 5, 6, and 10 is unclear because it cannot be determined which of the alternatives Applicant intends to claim.
Drawings
6. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because the reference characters in figures 1-3 and 5 have no labels. As a result, a viewer cannot fully understand these elements without substantial analysis of the detailed specification.
A descriptive textual label for each numbered element in these figures is needed so that the figures can be fully understood without substantial analysis of the detailed specification. Any structural detail that is of sufficient importance to be described should be shown in the drawing. Optionally, Applicant may include a table next to the relevant figure to fulfill this requirement. See 37 CFR 1.83. 37 CFR 1.84(n) and (o) are recited below:
(n) Symbols. Graphical drawing symbols may be used for conventional elements when appropriate. The elements for which such symbols and labeled representations are used must be adequately identified in the specification. Known devices should be illustrated by symbols which have a universally recognized conventional meaning and are generally accepted in the art. Other symbols which are not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.
(o) Legends. Suitable descriptive legends may be used subject to approval by the Office, or may be required by the examiner where necessary for understanding of the drawing. They should contain as few words as possible.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
7. Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 12,380,320 B2 (Application No. 17/294,697). Although the conflicting claims are not identical, they are not patentably distinct from each other because they are substantially similar in scope and they use the same limitations.
The following table compares claims 1 and 12 of the instant application (No. 18/030,085) with corresponding claims 54 and 68 of Application No. 17/294,697, now U.S. Patent No. 12,380,320 B2.
Instant Application
US Application 17/294697 (original claims)
1. (original) A method for configuring a spiking neural network,
wherein the spiking neural network comprises a plurality of spiking neurons, and a plurality of synaptic elements interconnecting the spiking neurons to form the network at least partly implemented in hardware,
wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element, and
wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals,
wherein a response local cluster within the network comprises a set of the spiking neurons and
a plurality of synaptic elements interconnecting the set of the spiking neurons,
wherein the method comprises:
setting the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster such that the network state within the response local cluster is a periodic steady-state when an input signal to the response local cluster comprises a pre-determined oscillation frequency when represented in the frequency domain, such that the network state within the response local cluster is periodic with the pre-determined oscillation frequency.
12. (original) A spiking neural network for processing input signals representable in the frequency domain, the spiking neural
network comprising a plurality of spiking neurons, and a plurality of synaptic elements interconnecting the spiking neurons to form the network at least partly implemented in hardware,
wherein each synaptic element is adapted to receive a synaptic input signal and to apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element, and
wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements,
and generate a spatio-temporal spike train output signal in response to the received one or more synaptic input signals,
wherein a response local cluster within the network comprises a set of the spiking neurons and a plurality of synaptic elements interconnecting the set of neurons,
wherein a stochastic distribution activity or statistical parameter of the set of neurons within the local cluster is cyclo-stationary with a pre-determined first oscillation frequency when an input signal to the response local cluster comprises the pre-determined first oscillation frequency when represented in the frequency domain.
54. A spiking neural network for classifying input signals,
comprising a plurality of spiking neurons, and a plurality of synaptic elements interconnecting the spiking neurons to form the network,
wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element, and
wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals,
wherein the weights of the synaptic elements are bounded by bound values, wherein the bound values are stochastic values.
68. A method for configuring a spiking neural network to diminish noise effects in the spiking neural network, wherein the spiking neural network
comprises a plurality of spiking neurons implemented in hardware or a combination of hardware and software, and
a plurality of synaptic elements interconnecting the spiking neurons to form the network, wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element, and wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals, the method comprising: bounding the weights of the synaptic elements by bound values, wherein the bound values are stochastic values.
Although the conflicting claims are not identical, they are not patentably distinct from each other because they are substantially similar in scope and they use the same limitations.
After analyzing the language of the claims, it is clear that claims 1-20 are merely an obvious variation of claims 1-16 of US Patent No. 12,380,320. Therefore, these two sets of claims are not patentably distinct.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
8. Claims 18-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim 18 recites:
“18. (previously amended) A physical signal to inference processor for adaptively processing a physical signal, comprising:
selectors and extractors for selecting and extracting specific signal features from the physical signal;
a spiking neural network that performs the classification and processing of the physical signal based on the specific signal features that were extracted from the physical signal;
wherein the processor further comprises:
an operating block which establishes the present operating context and the optimal feature set; and
a feedback loop to the selectors and extractors which are adaptive in the sense that based on the specific processing tasks, different signal features can be selected and extracted.”
The claim includes the limitation “A physical signal to inference processor for adaptively processing a physical signal…
The claim 19 recites:
“19. (previously amended) A method for adaptively processing a physical signal, the method comprising:
providing a physical signal to inference processor according to claim 18, receiving a physical signal in the physical signal to inference processor,
selecting and extracting specific signal features using the selectors and extractors,
processing the specific signal features using the spiking neural network,
determining the present operating context and the optimal feature set using the operating block, and
sending a feedback loop signal to the selectors and extractors to adaptively change the signal features to be selected and extracted when necessary.”
The claim includes the limitation “A method for adaptively processing a physical signal, the method comprising: providing a physical signal to inference processor according to claim 18, receiving a physical signal in the physical signal to inference processor…
Claims recite a mental process when they contain limitations that can practically be performed in the human mind. Because the limitations of claims 18-19 can practically be performed in the human mind, the claimed invention is directed to an abstract idea without significantly more.
USPTO October 2019 Update: Patent Subject Matter Eligibility under 35 U.S.C. § 101
The following examples should be used in conjunction with the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG). The examples below are hypothetical and only intended to be illustrative of the claim analysis under the 2019 PEG. These examples should be interpreted based on the fact patterns set forth below, as other fact patterns may have different eligibility outcomes. That is, it is not necessary for a claim under examination to mirror an example claim to be subject matter eligible under the 2019 PEG. All of the claims are analyzed for eligibility in accordance with their broadest reasonable interpretation. Note that the examples herein are numbered consecutively beginning with number 37, because 36 examples were previously issued. The examples are illustrative only of the patent-eligibility analysis under the 2019 PEG. All claims must be ultimately analyzed for compliance with every requirement for patentability, including 35 U.S.C. 101 (utility, inventorship and double patenting), 102, 103, and 112, as well as non-statutory double patenting.
Example 41 – Cryptographic Communications
Background:
Security of information is of increasing importance in computer technology. It is critical that data being sent from a sender to a recipient is unable to be intercepted and understood by an intermediate source. In addition, authentication of the source of the message must be ensured along with the verification of and security of the message content. Various cryptographic encoding and decoding methods are available to assist with these security and authentication needs. However, many of them require expensive encoding and decoding hardware as well as a secure way of sharing the private key used to encrypt and decrypt the message. There is a need to perform these same security and authentication functions efficiently over a public key system so that information can be shared easily between users who do not know each other and have not shared the key used to encrypt and decrypt the information. To solve these problems, applicants have invented a method for establishing cryptographic communications using an algorithm to encrypt a plaintext into a ciphertext. The invention includes at least one encoding device and at least one decoding device, which are computer terminals, and a communication channel, where the encoding and decoding devices are coupled to the communication channel. The encoding device is responsive to a precoded message-to-be-transmitted M and an encoding key E to provide a ciphertext word C for transmission to a particular decoding device. The message-to-be-transmitted is precoded by converting it to a numerical representation which is broken into one or more blocks MA of equal length. This precoding may be done by any conventional means. The resulting message MA is a number representative of a message-to-be-transmitted, where 0 ≤ MA ≤ n-1, where n is a composite number of the form n=p*q, where p and q are prime numbers. The encoding key E is a pair of positive integers e and n, which are related to the particular decoding device. 
The encoding device distinctly encodes each of the n possible messages. The transformation provided by the encoding device is described by the relation CA = MA^e (mod n), where e is a number relatively prime to (p-1)*(q-1). The encoding device transmits the ciphertext word signal CA to the decoding device over the communications channel. The decoding device is responsive to the received ciphertext word CA and a decoding key to transform the ciphertext to a received message word MA’. The invention improves upon prior methods for establishing cryptographic communications because, by using only the variables n and e (which are publicly known), a plaintext can be encrypted by anyone. The variables p and q are only known by the owner of the decryption key d and are used to generate the decryption key (private key d is not claimed below). Thus, the security of the cipher relies on the difficulty of factoring large integers by computers, and there is no known efficient algorithm to recover the plaintext given the ciphertext and the public information.
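For illustration only, the encoding and decoding relations described in Example 41 can be sketched in Python; the primes, keys, and message value below are hypothetical numbers chosen for the sketch and do not appear in the example.

```python
# Illustrative sketch of the Example 41 scheme: CA = MA^e (mod n),
# with n = p*q for primes p and q, and e relatively prime to (p-1)*(q-1).
# All concrete numbers below are hypothetical illustration values.
p, q = 61, 53                        # secret primes (known only to the key owner)
n = p * q                            # public modulus, n = p*q = 3233
e = 17                               # public exponent, gcd(e, (p-1)*(q-1)) == 1
d = pow(e, -1, (p - 1) * (q - 1))    # private decoding key (Python 3.8+ modular inverse)

M = 65                               # precoded message block, 0 <= M <= n-1
C = pow(M, e, n)                     # encoding: ciphertext C = M^e (mod n)
M_prime = pow(C, d, n)               # decoding: recovered message M' = C^d (mod n)
assert M_prime == M                  # decoding recovers the original block
```

As the example notes, anyone can encode using the public pair (e, n), while recovering d requires knowledge of p and q, i.e., factoring n.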
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
9. Claims 1-3, 5-7, 9, 11-12, 14, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (US Patent Application Publication No. 2020/0272883 A1, hereinafter “Cao”) in view of Shainline (US Patent Application Publication No. 2020/0301874 A1, hereinafter “Shainline”).
As to Claim 1, Cao teaches the claimed limitations:
“A method for configuring a spiking neural network” as a method to determine a value of a synaptic weight of a spiking neural network according to an embodiment (paragraph 0007).
“wherein the spiking neural network comprises a plurality of spiking neurons, and a plurality of synaptic elements interconnecting the spiking neurons to form the network at least partly implemented in hardware” as data that is provided into the neural network may be first processed by synapses of input neurons. Interactions between the inputs, the neuron's synapses and the neuron itself govern whether an output is provided via an axon to another neuron's synapse. Modeling the synapses, neurons may be accomplished in a variety of ways. Neuromorphic hardware includes individual processing elements in a synthetic neuron and a messaging fabric to communicate outputs to other neurons (paragraph 0019).
“wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element” as neural networks are configured to implement features of learning which generally are used to adjust the weights of respective connections between the processing elements that provide particular pathways within the neural network and processing outcomes (paragraph 0004). Operation of a spiking neural network includes the communication of various spike trains each via a respective synapse coupled between two corresponding network nodes, wherein such communication is in response to input signaling received by the spiking neural network. Such communications may result in the spiking neural network providing output signaling which is to provide a basis for subsequent signaling which updates one or more synaptic weight values, the output signaling may be evaluated to determine whether a satisfaction of some predefined test criteria is indicated (paragraphs 0015, 0019).
“wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals” as each neural network core may implement some number of primitive nonlinear temporal computing elements as neurons, so that when a neuron’s activation exceeds some threshold level, it generates a spike message that is propagated to a fixed set of fanout neurons contained in destination cores. The network may distribute the spike messages to all destination neurons, and in response those neurons update their activations in a transient, time-dependent manner. Computations may variously occur in each respective neuron as a result of the dynamic, nonlinear integration of weighted spike input using real-valued state variables. The inference path of the neuron includes a pre-synaptic neuron which is configured to produce a pre-synaptic spike train x.sub.i representing a spike input. A spike train is a temporal sequence of discrete spike events, which provides a set of times specifying at which time a neuron fires; the sequence of spikes generated by or for a particular neuron may be referred to as its “spike train” (paragraphs 0013, 0020-0022, 0024-0025, 0047).
“wherein a response local cluster within the network comprises a set of the spiking neurons and a plurality of synaptic elements interconnecting the set of the spiking neurons” as
data that is provided into the neural network may be first processed by synapses of input neurons. Interactions between the inputs, the neuron's synapses and the neuron itself govern whether an output is provided via an axon to another neuron's synapse. Modeling the synapses, neurons may be accomplished in a variety of ways, neuromorphic hardware includes individual processing elements in a synthetic neuron and a messaging fabric to communicate outputs to other neurons. The determination of whether a particular neuron fires to provide data to a further connected neuron is dependent on the activation function applied by the neuron and the weight of the synaptic connection from neuron i to neuron j. The input received by neuron i is depicted as value x.sub.i, and the output produced from neuron j is depicted as value y.sub.j. Thus, the processing conducted in a neural network is based on weighted connections, thresholds, and evaluations performed among the neurons, synapses, and other elements of the neural network (paragraph 0019).
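The thresholded spiking behavior attributed to Cao above (integration of weighted spike inputs into a state variable, with a spike generated and propagated when the activation exceeds a threshold) can be sketched, purely for illustration, as a minimal leaky integrate-and-fire model; the function name, parameters, and numeric defaults below are hypothetical and are not taken from Cao.

```python
# Minimal leaky integrate-and-fire sketch (illustrative only, not Cao's
# implementation): weighted spike inputs are integrated into a state
# variable, and a spike time is recorded whenever the state crosses the
# threshold, after which the state resets.
def lif_spike_train(inputs, weights, threshold=1.0, leak=0.9):
    v = 0.0                                   # membrane state variable
    spikes = []                               # output spike train (firing times)
    for t, x in enumerate(inputs):            # x: 0/1 spikes from presynaptic neurons
        v = leak * v + sum(w * s for w, s in zip(weights, x))
        if v >= threshold:                    # activation exceeds threshold
            spikes.append(t)                  # emit a spike at time step t
            v = 0.0                           # reset after firing
    return spikes
```

For example, on spike inputs [[1, 0], [1, 1], [0, 0], [1, 1]] with weights [0.4, 0.3], the sketch fires once, at time step 1: the leaky state reaches 1.06 on the second step, crossing the threshold, and the subsequent inputs alone do not.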
Cao does not explicitly teach the claimed limitation “wherein the method comprises: setting the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster such that the network state within the response local cluster is a periodic steady-state when an input signal to the response local cluster comprises a pre-determined oscillation frequency when represented in the frequency domain, such that the network state within the response local cluster is periodic with the pre-determined oscillation frequency.”
Shainline teaches all neuron cell bodies are envisioned to perform only the thresholding function leading to spike production. It follows that the outputs from dendrites are functions with analog amplitude and a continuous temporal envelope, while the outputs from neuron cell bodies are stereotypical spike events wherein the amplitude is intended to be constant across spikes and the temporal envelope is intended to approximate a delta function (paragraph 0081). After a photonic communication event has been detected, the synaptic weight has been set as the number of fluxons created, and current has been added to the SI loop, further processing ensues. The electrical current generated by the synapse event can be stored for a chosen amount of time. This is determined by the leak rate of the SI loop, selected by design and set in hardware with the time constant (paragraph 0089). In biological neural systems, processing among local clusters of neurons occurs primarily through fast activity in the range of gamma frequencies. This frequency range emerges because it reaches the upper limit of speed for the excitatory pyramidal neurons participating in the activity. In the superconducting optoelectronic hardware under consideration, this upper speed limit is in the tens of megahertz, limited by the reset time of the SPDs in the synapses and of the transmitter circuits that generate neuronal firing events. Here we take the upper firing rate to be 100 MHz for numerical simplicity. Therefore, we expect the neurons under consideration to demonstrate behavior like gamma oscillations, bursting with inter-spike intervals on the order of 10 ns. Similarly, biological neural systems process information across the network through slower activity at theta frequencies. Mapping this scaling onto the system under consideration, gamma oscillations occur at 100 MHz as well as theta oscillations occurring at 10 MHz (paragraph 0090). 
However, the qualitative nature of the response is consistent across a useful range of operating parameters. In large-scale systems, the intention is not to precisely control the response of each dendrite or synapse quantitatively at the time of fabrication, but rather to fabricate a complex network with a statistical distribution of device parameters and to employ adaptive plasticity functions that finely adjust biasing conditions through activity-dependent feedback, to adapt the circuits to operating points useful for network computation. Such adaptation over time through synaptic and dendritic plasticity are in the spirit of biological neural systems that cannot be constructed with specific values for each synaptic weight or precise dendritic morphology (paragraph 0100). If the DI loop is configured with large βL and idi on the order of theta time scales, the dendrite will keep track of how many gamma-frequency pulse trains have occurred, thereby keeping track of oscillations on theta time scales. Because the maximum signal level in the DI loop can be made the same as in an SI or DI loop keeping track of gamma activity, such dendritic processing can represent gamma and theta information with equal weight. Alternatively, using the same circuit configuration except employing an SI loop with a time constant close to τspd will cause the DI loop to receive a single fluxon each time the synapse receives a photon. In this mode of operation, the circuit achieves single-photon-to-single-fluxon transduction, converting each photon detection event to an identical, binary signal. If synaptic weighting is not required, and dendritic weights alone can suffice, the signal from a photon-detection event can immediately be converted to a single fluxon, and energy efficiency can be gained (paragraph 0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by setting the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster, because that would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, with a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 2, Cao does not explicitly teach the claimed limitation “wherein the setting of the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster comprises iteratively training the response local cluster by optimizing weights of the synaptic elements and the spiking behavior of the spiking neurons, such that the required periodic steady-state behavior is reached”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by setting the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster, because that would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, with a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 3, Cao does not explicitly teach the claimed limitation “wherein a stochastic distribution activity or statistical parameter of the set of neurons within the response local cluster is cyclo-stationary with the pre-determined oscillation frequency when an input signal to the response local cluster comprises the pre-determined oscillation frequency”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao such that a stochastic distribution activity or statistical parameter of the set of neurons within the response local cluster is cyclo-stationary with the pre-determined oscillation frequency when an input signal to the response local cluster comprises the pre-determined oscillation frequency, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 5, Cao does not explicitly teach the claimed limitation “wherein the spiking neural network comprises a drive local cluster, which comprises a set of the spiking neurons and a plurality of synaptic elements interconnecting the set of the spiking neurons, such that an output signal of the drive local cluster serves as an input signal to the response local cluster such that the drive local cluster and the response local cluster are coupled with a particular coupling strength, wherein the method further comprises: setting the network state within the response local cluster to have a steady-state and/or a time-varying state when an input signal to the response local cluster from the drive local cluster does not comprise the pre-determined oscillation frequency when represented in the frequency domain or when the particular coupling strength is smaller than a predetermined coupling strength”.
Shainline teaches this limitation (paragraphs 0081, 0087, 0089-0091, 0094, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by setting the network state within the response local cluster when the particular coupling strength is smaller than a predetermined coupling strength, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 6, Cao does not explicitly teach the claimed limitation “wherein the setting of the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster comprises iteratively training the response local cluster by optimizing weights of the synaptic elements and the spiking behavior of the spiking neurons, such that the required steady-state behavior and/or time-varying behavior is reached”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by iteratively training the response local cluster such that the required steady-state behavior and/or time-varying behavior is reached, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 7, Cao does not explicitly teach the claimed limitation “wherein a stochastic distribution activity or statistical parameter of the set of neurons within the response local cluster is stationary or non-stationary when the response local cluster receives an input signal from the drive local cluster which does not comprise the pre- determined oscillation frequency when represented in the frequency domain or when the particular coupling strength is smaller than the predetermined coupling strength”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao such that the stochastic distribution activity or statistical parameter of the set of neurons is stationary or non-stationary when the particular coupling strength is smaller than the predetermined coupling strength, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 9, Cao does not explicitly teach the claimed limitation “wherein an increase in a structure dimensionality of the response local cluster is realized by ensuring generalized outer synchronization between the drive local cluster and the response local cluster, wherein generalized outer synchronization is the coupling of the drive local cluster to the response local cluster by means of the particular coupling strength being equal to or larger than the predetermined coupling strength”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by ensuring generalized outer synchronization between the drive local cluster and the response local cluster, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 11, Cao does not explicitly teach the claimed limitation “wherein the steady-state numerical solution, time-varying numerical solution and/or periodic steady-state solution is obtained by using feedback connections between the neurons in the response local cluster that results in the synchronization of neuronal activity of the neurons”.
Shainline teaches this limitation (paragraphs 0081, 0089-0090, 0100-0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by using feedback connections between the neurons in the response local cluster, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 12, Cao teaches the claimed limitations:
“A spiking neural network for processing input signals representable in the frequency domain, the spiking neural network comprising a plurality of spiking neurons, and a plurality of synaptic elements interconnecting the spiking neurons to form the network at least partly implemented in hardware” as spiking neural networks are increasingly being adapted to provide next-generation solutions for various applications. SNNs variously rely on signaling techniques wherein information is communicated using a time-based relationship between signal spikes (paragraph 0003). Data that is provided into the neural network may be first processed by synapses of input neurons. Interactions between the inputs, the neuron's synapses and the neuron itself govern whether an output is provided via an axon to another neuron's synapse. Modeling the synapses and neurons may be accomplished in a variety of ways. Neuromorphic hardware includes individual processing elements in a synthetic neuron and a messaging fabric to communicate outputs to other neurons (paragraphs 0019, 0031).
“wherein each synaptic element is adapted to receive a synaptic input signal and to apply a weight to the synaptic input signal to generate a synaptic output signal, the synaptic elements being configurable to adjust the weight applied by each synaptic element” as each neural network core may implement some number of primitive nonlinear temporal computing elements as neurons, so that when a neuron's activation exceeds some threshold level, it generates a spike message that is propagated to a fixed set of fanout neurons contained in destination cores. The network may distribute the spike messages to all destination neurons, and in response those neurons update their activations in a transient, time-dependent manner.
Computations may variously occur in each respective neuron as a result of the dynamic, nonlinear integration of weighted spike input using real-valued state variables. The inference path of the neuron includes a pre-synaptic neuron which is configured to produce a pre-synaptic spike train x.sub.i representing a spike input. A spike train is a temporal sequence of discrete spike events, which provides a set of times specifying at which time a neuron fires; the sequence of spikes generated by or for a particular neuron may be referred to as its “spike train” (paragraphs 0013, 0020-0022, 0024-0025, 0047).
“wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic input signals” as each neural network core may implement some number of primitive nonlinear temporal computing elements as neurons, so that when a neuron's activation exceeds some threshold level, it generates a spike message that is propagated to a fixed set of fanout neurons contained in destination cores. The network may distribute the spike messages to all destination neurons, and in response those neurons update their activations in a transient, time-dependent manner.
“wherein a response local cluster within the network comprises a set of the spiking neurons and a plurality of synaptic elements interconnecting the set of neurons” as data that is provided into the neural network may be first processed by synapses of input neurons. Interactions between the inputs, the neuron's synapses and the neuron itself govern whether an output is provided via an axon to another neuron's synapse. Modeling the synapses and neurons may be accomplished in a variety of ways; neuromorphic hardware includes individual processing elements in a synthetic neuron and a messaging fabric to communicate outputs to other neurons. The determination of whether a particular neuron fires to provide data to a further connected neuron is dependent on the activation function applied by the neuron and the weight of the synaptic connection from neuron i to neuron j. The input received by neuron i is depicted as value x.sub.i, and the output produced from neuron j is depicted as value y.sub.j. Thus, the processing conducted in a neural network is based on weighted connections, thresholds, and evaluations performed among the neurons, synapses, and other elements of the neural network (paragraph 0019).
Cao does not explicitly teach the claimed limitation “wherein a stochastic distribution activity or statistical parameter of the set of neurons within the local cluster is cyclo-stationary with a pre-determined first oscillation frequency when an input signal to the response local cluster comprises the pre-determined first oscillation frequency when represented in the frequency domain”.
Shainline teaches all neuron cell bodies are envisioned to perform only the thresholding function leading to spike production. It follows that the outputs from dendrites are functions with analog amplitude and a continuous temporal envelope, while the outputs from neuron cell bodies are stereotypical spike events wherein the amplitude is intended to be constant across spikes and the temporal envelope is intended to approximate a delta function (paragraph 0081). After a photonic communication event has been detected, the synaptic weight has been set as the number of fluxons created, and current has been added to the SI loop, further processing ensues. The electrical current generated by the synapse event can be stored for a chosen amount of time. This is determined by the leak rate of the SI loop, selected by design and set in hardware with the time constant (paragraph 0089). In biological neural systems, processing among local clusters of neurons occurs primarily through fast activity in the range of gamma frequencies. This frequency range emerges because it reaches the upper limit of speed for the excitatory pyramidal neurons participating in the activity. In the superconducting optoelectronic hardware under consideration, this upper speed limit is in the tens of megahertz, limited by the reset time of the SPDs in the synapses and of the transmitter circuits that generate neuronal firing events. Here we take the upper firing rate to be 100 MHz for numerical simplicity. Therefore, we expect the neurons under consideration to demonstrate behavior like gamma oscillations, bursting with inter-spike intervals on the order of 10 ns. Similarly, biological neural systems process information across the network through slower activity at theta frequencies. Mapping this scaling onto the system under consideration, gamma oscillations occur at 100 MHz, with theta oscillations occurring at 10 MHz (paragraph 0090).
However, the qualitative nature of the response is consistent across a useful range of operating parameters. In large-scale systems, the intention is not to precisely control the response of each dendrite or synapse quantitatively at the time of fabrication, but rather to fabricate a complex network with a statistical distribution of device parameters and to employ adaptive plasticity functions that finely adjust biasing conditions through activity-dependent feedback, to adapt the circuits to operating points useful for network computation. Such adaptation over time through synaptic and dendritic plasticity is in the spirit of biological neural systems, which cannot be constructed with specific values for each synaptic weight or precise dendritic morphology (paragraph 0100). If the DI loop is configured with large βL and idi on the order of theta time scales, the dendrite will keep track of how many gamma-frequency pulse trains have occurred, thereby keeping track of oscillations on theta time scales. Because the maximum signal level in the DI loop can be made the same as in an SI or DI loop keeping track of gamma activity, such dendritic processing can represent gamma and theta information with equal weight. Alternatively, using the same circuit configuration except employing an SI loop with a time constant close to τspd will cause the DI loop to receive a single fluxon each time the synapse receives a photon. In this mode of operation, the circuit achieves single-photon-to-single-fluxon transduction, converting each photon detection event to an identical, binary signal. If synaptic weighting is not required, and dendritic weights alone can suffice, the signal from a photon-detection event can immediately be converted to a single fluxon, and energy efficiency can be gained (paragraph 0102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Cao and Shainline before him/her, to modify Cao by setting the weights of the synaptic elements and the spiking behavior of the spiking neurons in the response local cluster, because doing so would provide a synaptic coupling inductor in communication with a synaptic integration inductor that provides synaptic integrated current to a synaptic integration loop, and a synaptic integration resistor in communication with the synaptic integration loop, as taught by Shainline (paragraph 0029).
As to Claim 14, Cao does not explicitly teach the claimed limitation “wherein the spiking neural network comprises a drive local cluster, which comprises a set of the spiking neurons and a plurality of synaptic elements interconnecting the set of the spiking neurons, such that an output signal of the drive local cluster serves as an input signal to the response local cluster such that the drive local cluster