DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant's claim to the benefit of prior-filed Application EP 18290090.2, filed July 31, 2018, and International Application No. PCT/EP2019/070643, filed July 31, 2019, is acknowledged.
Specification
The amendments to the specification filed 01/29/2021 have been entered.
Drawings
The drawings were received on 01/29/2021. These drawings are acceptable.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 12/31/2024 have been considered by the examiner.
Abstract
The abstract filed 12/16/2024 has been reviewed and entered. The objection made in the previous rejection has been withdrawn.
Response to Arguments
Applicant's arguments filed 11/25/2025 have been fully considered.
Regarding the rejection of claims under 35 USC 103, applicant's remarks are directed to the amended claim limitations, which have not been previously examined. See the current Office action below, which addresses the amended limitations.
Claim Interpretation
Regarding claims 10 and 22, the claims recite the terms “synapse identifier” and “neuron identifier” associated with a memory unit. These are not terms of art, and the specification provides no insight or definition establishing the intended scope of the claimed terms. The examiner notes that any index/id/addressing/labeling associated with neuron/synapse operational processes is within the scope of the claimed terms. If these terms have a special meaning, then the applicant's specification must set forth an explicit definition assigned to the claim terms. The noted interpretation is used for all claims reciting the noted terms.
MPEP 2111 notes “An applicant is entitled to be their own lexicographer and may rebut the presumption that claim terms are to be given their ordinary and customary meaning by clearly setting forth a definition of the term that is different from its ordinary and customary meaning(s) in the specification at the relevant time… However, it is important to note that any special meaning assigned to a term "must be sufficiently clear in the specification that any departure from common usage would be so understood by a person of experience in the field of the invention." Multiform Desiccants Inc. v. Medzam Ltd., 133 F.3d 1473, 1477, 45 USPQ2d 1429, 1432 (Fed. Cir. 1998). See also Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999) and MPEP § 2173.05(a). In Apple Inc. v. Corephotonics, Ltd., 81 F.4th 1353, 1358-60, 2023 USPQ2d 1056 (Fed. Cir. 2023), the claim phrase "a point of view of the Wide camera" was held to only require a "Wide perspective point of view or Wide position point of view" after reviewing the specification. In particular, the court found that a reasonable reading of the specification defined two different types of Wide point of view – perspective and position, whereas the claim language was broad as to the point of view. The court also explained that claims should not be interpreted in a way that would omit a disclosed embodiment, absent evidence to the contrary. Thus, given the review of the intrinsic evidence, the court held that the claim language only required Wide perspective or Wide position point of view, but not both.”
Claim Rejections - 35 USC § 112
Claims 10-11, 13-29, and 31 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding independent claim 10, the limitation “using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message” is considered new matter because the limitation represents subject matter that is neither disclosed in nor inferable from the original disclosure. The applicant refers to paragraph 53 of the disclosure, but the cited paragraph does not appear to provide sufficient support for the embodiment recited in the amended claim limitation.
Paragraph 0053 of the published document discloses that information can be transmitted from addressable memory; it does not support the routing requirements recited in the amended limitation: Optionally, as is the case here, the destination information includes a specification for a respective addressable memory entry in an input synapse memory unit 14 (SID1, . . . , SIDn). The latter specifies the associated neural unit addressed (NUID1, . . . , NUIDn) and a weight (W1, . . . , Wn) with which the processing facility weights the firing message when updating the associated neural unit. In that case the method comprises an intermediate step (S7CD) subsequent to the step of deriving (S7C) and preceding the step of transmitting (S7B). In this intermediate step the specification is retrieved from the destination information, and a respective addressable memory entry in the input synapse memory unit (14) specified by the specification is accessed. The identification of the associated neural unit is then retrieved from the accessed respective memory entry.
Regarding independent claim 22, the limitations are similar to those of claim 10, and the claim is thus rejected under the same rationale.
Regarding the dependent claims that depend from claims 10 and 22, the claims do not resolve the deficiencies noted above and are thus rejected under the same rationale.
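For clarity of the record, the three-level indirection recited in claim 10 (a first memory unit indexed by neuron identifier yielding a range of output synapse indices; a second memory unit indexed by output synapse identifier yielding a transmission delay and an input synapse index; a third memory unit indexed by input synapse identifier yielding a reference to the associated neuron and a weight) may be sketched as follows. This sketch is illustrative only; the table names and values are hypothetical and appear in neither the claims nor the application as filed.

```python
# Hypothetical illustration of the indirection recited in claim 10.
# First memory unit: neuron identifier -> (start, length) of output synapse indices.
neuron_table = {0: (0, 2)}

# Second memory unit: output synapse index -> (transmission delay, input synapse index).
output_synapse_table = {0: (3, 10), 1: (1, 11)}

# Third memory unit: input synapse index -> (reference to associated neuron, weight).
input_synapse_table = {10: (5, 0.25), 11: (7, -0.5)}

def distribute_firing_event(neuron_id):
    """Resolve a firing event into (destination neuron, weight, delay) tuples
    by walking the three memory units in sequence."""
    start, length = neuron_table[neuron_id]
    deliveries = []
    for out_idx in range(start, start + length):
        delay, in_idx = output_synapse_table[out_idx]      # second memory unit
        dest_neuron, weight = input_synapse_table[in_idx]  # third memory unit
        deliveries.append((dest_neuron, weight, delay))
    return deliveries
```

As sketched, the input synapse index retrieved from the second memory unit is the sole key used to access the third memory unit, which is the routing behavior the amended limitation requires support for.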
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Davies (Pub. No.: US 2018/0174026, hereinafter ‘Dav’) in view of Chauhan et al. (NPL: “Emulation of Artificial Neural Network on an FPGA-based Accelerator using CYCLONE II”, hereinafter ‘Cha’) and in further view of Akin et al. (US 2019/0042920, hereinafter ‘Akin’).
Regarding independent claim 10 Dav teaches a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-00195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities ( e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. 
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state, (in 0034: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered. Further, in a spiking neural network, each neuron is modeled after a biological neuron, as the artificial neuron receives its inputs via synaptic connections to one or more "dendrites" (part of the physical structure of a biological neuron), and the inputs affect an internal membrane potential of the artificial neuron "soma" (cell body). In a spiking neural network, the artificial neuron "fires" (e.g., produces an output spike), when its membrane potential crosses a firing threshold. Thus, the effect of inputs on a spiking neural network neuron operate to increase or decrease its internal membrane potential, making the neuron more or less likely to fire. Further, in a spiking neural network, input connections may be stimulatory or inhibitory. A neuron's membrane potential may also be affected by changes in the neuron's own internal state [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state] ("leakage").; And in 0037: The cores 110 may communicate via short packetized spike messages that are sent from core 110 to core 110. Each core 110 may implement a plurality of primitive nonlinear temporal computing elements referred to herein as "neurons" [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]… Each neuron may be characterized by an activation threshold.
A spike message received by a neuron contributes to the activation of the neuron….; And in 0067: SOMA_CFG 332A and SOMA_STATE 332B [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B…
the neuromorphic processing method comprising: (in 0193-00195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities ( e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron; (0056-0058: As discussed above with respect to FIG. 3, the neuromorphic neuron core 300 may be comprised of two loosely coupled asynchronous components [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron]: (1) an input dendrite logic circuit 310 configured to receive spikes from the routing network 130 and to apply them to the appropriate destination dendrite compartments at the appropriate future times, and (2) a soma logic circuit 330 configured to receive each dendrite compartment's accumulated values for the current time and to evolve each soma's membrane potential state [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] to generate outgoing spike messages at the appropriate times… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… And in 0057-0062: FIG. 
6 is an illustrative pictorial internal architecture level drawing representing an example of an operation of a dendrite logic circuit 310 and of a soma logic circuit 330 of a neuromorphic neuron… The hardware services provided by the soma (e.g., axon) logic circuits 330 and dendrite logic circuits 310 may be dynamically configured in a time-multiplexed manner to share the same physical wiring resources within a core among multiple neuromorphic neurons implemented by the core… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D, synchronizing and flushing of spikes that are in flight within the network): 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network… WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values [retrieving neural state information for a neuron of the plurality of neurons] may be updated [updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] for the corresponding soma compartment idx 652…. 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an "Axon ID" unique to the core that identifies a distribution set of dendrites within the core.
Each element of the distribution set is referred to as synapse… 2) While not handling input spikes, the dendrite logic circuit process 310 serially services all dendrites Si sequentially, passing the total accumulated neurotransmitter values amounts for time T to the Soma stage, resetting the neurotransmitter totals to zero so the state may be repurposed for a future step (n)
determining that the updated neural state information indicates the firing state; and in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state (0067-0068: SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T... On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly. More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back… In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event…. If the updated Vm value exceeds the threshold, then the stored activation level may be reset to an activation level of zero [determining that the updated neural state information indicates the firing state; in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state]. If the updated Vm value does not exceed the threshold, then the updated Vm value may be stored in the SOMA_STATE memory 332B for use during a subsequent synchronization time step.
AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table...)
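The accumulate, compare, and reset behavior that Dav describes at 0067-0068 (integrate the accumulated dendrite value into Vm; if the threshold is crossed, produce a spike and reset the stored activation level to zero, i.e., the initial state) may be illustrated by the following sketch. The function and values are hypothetical and are provided for explanatory purposes only.

```python
def update_soma(vm, accumulated_dendrite_value, threshold):
    """Sketch of Dav's threshold/reset behavior: integrate the accumulated
    dendrite value into the membrane potential Vm; if the threshold is
    exceeded, emit a spike and reset the stored state to zero."""
    vm += accumulated_dendrite_value
    if vm > threshold:
        return 0.0, True    # reset activation level (initial state); spike generated
    return vm, False        # store updated Vm for the next synchronization step

# Example: 0.8 + 0.5 exceeds the threshold of 1.0, so the state resets and fires.
state, fired = update_soma(vm=0.8, accumulated_dendrite_value=0.5, threshold=1.0)
```

This corresponds to the examiner's mapping of "resetting the neural state information so as to indicate the initial state" onto Dav's reset of the stored activation level to zero.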
and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices; (0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices Examiner notes neuron index for mapping memory as claimed first memory unit associated with respective ids] in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334 [an indication of a respective range of output synapse indices], a list of (dest_core, axon_id) pairs is serially read from the AXON_ CFG 336 table [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices associated with the core and id memory unit]. 
Each of these becomes an outgoing spike message to the network 130 [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron], sent serially one after the other…NETWORK 130: The network 130 routes each spike message to a destination core in a stateless, asynchronous manner [Alternatively and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron associated with a destination core having claimed first memory based on memory id]. From the standpoint of the computational model, the routing happens in zero time, i.e., if the spike message is generated at time T, then it is received at the destination core [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices] at time T relative to the source core's time step…)
and for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, …(in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1] [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit; claimed second memory as memory block noted by the idx, where i=2]. And in 0142: Each synapse from the SYNAPSE_CFG [for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit…] entry maps to a (Weight.sub.i, Delay.sub.i) pair, where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value over the range 1 . . . 15. Each entry maps its synapse values [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, the output synapse property data] in a unique way. Examiner notes that the synapse entries are mapped in a unique way in a fan-out memory unit, where the synapse map retrieval memory and destination memory use multicast distributions over the network architecture as depicted in 0064-0069: … Communication and computation in the neuromorphic architecture occurs in an event driven manner in response to spike events as they are generated and propagated throughout the neuromorphic network. Note that the soma 330 and dendrite 310 components shown in FIG.
7, in general, will belong to different physical cores… For example, when traversing the neuromorphic network, the spikes may be encoded as short data packets identifying a destination core and Axon ID… Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly… AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table. This allows AXON_CFG's 336 memory resources to be shared across all neurons implemented by the core in a flexible, non-uniform manner. In an alternate embodiment, the AXON_MAP 334 state is integrated into the SOMA_CFG 332A memory… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution), while others may only map to a single destination (unicast). List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory.)
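The AXON_MAP/AXON_CFG fan-out mechanism quoted above, in which a spiking neuron index maps to a (base_address, length) pair and that many (dest_core, axon_id) pairs are then read serially to form outgoing spike messages, may be sketched as follows. The table contents are hypothetical examples chosen for illustration; they are not taken from Dav.

```python
# Hypothetical sketch of Dav's AXON_MAP -> AXON_CFG fan-out lookup.
AXON_MAP = {3: (4, 2)}                             # neuron index -> (base_address, length)
AXON_CFG = {4: ("core_A", 17), 5: ("core_B", 9)}   # address -> (dest_core, axon_id)

def fan_out(neuron_index):
    """Serially read the (dest_core, axon_id) destination list for a
    spiking neuron, one outgoing spike message per entry."""
    base, length = AXON_MAP[neuron_index]
    return [AXON_CFG[addr] for addr in range(base, base + length)]
```

A list of length one corresponds to unicast distribution; longer lists correspond to the multicast distribution Dav describes.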
the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron; (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way.; And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 322a and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652. At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs for all destination cores via the AXON_MAP memory 334. 
At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron]unique to the core that identifies a distribution set of dendrites within the core [respective memory entry in a third memory unit comprising a reference to an associated neuron]. Each element of the distribution set is referred to as synapse, specifying a dendrite number [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit], a connection strength (weight W) [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0067] SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron] operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]… And in [0130] FIG. 16 is a register definition of SYNAPSE_MAP[0 . . . 2047] 1600 (1410). 
The SYNAPSE_MAP table 1600 maps each input spike received by the core to a list of synaptic entries in SYNAPSE_CFG 1420. Its specific behavior depends on whether the input spike is a discrete (standard) spike containing just an AxonID or a population spike containing both FIP (AxonID) [a respective input synapse index corresponding] and SRC_ATOM identifiers [a respective memory entry in a third memory unit comprising a reference to an associated neuron]… )
using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message; and (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way.; And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652.
At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message, wherein the third memory unit is a destination unit obtaining information from the second unit] for all destination cores via the AXON_MAP memory 334. At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [… obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] unique to the core that identifies a distribution set of dendrites within the core. Each element of the distribution set is referred to as synapse, specifying a dendrite number, a connection strength (weight W) [obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0108] FIG. 11 is an illustrative pictorial drawing showing an example population connectivity model 1100. Connectivity state w.sub.ij specify a template network between population types (T.sub.i, Tj). Connectivity may be bound to any number of specific neuron populations of the corresponding types. The w.sub.ij state needs only be stored once per network type, rather than redundantly for each network instance. 
[0109] More particularly, the network template is specified in terms of three neuron population types (T.sub.1, T.sub.2, and T.sub.3) with four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23). Each connection matrix w.sub.ij specifies the connectivity state (typically a weight and delay pair) between all neurons in a population type j connecting to all neurons in the destination population type i [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. Hence each w.sub.ij matrix specifies |T.sub.i|×|T.sub.j| connections where |T.sub.i| indicates the number of neurons in a population type Ti. Thus, in the example shown in FIG. 11, the four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23) are used to connect neurons of neuron populations (P.sub.1, P.sub.2, P.sub.3), to connect neurons of neuron populations (P.sub.4, P.sub.5, P.sub.6), and to connect neurons of neuron populations (P.sub.7, P.sub.8, P.sub.9) [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]… [0121] FIG. 15 is an illustrative flow diagram representing population spike generation mapping flow in a soma logic circuit 1500 (330). 
At the Soma stage and downstream, in order to generate the appropriately formatted population spike message, a particular spiking neuron must be mapped to its constituent population and source atom offset within the population [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. Each neuron's compartment index uniquely identifies this information, so one place to map these values is in AXON_MAP 1510 (334). FIG. 15 shows the egress population spike generation pathway [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. In this case, the AXON_CFG memory 1520 (336) is compressed by a factor of pop_size compared to the baseline case since only one population spike entry is needed per destination fip. All atoms (compartment indices) belonging to the source population reference the same entry as mapped by AXON_MAP 1510.…
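For illustration only (this sketch is not taken from the cited references, and all names and values are hypothetical), the population connectivity template described in the cited [0108]-[0109] — one weight/delay matrix per (type i, type j) pair, stored once and reused by every population instance of those types — may be sketched as:

```python
# Illustrative sketch only: a connectivity template matrix w_ij stored once
# per population-type pair and bound to concrete populations on demand.
# All names (w_12, connect, population labels) are hypothetical.

# One |Ti| x |Tj| matrix of (weight, delay) pairs for the type pair (1, 2).
w_12 = [[(2, 1), (3, 1)],
        [(1, 2), (4, 1)]]

def connect(src_population, dst_population, w):
    """Bind one stored template matrix to a concrete population pair,
    yielding |Ti| x |Tj| individual connections."""
    edges = []
    for i, dst in enumerate(dst_population):
        for j, src in enumerate(src_population):
            weight, delay = w[i][j]
            edges.append((src, dst, weight, delay))
    return edges

# The same template serves every population instance of the matching types,
# so w_12 is stored only once rather than once per instance:
print(len(connect(["P2_a", "P2_b"], ["P1_a", "P1_b"], w_12)))  # 4
```

The point of the sketch is only the storage saving: the `|Ti|×|Tj|` connection state lives in one template, not in every bound instance.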
Examiner notes that the claimed indexes are used to process connected elements in a spiking neural network, per what is known by one of ordinary skill in the art and as noted in the cited reference above and in [0217]: In Example 16, the subject matter of any one or more of Examples 1-15 optionally include a soma circuit, comprising: a soma input connected to the dendrite output and at which the dendrite compartment weighted sum value is received comprising an index to a related soma compartment [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]; a soma configuration memory of a soma compartment associated with the dendrite compartment, the soma configuration memory to store configuration parameters for a neuron comprising the soma compartment that is configured to be updated by the processor based on the received weighted sum value; a soma state memory that is to store the neuron's present activation state level and that is configured to be updated by the processor based on the received weighted sum value [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay], wherein if an updated present activation state level exceeds a threshold activation level value, the processor is configured to generate an output spike event comprising a spiking neuron index [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight 
with which to weigh the firing event message]; an axon map memory to store a mapping of the spiking neuron index to a spike fan-out destination list identifier; an axon configuration memory to store a list of one or more destination core-axonID pairs referenced by the spike fan-out destination list identifier; and an output circuit configured to route a spike message [to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] to each destination core [to access the respective memory entry in the third memory unit to obtain, from the third memory unit] of the list.)
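For illustration only (this sketch is not taken from the cited references, and all structures and field names are hypothetical), the indexed lookup chain described in the passages above — an incoming AxonID indexed into a map table yielding a base index and length into a synapse configuration table, whose entries hold a target neuron reference, a signed weight, and a delay in the range 1..15 — may be sketched as:

```python
# Illustrative sketch only: a minimal model of the SYNAPSE_MAP ->
# SYNAPSE_CFG lookup chain the cited passages describe. All names,
# values, and dictionary layouts are hypothetical.

# First table: maps an incoming AxonID to (base index, entry count)
# into the synapse configuration table.
SYNAPSE_MAP = {0: (0, 2)}

# Synapse configuration table: per-synapse entries holding a target
# neuron reference, a signed weight, and a delay in the range 1..15.
SYNAPSE_CFG = [
    {"neuron": 7, "weight": 5,  "delay": 2},
    {"neuron": 9, "weight": -3, "delay": 1},
]

def handle_spike(axon_id):
    """Expand one input spike into (neuron, weight, delay) events by
    walking the configured range of synapse entries."""
    base, length = SYNAPSE_MAP[axon_id]
    events = []
    for idx in range(base, base + length):
        entry = SYNAPSE_CFG[idx]
        events.append((entry["neuron"], entry["weight"], entry["delay"]))
    return events

print(handle_spike(0))  # [(7, 5, 2), (9, -3, 1)]
```

The sketch shows only the two-level indirection: one map lookup selects a contiguous range of weighted, delayed synaptic entries.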
transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron. (in 0067-0069: … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B [respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron], updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event. The outgoing spike event is passed to the next AXON_MAP 334 stage [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], at time T + Daxon, where Daxon is a delay associated with the neuron's axon [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], which also is specified by SOMA_CFG 332A. At this point in the core's pipeline, the spike may be identified only by the core's neuron number that produced the spike… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. 
Each of these becomes an outgoing spike message [transmitting the firing event message to the associated neuron with the transmission delay] to the network 130, sent serially one after the other.; And in 0052-0060: FIGS. 5A-5D are illustrative pictorial drawings representing a synchronized global time step with asynchronous multiplexed core operation. FIG. 5A represents the neuromorphic mesh in an idle state with all cores inactive. FIGS. 5B-5C represent cores generating spike messages that the mesh interconnects via routes to the appropriate destination cores. FIG. 5D represents each core handshaking with its neighbors for a current time step using special barrier synchronization messages [storing updated neural state information for the associated neuron]. As each core finishes servicing the neurons that it services during a current time step, it handshakes with its neighbors to synchronize spike delivery… In accordance with an example of the basic multi-stage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process 310. At stage (A) 620, the input spikes are distributed by the dendrite process 310 to multiple fan-out synapses within the core with appropriate weight and delay offset (W, D) [transmitting the firing event message to the associated neuron with the transmission delay] via the SYNAPSE_MAP 312. At stage (B) 630, the dendrite 310 maintains sums of all received synaptic weights for future time steps over each dendritic compartment 632 in the dendrite accumulator memory 316. 
That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632… The dendrite logic circuit 310 may perform the following functions at synchronization time step T (this is a global time step that the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D, synchronizing and flushing of spikes that are in flight within the network): claimed units disclosed in 0193-0195.; And data range of input spike as depicted in Fig. 9 for processing outputs as claimed)
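For illustration only (this sketch is not taken from the cited references; the function names and values are hypothetical), the stage (B) dendritic accumulation described above — weights summed into an accumulator slot addressed by dendrite ID and future delivery time step, then handed to the soma and reset — may be sketched as:

```python
# Illustrative sketch only: a dendritic accumulator keyed by
# (dendrite ID, delivery time step). All names are hypothetical.
from collections import defaultdict

accumulator = defaultdict(int)   # (dendrite_id, time_step) -> weight sum

def deliver(dendrite_id, weight, delay, now):
    # A weight with delay D lands in the slot for future step now + D.
    accumulator[(dendrite_id, now + delay)] += weight

def service(dendrite_id, now):
    # At step `now`, hand the accumulated total to the soma and reset the
    # slot to zero so its state may be reused for a future step.
    return accumulator.pop((dendrite_id, now), 0)

deliver(3, 5, 2, now=0)    # weight 5, delay 2 -> arrives at step 2
deliver(3, -1, 2, now=0)   # inhibitory weight, same slot
print(service(3, now=2))   # 4
```

The reset-on-service step mirrors the cited behavior of repurposing accumulator state for future time steps.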
Examiner notes that the Dav reference teaches a synapse slice memory as a memory ID in a list of memory elements associated with a synapse component in a distributed processing architecture system, as noted above.
Additionally, Cha teaches a synapse slice memory as memory cell blocks, in Pg. 35 Sec. III: The emulator hardware is shown schematically in Figure 5. First, the process controller is shown, which is a Finite State Machine (FSM) that starts/monitors all FSMs within other hardware blocks and controls two counters. Secondly, the Synapses block contains all temporally sliced synapses [distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron]. Thirdly, the Algorithm block consists of a number of pipelined hardware multipliers, calculating all weight update values. Finally, the Addition block is simply an adder for synapse outputs [distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron]. In subsequent parts of this section, hardware designs for each block will be detailed. Process controller: The process controller is the main FSM for the hardware system… Then, the multipliers retrieve the value stored in the memory and multiply it with the first input sample for all slices (denoted slice loop 1). Then, outputs w and y are stored in an output BRAM [distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron]. The algorithm is started to calculate the weight updates. When ready, the memory cells are started, updating their weights to contain the new weight value calculated by the algorithm (denoted slice loop 2)… Throughout this process, the sample and slice index values are updated to allow correct data selection from the input data BRAM and control of the amount of loops in the process FSM… The hardware block emulating the artificial synapse consists of a multiplier cell emulator and a memory cell emulator. Also, it contains a slice BRAM. 
The slice BRAM contains for each slice in the synapse [distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron] …
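For illustration only (this sketch is not taken from Cha; the names and weight values are hypothetical), the temporal slicing described in the quoted passage — many synapse weights time-multiplexed through a single multiplier, with a per-slice weight memory and an adder summing the slice outputs — may be sketched as:

```python
# Illustrative sketch only: time-multiplexing ("temporal slicing") several
# synapse weights through one multiplier, then summing the outputs, as in
# the emulator structure the quoted passage describes. All names and
# values are hypothetical.

slice_bram = [0.5, -0.25, 1.0]   # one stored weight per synapse slice

def synapse_block(x):
    """Multiply the input sample by each sliced weight in turn (the slice
    loop, reusing one multiplier), then sum (the Addition block)."""
    outputs = []
    for w in slice_bram:          # one multiply per slice, per cycle
        outputs.append(w * x)
    return sum(outputs)

print(synapse_block(4.0))  # 5.0
```

The design choice the sketch reflects is resource sharing: one physical multiplier emulates as many synapses as there are slices, at the cost of serialized cycles.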
Dav and Cha are analogous art because both involve developing spiking neural network machine learning techniques and systems using hardware and software based architectures.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing systems and methods for implementing spiking neurons for emulation of an artificial neural network on an FPGA-based accelerator as disclosed by Cha with the method of developing devices and methods for operating a neuromorphic processor comprised of neuromorphic cores for implementing operations of a spiking artificial neural network as disclosed by Dav.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Cha and Dav to emulate large amounts of synapses on an FPGA with limited resources while improving processing speed (Cha, Abstract).
Additionally, Akin teaches the use of block memory for synapses such that, in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices (in 0143: …the neural network accelerator comprising: neural processing circuitry to process of a plurality of neurons of a neural network with respective neural processor clusters, with the circuitry to: determine and maintain respective states of the plurality of neurons [in response to determining that the updated neural state information indicates the firing state, …]; and communicate spike messages within the neural network accelerator, based on the respective states; and axon processing circuitry to process synapse data of a plurality of synapses in the neural network with respective axon processors, with the circuitry to: communicate the spike messages with the neural processing circuitry [in response to determining that the updated neural state information indicates the firing state, …]; retrieve synapse data [and distributing a firing event message by: accessing a respective memory entry …] of a subset of the plurality of synapses from a bank of memory [… a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices associated with a memory bank] that is external to the neural network accelerator; evaluate the synapse data, based on a spike message received from a presynaptic neuron of a neural processor cluster; and transmit, based on the evaluated synapse data, a weighted spike message [resetting the neural state information so as to indicate the initial state …] to a postsynaptic neuron at a neural processor cluster.)
Dav, Cha, and Akin are analogous art because all involve developing spiking neural network machine learning techniques and systems using hardware and software based architectures.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing systems and methods for implementing the arrangement of memory and processing resources in a spiking neural network (SNN) architecture based on neuromorphic computing using digital neuromorphic components such as neuron processors, axon processors, and SNN accelerator as disclosed by Akin with the method of developing devices and methods for operating a neuromorphic processor comprised of neuromorphic cores for implementing operations of a spiking artificial neural network as collectively disclosed by Cha and Dav.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Akin, Cha and Dav to provide beneficial performance-per-energy characteristics when implementing SNN using hardware based configurations (Akin, 0005).
Regarding independent claim 22, Dav teaches a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. 
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) [memory that stores instructions]… Machine (e.g., computer system) [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 [memory that stores instructions] and a static memory 26006 [memory that stores instructions], some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…; And in 0197: While the machine readable medium 26022 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 26024.)
Regarding the remaining limitations of claim 22, the limitations are similar to those of claim 1 and are rejected under the same rationale.
Claims 10-11, 13-29 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Davies (Pub. No.: US 2018/0174026, hereinafter ‘Dav’) in view of Imam et al. (US 20180174023, hereinafter ‘Imam’).
Regarding independent claim 10, Dav teaches a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. 
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state, (in 0034: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered. Further, in a spiking neural network, each neuron is modeled after a biological neuron, as the artificial neuron receives its inputs via synaptic connections to one or more "dendrites" (part of the physical structure of a biological neuron), and the inputs affect an internal membrane potential of the artificial neuron "soma" (cell body). In a spiking neural network, the artificial neuron "fires" (e.g., produces an output spike), when its membrane potential crosses a firing threshold. Thus, the effect of inputs on a spiking neural network neuron operate to increase or decrease its internal membrane potential, making the neuron more or less likely to fire. Further, in a spiking neural network, input connections may be stimulatory or inhibitory. A neuron's membrane potential may also be affected by changes in the neuron's own internal state [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state] ("leakage").; And in 0037: The cores 110 may communicate via short packetized spike messages that are sent from core 110 to core 110. Each core 110 may implement a plurality of primitive nonlinear temporal computing elements referred to herein as "neurons" [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]… Each neuron may be characterized by an activation threshold. 
A spike message received by a neuron contributes to the activation of the neuron….; And in 0067: SOMA_CFG 332A and SOMA_STATE 332B [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B…
the neuromorphic processing method comprising: (in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron; (0056-0058: As discussed above with respect to FIG. 3, the neuromorphic neuron core 300 may be comprised of two loosely coupled asynchronous components [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron]: (1) an input dendrite logic circuit 310 configured to receive spikes from the routing network 130 and to apply them to the appropriate destination dendrite compartments at the appropriate future times, and (2) a soma logic circuit 330 configured to receive each dendrite compartment's accumulated values for the current time and to evolve each soma's membrane potential state [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] to generate outgoing spike messages at the appropriate times… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… And in 0057-0062: FIG. 
6 is an illustrative pictorial internal architecture level drawing representing an example of an operation of a dendrite logic circuit 310 and of a soma logic circuit 330 of a neuromorphic neuron… The hardware services provided by the soma (e.g., axon) logic circuits 330 and dendrite logic circuits 310 may be dynamically configured in a time-multiplexed manner to share the same physical wiring resources within a core among multiple neuromorphic neurons implemented by the core… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D, synchronizing and flushing of spikes that are in flight within the network): 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network… WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 322a and soma state (STATE) 332B memory values [retrieving neural state information for a neuron of the plurality of neurons] may be updated [updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] for the corresponding soma compartment idx 652…. 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an "Axon ID" unique to the core that identifies a distribution set of dendrites within the core. 
Each element of the distribution set is referred to as synapse… 2) While not handling input spikes, the dendrite logic circuit process 310 serially services all dendrites Si sequentially, passing the total accumulated neurotransmitter values amounts for time T to the Soma stage, resetting the neurotransmitter totals to zero so the state may be repurposed for a future step (n)
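For illustration only, the two-stage dendrite behavior quoted above (accumulate weighted spikes at a delay offset into per-compartment sums, then serially service each compartment for time T and reset its total) can be sketched as follows; the class name, compartment count, and D_MAX value are illustrative assumptions, not the cited implementation:

```python
D_MAX = 15  # assumed maximum delay offset, consistent with the 4-bit delay range cited below


class DendriteAccumulator:
    """Illustrative sketch of the cited dendrite process: spikes are
    accumulated into per-compartment weight sums scheduled for future
    time steps, then each compartment is serviced serially and its
    total is reset so the state may be repurposed for a future step."""

    def __init__(self, num_compartments):
        # ring of future time slots, one weight-sum row per slot
        self.sums = [[0] * num_compartments for _ in range(D_MAX + 1)]
        self.t = 0

    def receive_spike(self, synapses):
        # each synapse element: (dendrite_idx, weight, delay), delay in 1..D_MAX
        for dendrite_idx, weight, delay in synapses:
            slot = (self.t + delay) % (D_MAX + 1)
            self.sums[slot][dendrite_idx] += weight

    def service(self):
        # pass the accumulated WeightSum values for time T to the soma
        # stage, resetting the totals to zero, then advance the time step
        slot = self.t % (D_MAX + 1)
        totals = list(self.sums[slot])
        self.sums[slot] = [0] * len(totals)
        self.t += 1
        return totals
```

In this sketch a spike received at step T with delay D lands in the accumulator row serviced at step T + D, matching the delay-offset accumulation described in the citation.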
determining that the updated neural state information indicates the firing state; and in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state (0067-0068: SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T... On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly. More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back… In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event…. If the updated Vm value exceeds the threshold, then the stored activation level may be reset to an activation level of zero [determining that the updated neural state information indicates the firing state; in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state]. If the updated Vm value does not exceed the threshold, then the updated Vm value may be stored in the SOMA_STATE memory 332B for use during a subsequent synchronization time step.
AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table...)
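The quoted Vm update, threshold comparison, and reset-to-zero behavior can be sketched for illustration only; the function and parameter names are assumptions, not the reference's code:

```python
def soma_update(vm, weight_sum, threshold, reset_level=0):
    """Illustrative sketch of the cited SOMA_STATE update: integrate the
    accumulated dendrite value into the membrane potential Vm, compare
    against the threshold stored in SOMA_CFG, and reset on spiking."""
    vm = vm + weight_sum          # simplest integration case from the citation
    if vm > threshold:            # firing state: Vm crossed the threshold upward
        return reset_level, True  # stored activation level reset to zero; spike emitted
    return vm, False              # no spike: updated Vm written back for the next step
```

For example, a neuron at Vm = 8 receiving an accumulated weight of 5 against a threshold of 10 spikes and resets to zero, while the same input at Vm = 4 merely stores the updated potential.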
and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices; (0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices; Examiner notes the neuron index used for the mapping memory as the claimed first memory unit associated with respective identifiers] in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334 [an indication of a respective range of output synapse indices], a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices associated with the core and id memory unit].
Each of these becomes an outgoing spike message to the network 130 [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron], sent serially one after the other…NETWORK 130: The network 130 routes each spike message to a destination core in a stateless, asynchronous manner [Alternatively, distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron associated with a destination core having the claimed first memory based on memory id]. From the standpoint of the computational model, the routing happens in zero time, i.e., if the spike message is generated at time T, then it is received at the destination core [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices] at time T relative to the source core's time step…)
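The quoted AXON_MAP/AXON_CFG indirection (spiking neuron index to a (base_address, length) pair, then a serial list of (dest_core, axon_id) pairs) can be sketched for illustration only; the table contents and names below are assumptions, not the reference's data:

```python
def fan_out(neuron_idx, axon_map, axon_cfg):
    """Illustrative sketch of the cited two-table indirection: AXON_MAP maps
    the spiking neuron index to a (base_address, length) pair, and AXON_CFG
    holds the serial list of (dest_core, axon_id) fan-out destinations."""
    base_address, length = axon_map[neuron_idx]
    # one outgoing spike message per entry, sent serially one after the other
    return [axon_cfg[base_address + i] for i in range(length)]


# Illustrative tables: neuron 3 multicasts to two destinations, neuron 7 unicasts.
axon_map = {3: (0, 2), 7: (2, 1)}
axon_cfg = [("coreA", 11), ("coreB", 4), ("coreC", 9)]
```

Because each list is uniquely mapped by neuron index, some neurons map to many destinations (multicast) while others map to a single destination (unicast), exactly as the citation describes.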
and for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, …(in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1] [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit; Examiner notes the claimed second memory as the memory block noted by the idx]. And in 0142: Each synapse from the SYNAPSE_CFG [for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit…] entry maps to a (Weight.sub.i, Delay.sub.i) pair, where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value over the range 1 . . . 15. Each entry maps its synapse values [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, the output synapse property data] in a unique way. Examiner notes that the synapse entries are mapped in a unique way, and that the fan-out memory unit (the synapse map retrieval memory and destination memory) uses multicast distributions over the network architecture, as depicted in 0064-0069: … Communication and computation in the neuromorphic architecture occurs in an event driven manner in response to spike events as they are generated and propagated throughout the neuromorphic network. Note that the soma 330 and dendrite 310 components shown in FIG.
7, in general, will belong to different physical cores… For example, when traversing the neuromorphic network, the spikes may be encoded as short data packets identifying a destination core and Axon ID… Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly… AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table. This allows AXON_CFG's 336 memory resources to be shared across all neurons implemented by the core in a flexible, non-uniform manner. In an alternate embodiment, the AXON_MAP 334 state is integrated into the SOMA_CFG 332A memory… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution), while others may only map to a single destination (unicast). List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory.)
the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron; (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way. And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652. At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs for all destination cores via the AXON_MAP memory 334.
At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron] unique to the core that identifies a distribution set of dendrites within the core [respective memory entry in a third memory unit comprising a reference to an associated neuron]. Each element of the distribution set is referred to as synapse, specifying a dendrite number [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit], a connection strength (weight W) [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0067] SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron] operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]… And in [0130] FIG. 16 is a register definition of SYNAPSE_MAP[0 . . . 2047] 1600 (1410).
The SYNAPSE_MAP table 1600 maps each input spike received by the core to a list of synaptic entries in SYNAPSE_CFG 1420. Its specific behavior depends on whether the input spike is a discrete (standard) spike containing just an AxonID or a population spike containing both FIP (AxonID) [a respective input synapse index corresponding] and SRC_ATOM identifiers [a respective memory entry in a third memory unit comprising a reference to an associated neuron]… )
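The SYNAPSE_MAP-to-SYNAPSE_CFG ingress mapping quoted above (an input spike's AxonID selecting the span SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1] of weight/delay entries) can be sketched for illustration only; the field layout and table contents below are assumptions, not the reference's register definitions:

```python
def synapse_list(axon_id, synapse_map, synapse_cfg):
    """Illustrative sketch of the cited ingress mapping: SYNAPSE_MAP maps
    each input spike's AxonID to a span of synaptic entries in SYNAPSE_CFG,
    each entry assumed here to be a (dendrite_idx, weight, delay) triple."""
    idx, cfg_len = synapse_map[axon_id]
    # the synaptic weight list spans SYNAPSE_CFG[idx] .. SYNAPSE_CFG[idx + cfg_len - 1]
    return synapse_cfg[idx:idx + cfg_len]


# Illustrative tables: AxonID 5 fans out to two synapses within the core.
synapse_map = {5: (1, 2)}
synapse_cfg = [(0, 4, 1), (1, -3, 2), (2, 6, 15)]
```

Each returned entry supplies the dendrite number, signed weight, and delay offset that the dendrite logic circuit accumulates into the appropriate future time slot.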
using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message; and (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way.; And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 322a and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652.
At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message, wherein the third memory is a destination unit obtaining information from the second unit] for all destination cores via the AXON_MAP memory 334. At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [… obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] unique to the core that identifies a distribution set of dendrites within the core. Each element of the distribution set is referred to as synapse, specifying a dendrite number, a connection strength (weight W) [obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0108] FIG. 11 is an illustrative pictorial drawing showing an example population connectivity model 1100. Connectivity state w.sub.ij specify a template network between population types (T.sub.i, Tj). Connectivity may be bound to any number of specific neuron populations of the corresponding types. The w.sub.ij state needs only be stored once per network type, rather than redundantly for each network instance.
[0109] More particularly, the network template is specified in terms of three neuron population types (T.sub.1, T.sub.2, and T.sub.3) with four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23). Each connection matrix w.sub.ij specifies the connectivity state (typically a weight and delay pair) between all neurons in a population type j connecting to all neurons in the destination population type i [[using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]]. Hence each w.sub.ij matrix specifies |T.sub.i|×|T.sub.j| connections where |T.sub.i| indicates the number of neurons in a population type Ti. Thus, in the example shown in FIG. 11, the four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23) are used to connect neurons of neuron populations (P.sub.1, P.sub.2, P.sub.3), to connect neurons of neuron populations (P.sub.4, P.sub.5, P.sub.6), and to connect neurons of neuron populations (P.sub.7, P.sub.8, P.sub.9) [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]… [0121] FIG. 15 is an illustrative flow diagram representing population spike generation mapping flow in a soma logic circuit 1500 (330). 
At the Soma stage and downstream, in order to generate the appropriately formatted population spike message, a particular spiking neuron must be mapped to its constituent population and source atom offset within the population [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. Each neuron's compartment index uniquely identifies this information, so one place to map these values is in AXON_MAP 1510 (334). FIG. 15 shows the egress population spike generation pathway [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. In this case, the AXON_CFG memory 1520 (336) is compressed by a factor of pop_size compared to the baseline case since only one population spike entry is needed per destination fip. All atoms (compartment indices) belonging to the source population reference the same entry as mapped by AXON_MAP 1510.…
Examiner notes that the claimed indexes are used to process connected elements in a spiking neural network, as known to one of ordinary skill in the art and as noted in the cited reference above and in [0217] In Example 16, the subject matter of any one or more of Examples 1-15 optionally include a soma circuit, comprising: a soma input connected to the dendrite output and at which the dendrite compartment weighted sum value is received comprising an index to a related soma compartment [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]; a soma configuration memory of a soma compartment associated with the dendrite compartment, the soma configuration memory to store configuration parameters for a neuron comprising the soma compartment that is configured to be updated by the processor based on the received weighted sum value; a soma state memory that is to store the neuron's present activation state level and that is configured to be updated by the processor based on the received weighted sum value [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay], wherein if an updated present activation state level exceeds a threshold activation level value, the processor is configured to generate an output spike event comprising a spiking neuron index [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight
with which to weigh the firing event message]; an axon map memory to store a mapping of the spiking neuron index to a spike fan-out destination list identifier; an axon configuration memory to store a list of one or more destination core-axonID pairs referenced by the spike fan-out destination list identifier; and an output circuit configured to route a spike message [to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] to each destination core [to access the respective memory entry in the third memory unit to obtain, from the third memory unit] of the list.)
transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron. (in 0067-0069: … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B [respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron], updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event. The outgoing spike event is passed to the next AXON_MAP 334 stage [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], at time T + Daxon, where Daxon is a delay associated with the neuron's axon [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], which also is specified by SOMA_CFG 332A. At this point in the core's pipeline, the spike may be identified only by the core's neuron number that produced the spike… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table.
Each of these becomes an outgoing spike message [transmitting the firing event message to the associated neuron with the transmission delay] to the network 130, sent serially one after the other.; And in 0052-0060: FIGS. 5A-5D are illustrative pictorial drawings representing a synchronized global time step with asynchronous multiplexed core operation. FIG. 5A represents the neuromorphic mesh in an idle state with all cores inactive. FIGS. 5B-5C represent cores generating spike messages that the mesh interconnects via routes to the appropriate destination cores. FIG. 5D represents each core handshaking with its neighbors for a current time step using special barrier synchronization messages [storing updated neural state information for the associated neuron]. As each core finishes servicing the neurons that it services during a current time step, it handshakes with its neighbors to synchronize spike delivery… In accordance with an example of the basic multi-stage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process 310. At stage (A) 620, the input spikes are distributed by the dendrite process 310 to multiple fan-out synapses within the core with appropriate weight and delay offset (W, D) [transmitting the firing event message to the associated neuron with the transmission delay] via the SYNAPSE_MAP 312. At stage (B) 630, the dendrite 310 maintains sums of all received synaptic weights for future time steps over each dendritic compartment 632 in the dendrite accumulator memory 316.
That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632… The dendrite logic circuit 310 may perform the following functions at synchronization time step T (this is a global time step that the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D, synchronizing and flushing of spikes that are in flight within the network): the claimed units are disclosed in 0193-0195. And the data range of an input spike is depicted in Fig. 9 for processing outputs as claimed.)
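The delayed, barrier-synchronized delivery described above (a spike generated at step T with synaptic delay D is serviced at global step T + D, with the barrier ensuring no spike for step T remains in flight) can be sketched for illustration only; the event tuple layout is an assumption, not the reference's message format:

```python
import heapq
from collections import defaultdict


def run_steps(events, num_steps):
    """Illustrative sketch of barrier-synchronized delivery: spike messages
    generated at time T with a per-synapse delay D are serviced at global
    step T + D; all cores advance time steps together, so every spike
    scheduled for step t is popped before the step completes."""
    queue = []  # min-heap of (delivery_time, dendrite_idx, weight)
    delivered = defaultdict(list)
    for t_sent, dendrite_idx, weight, delay in events:
        heapq.heappush(queue, (t_sent + delay, dendrite_idx, weight))
    for t in range(num_steps):
        # barrier for step t: service everything scheduled for this step
        while queue and queue[0][0] == t:
            _, dendrite_idx, weight = heapq.heappop(queue)
            delivered[t].append((dendrite_idx, weight))
    return delivered
```

For example, two spikes sent at step 0 with delays 2 and 1 are serviced at steps 2 and 1 respectively, mirroring the (W, D) fan-out distribution via SYNAPSE_MAP quoted above.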
Examiner notes that claimed synapse and neural index are given broadest reasonable interpretation in light of the specification and the neuromorphic spiking neuron network uses processing cores to model neuron and synapse communication are also claimed indexes associated with the core memory for process neuron and synapse operations using axon and dendrites as noted above.
Additionally, the examiner notes that the Dav reference teaches a synapse slice memory as a memory identifier in a list of memory elements associated with a synapse component in a distributed processing architecture, as noted above. One of ordinary skill in the art would know that a neural network architecture can include a plurality of neighboring neurons connected by weighted synapses (e.g., as layers or messaging paths) having respective memory associated with the connections, as the claimed index memories.
Additionally, Iman teaches that a neural network architecture can include a plurality of neighboring neurons connected by weighted synapses (e.g., as layers or messaging paths) having respective memory associated with the connections, as the claimed index memories, as depicted in Figs. 2A, 3A and 3B:
[media_image1.png: greyscale image, 932 × 702]
[media_image2.png: greyscale image, 310 × 558]
[media_image3.png: greyscale image, 604 × 698]
And in 0026-0033: In one implementation, a neuromorphic computing system is provided that adopts a multicore architecture where each core houses the computing elements including neurons, synapses with on-chip learning capability, and local memory to store synaptic weights and routing tables for routing information […accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron retrieving a … accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, … a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit …]… For instance, a network 210 of spiking neural network cores may be provided in the device 205 and may each communicate via short packetized spike messages sent from core to core over the network channels. 
Each core (e.g., 215) may possess processing and memory resources and logic to implement some number of primitive nonlinear temporal computing elements to implement one or more neurons using the neuromorphic core… For instance, in one example, the neuromorphic computing device may include a programming interface 235 to accept data defining a particular network topology [including claimed […first memory unit that is indexed by neuron identifier,…, in a second memory unit that is indexed by output synapse identifier, …corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier …], including the number of neurons to implement in the neural network, the synapses used to interconnect the neurons, the respective synaptic weights of the synapses, individual parameters of the neurons to be implemented on the neuromorphic computing device, among other configurable attributes… . Definition of an SNN may include the definition and provisioning of specific routing tables […accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron retrieving a … accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, … a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit …] on the various routers in the network 210 (e.g., corresponding to synapses defined in the SNN), orchestration of a network definition and attributes (e.g., weights, decay rates, etc.) 
to be applied in the network, core synchronization and time multiplexing management, routing of inputs to the appropriate cores, among other potential functions… In some implementations, a neuromorphic computing device 205 may be provided with additional logic to implement various features and models of an artificial neuron, artificial synapse, soma, or axon, etc. to provide additional optional functionality for SNNs implemented using the neuromorphic computing device 205… Each neuromorphic core 215 may additionally provide local memory in which a routing table may be stored and accessed for a neural network, accumulated potential of each soma of each neuron implemented using the core may be tracked, parameters of each neuron implemented by the core may be recorded, among other data and usage. Components, or architectural resources, of a neuromorphic core 215 may further include an input interface 265 to accept input spike messages generated by other neurons on other neuromorphic cores and an output interface 270 to send spike messages to other neuromorphic cores over the mesh network… And in 0041-0048: Spike messages may identify a particular distribution set of dendrites within the core [distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier,…]. 
Each element of the distribution set may represent a synapse of the modeled neuron, defined by a dendrite number, a connection strength (e.g., weight W), a delay [output synapse property data comprising a specification of a transmission delay] offset D, and a synapse type, among potentially other attributes…The soma process, at each time step, receives an accumulation of the total spike weight received (WeightSum) via synapses mapped to specific dendritic compartments of the soma [a respective memory entry for the updated neuron retrieving a from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices]. In the simplest case, each dendritic compartment maps to a single neuron soma. In other instances, a neuromorphic core mesh architecture may additionally support multi-compartment neuron models [and for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit,]…Synapses are directional, and neurons are able to communicate to each other if a synapse exists. FIG. 3A is a simplified block diagram 300a illustrating a simple example neural network, including neurons 305, 310, 315, 320 connected by synapses. The synapses allow spike messages to be transmitted between the neurons. For instance, neuron 305 may receive spike messages generated by neurons 315, 320…For instance, FIG. 
3B shows an example illustrating synaptic connections between individual dendrites of neurons in a network [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], and the parameters [output synapse property data comprising a specification of a transmission delay] that may be defined for these neurons and synapses. As an example, in FIG. 3B, neurons 325, 330, 335 implemented by cores of an example neuromorphic computing device are shown, together with synapses defined (e.g., using a routing table) for interconnections within a neural network implemented using the neurons 325, 330, 335 […first memory unit that is indexed by neuron identifier,…, in a second memory unit that is indexed by output synapse identifier, …corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier …]. Each neuron may include one or more dendrite (processes) (e.g., 340, 360, 375, 380) and a respective soma (process) (e.g., 345, 365, 385) […first memory unit that is indexed by neuron identifier,…, in a second memory unit that is indexed by output synapse identifier, …corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier …]. 
Spike messages received at each of the dendrites of a respective neuron may contribute to the activation potential of the soma, with the soma firing a spike message when the soma-specific potential threshold is reached…In summary, based on the parameters of a given neuron and weights of the various synapses connecting other neurons in an SNN to the given neuron [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], as spike messages are received by the neuron, the current introduced through each spike may increase the membrane potential of the given neuron until a spiking threshold set for the neuron is met, causing the neuron to send an outgoing spike message to one or more other neighboring neurons in the SNN [… the firing event message when updating the associated neuron]…where Δw is the change in weight applied to a synapse between a presynaptic neuron and a postsynaptic neuron [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], t.sub.post is the latest time of spike of the postsynaptic neuron, t.sub.pre is the latest time of spike of the presynaptic neuron, δ.sub.1 and δ.sub.2 are tunable parameters that set the rate and direction of learning,…; And 0051: …The spikes received at neurons 405 and 425 may cause neurons 405 and 425 to in turn send spikes on all of their outbound synapses (e.g., 445, 450, 455, 460) at t=1 [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the 
weight with which to weigh the firing event message that is sent to outbounded synapses from input synapse]. Of note is that neurons 405 and 425, by virtue of their bidirectional connection to neuron 406, echo back a spike (on synapses 445, 460, respectively) to neuron 406 at t=1. In that the presynaptic spikes sent by neurons 405, 425 on synapses 445, 460 at t=1 followed the postsynaptic spike sent by neuron 406 at t=0, the STDP rule for this SNN has been satisfied for synapses 445, 460 and their respective weights may be increased (e.g., by a value Δw) [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]. The spike wave may continue at t=2, with neurons 404 and 430 responding to the spikes received from neurons 405 and 425 at t=1, with presynaptic spikes sent by neurons 404 and 430 at t=2 using synapses 465, 470 respectively. Neuron 406 does not send another spike at t=2 in response to the spikes echoed back by neurons 405 and 425 at t=1 given a refractory period enforced at neuron 406 (and all neurons in at least this portion of the example SNN) that prevents the neuron from sending any subsequent spikes for a defined period following the sending of a preceding outbound spike. Further, as the presynaptic spikes sent on synapses 465, 470 by neurons 404 and 430 at t=2 to neurons 405, 425 followed the post-synaptic spikes sent by neurons 405, 425 at t=1, the synaptic weights of synapses 465, 470 may likewise be adjusted based on the STDP learning rule, resulting in an SNN with the enhanced post-learning weights represented by diagram 480.
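For illustration only, the three-memory-unit indexing scheme recited in the claim language mapped above can be expressed as a minimal sketch. This is not code from Dav or Imam; the table contents, names, and data structures are hypothetical simplifications showing how a firing event would be distributed through a first memory unit indexed by neuron identifier, a second indexed by output synapse identifier, and a third indexed by input synapse identifier.

```python
# Hypothetical sketch of the claimed three-table lookup (not the cited design).

# First memory unit: neuron identifier -> range of output synapse indices.
first_mem = {0: range(0, 2), 1: range(2, 3)}

# Second memory unit: output synapse identifier -> (transmission delay, input synapse index).
second_mem = {0: (1, 10), 1: (3, 11), 2: (2, 12)}

# Third memory unit: input synapse identifier -> (reference to associated neuron, weight).
third_mem = {10: (1, 0.5), 11: (2, -0.25), 12: (0, 1.0)}

def distribute_firing_event(neuron_id):
    """Distribute a firing event from the given neuron through the three tables."""
    events = []
    for out_syn in first_mem[neuron_id]:       # access first memory unit
        delay, in_syn = second_mem[out_syn]    # access second memory unit
        target, weight = third_mem[in_syn]     # access third memory unit
        events.append((target, weight, delay)) # weighted, delayed delivery
    return events

# Neuron 0 fans out two weighted, delayed events; neuron 1 fans out one.
```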
Dav and Imam are analogous art because both are directed to spiking neural network machine learning techniques and systems using hardware- and software-based architectures.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art, namely the spiking neural network neuromorphic computing system disclosed by Imam, which adopts a multicore architecture in which each core houses the computing elements, including neurons and synapses with on-chip learning capability, and local memory to store synaptic weights and routing tables, with the devices and methods disclosed by Dav for operating a neuromorphic processor comprised of neuromorphic cores that implement operations of a spiking artificial neural network.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Dav and Imam in order to provide an improved neuromorphic computing platform that adopts a brain-inspired architecture that is both scalable and energy efficient while also supporting multiple modes of on-chip learning (Imam, 0025).
Regarding claim 11, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, comprising selecting one or more of the plurality of neurons to be updated based on compliance with an update enablement condition. (in 0067: SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly. More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared [comprising selecting one or more of the plurality of neurons to be updated based on compliance with an update enablement condition.] to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event. The outgoing spike event is passed to the next AXON_MAP 334 stage, at time T+Daxon, where Daxon is a delay associated with the neuron's axon, which also is specified by SOMA_CFG 332A. 
At this point in the core's pipeline, the spike may be identified only by the core's neuron number that produced the spike. If the updated Vm value exceeds the threshold [comprising selecting one or more of the plurality of neurons to be updated based on compliance with an update enablement condition], then the stored activation level may be reset to an activation level of zero. If the updated Vm value does not exceed the threshold, then the updated Vm value may be stored in the SOMA_STATE memory 332B [comprising selecting one or more of the plurality of neurons to be updated based on compliance with an update enablement condition] for use during a subsequent synchronization time step.)
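The threshold-gated soma update quoted from Dav above can be summarized, for illustration only, in a minimal sketch; the function and parameter names are assumptions, not the cited implementation. It shows the mapped update-enablement condition: the membrane potential Vm is integrated with the accumulated dendrite value, compared against a threshold, and either reset with a spike or written back for the next time step.

```python
# Hypothetical sketch of the SOMA_CFG/SOMA_STATE-style update (names illustrative).

def soma_update(vm, weight_sum, threshold):
    """Return (new_vm, spiked) for one synchronization time step."""
    vm += weight_sum       # add accumulated dendrite value to stored activation state
    if vm > threshold:     # update-enablement condition: threshold crossed upward
        return 0.0, True   # reset activation level and produce an outgoing spike event
    return vm, False       # otherwise write updated Vm back for the subsequent step
```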
Regarding claim 13, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit. (in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit]. Modules may include tangible entities (e.g., hardware) capable of performing specified operations [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit] and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Accordingly, the term "module" is understood to encompass a tangible entity, and that entity may be one that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. 
For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times… Machine (e.g., computer system) 26000 may include a neuromorphic processor 110 [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit], 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…; And 0002: A neuromorphic processor is a processor that is structured to mimic certain aspects of the brain and its underlying architecture, particularly its neurons and the interconnections between the neurons, although such a processor may deviate from its biological counterpart. A neuromorphic processor may be comprised of many neuromorphic (neural network) cores [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit] that are interconnected via a bus and routers which may direct communications between the cores. This network of cores may communicate via short packetized spike messages sent from core to core. Each core may implement some number of primitive nonlinear temporal computing elements (neurons). When a neuron's activation exceeds some threshold level, it may generate a spike message that is propagated to a fixed set of fan-out neurons contained in destination cores. 
The network then may distribute the spike messages to all destination neurons, and in response, those neurons update their activations in a transient, time dependent manner; And in 0229: Example 28 is an electronic neuromorphic core processor circuit comprising [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit]: a soma circuit, comprising: a soma input at which a dendrite compartment weighted sum value is received comprising an index to a related soma compartment; a soma configuration memory of a soma compartment associated with the dendrite compartment, the soma configuration memory to store configuration parameters for a neuron comprising the soma compartment and that is configured to be updated by the processor [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit] based on the received weighted sum value; a soma state memory to store the neuron's present activation state level and that is configured to be updated by the processor [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit] based on the received weighted sum value, wherein if an updated present activation state level exceeds a threshold activation level value, the processor is configured to [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit] generate an output spike event comprising a spiking neuron index; an axon map memory comprising a mapping of the spiking neuron index to a spike fan-out destination list identifier [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit]; an axon configuration memory comprising 
a list of one or more destination core-axonID pairs referenced by the spike fan-out destination list identifier; and an output circuit configured to route a spike message to each destination core of the list [comprising reconfiguring a neural network topology by updating at least one of the first memory unit, the second memory unit, or the third memory unit]. )
Regarding claim 14, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, comprising using selection information to determine whether to update a particular neuron of the plurality of neurons, the selection information indicating at least one of whether a firing event message was transmitted to the particular neuron or whether it was previously determined that the particular neuron is in an active state. (in 0037: The cores 110 may communicate via short packetized spike messages that are sent from core 110 to core 110 [using selection information to determine whether to update a particular neuron of the plurality of neurons, the selection information indicating at least one of whether a firing event message was transmitted to the particular neuron]. Each core 110 may implement a plurality of primitive nonlinear temporal computing elements referred to herein as “neurons”. In some embodiments, each core includes up to 1024 neurons. Each neuron may be characterized by an activation threshold. A spike message received by a neuron contributes to the activation of the neuron [using selection information to determine whether to update a particular neuron of the plurality of neurons, the selection information indicating at least one of whether a firing event message was transmitted to the particular neuron]. When a neuron's activation exceeds its activation threshold level, the neuron generates a spike message that is propagated to a fixed set of fan-out destination neurons indicated within the spike message that are contained in destination cores. 
The network distributes the spike messages to all destination neurons [using selection information to determine whether to update a particular neuron of the plurality of neurons, the selection information indicating at least one of whether a firing event message was transmitted to the particular neuron], and in response to the spike message, those destination neurons update their activation levels in a transient [whether it was previously determined that the particular neuron is in an active state], time-dependent manner, analogous to the operation of real biological neurons. And in 0067: SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly. More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back. … The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event [whether it was previously determined that the particular neuron is in an active state]…)
Regarding claim 15, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, wherein the plurality of neurons is coupled to a message-based network. (0069-0070: AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution) [wherein the plurality of neurons is coupled to a message-based network], while others may only map to a single destination (unicast) [wherein the plurality of neurons is coupled to a message-based network]. List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory. NETWORK 130: The network 130 routes [wherein the plurality of neurons is coupled to a message-based network] each spike message to a destination core in a stateless, asynchronous manner…)
Regarding claim 16, the rejection of claim 15 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 15, wherein the message-based network is formed as a network on chip. (in 0035: FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture 100 [wherein the message-based network is formed as a network on chip] that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 [wherein the message-based network is formed as a network on chip] are arranged to provide a SNN in which the cores 110 may communicate with other cores 110.)
Regarding claim 17, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, wherein the second memory unit specifies, for each output synapse index in the respective range of output synapse indices, a respective network address of a destination neuron. (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1] [wherein the second memory unit specifies, for each output synapse index in the respective range of output synapse indices, a respective network address of a destination neuron]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair, where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value over the range 1 . . . 15. Each entry maps its synapse values in a unique way [wherein the second memory unit specifies, for each output synapse index in the respective range of output synapse indices, a respective network address of a destination neuron].; And in 0059: … Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” unique to the core that identifies a distribution set of dendrites within the core. Each element of the distribution set is referred to as synapse, specifying a dendrite number, a connection strength (weight W), a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…)
Regarding claim 18, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron. (0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron] in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron] is serially read from the AXON_ CFG 336 table. Each of these becomes an outgoing spike message to the network 130 [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron], sent serially one after the other…NETWORK 130: The network 130 routes each spike message to a destination core in a stateless, asynchronous manner [Alternatively wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron]. 
From the standpoint of the computational model, the routing happens in zero time, i.e., if the spike message is generated at time T, then it is received at the destination core [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron] at time T relative to the source core's time step…; And in 0069: AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution) [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron], while others may only map to a single destination (unicast) [wherein the respective memory entry for the updated neuron is a single entry in the first memory unit that indicates an output synapse slice for the updated neuron]. List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory.)
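The AXON_MAP/AXON_CFG mechanism quoted from Dav above, in which a single entry per neuron provides a (base_address, length) pair selecting a slice of (dest_core, axon_id) destination pairs, can be illustrated with a minimal sketch. The table contents are hypothetical; only the table names and the (base_address, length) slice mechanism come from the quoted passages.

```python
# Hypothetical sketch of the AXON_MAP -> AXON_CFG fan-out slice (contents illustrative).

AXON_CFG = [(0, 7), (2, 3), (2, 4), (1, 9)]  # list of (dest_core, axon_id) pairs
AXON_MAP = {0: (0, 1), 1: (1, 3)}            # neuron index -> (base_address, length)

def fan_out(neuron_idx):
    """Return the outgoing spike messages for a firing neuron."""
    base, length = AXON_MAP[neuron_idx]       # single entry indicating the slice
    return AXON_CFG[base:base + length]       # serial read of the fan-out list

# Neuron 0 maps to a single destination (unicast);
# neuron 1 maps to three destinations (multicast distribution).
```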
Regarding claim 19, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method according to claim 10, wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron. (0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs [wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron] is serially read from the AXON_ CFG 336 table. Each of these becomes an outgoing spike message to the network 130 [wherein the respective memory entry for the updated neuron indicates a number of output synapses associated with the updated neuron], sent serially one after the other…NETWORK 130: The network 130 routes each spike message to a destination core in a stateless, asynchronous manner [Alternatively wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron]. 
From the standpoint of the computational model, the routing happens in zero time, i.e., if the spike message is generated at time T, then it is received at the destination core [wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron] at time T relative to the source core's time step…; And in 0069: AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs [wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron] is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution) [wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron], while others may only map to a single destination (unicast) [wherein the respective memory entry for the updated neuron in the first memory unit indicates a number of output synapses associated with the updated neuron]. List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory.)
Regarding claim 20, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, wherein the plurality of neural states further comprises one or more transitional states. (in 0168: In an implementation of the cores, a two-bit State (S) field encodes [wherein the plurality of neural states further comprises one or more transitional states] the phase of the neuron's operation as it proceeds from synaptic integration to firing to refractory period: [0169] 0: IDLE [0170] 1: REFRACT [0171] 2: FIRING [0172] 3: STALLED [wherein the plurality of neural states further comprises one or more transitional states]; And in 0173: … The three-bit DT field counts any additional time steps needed in order to implement the neuron's AxonDelay once the neuron transitions from REFRACT to FIRING [wherein the plurality of neural states further comprises one or more transitional states]. This imposes the constraint that AxonDelay−RefractDelay <8.)
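The two-bit state field quoted from Dav above can be summarized, for illustration only, as an enumeration; the class name and comments are assumptions, while the four state values come directly from the quoted passage. The encoding fits in two bits, with FIRING and STALLED serving as transitional states between integration and the refractory period.

```python
# Hypothetical sketch of Dav's two-bit State (S) field (values from the quoted passage).
from enum import IntEnum

class NeuronState(IntEnum):
    IDLE = 0      # integrating synaptic input
    REFRACT = 1   # refractory period after a spike
    FIRING = 2    # transitional: spike being emitted, subject to axon delay
    STALLED = 3   # transitional: firing deferred

# All four states fit in a two-bit field (values 0 through 3).
```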
Regarding claim 21, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, comprising updating the neural state information of the plurality of neurons in a time-multiplexed manner. (in 0059-0063: The dendrite logic circuit 310 may perform the following functions at synchronization time step T (this is a global time step that the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D—synchronizing and flushing of spikes that are in flight within the network): [0060] 1) Receive and handle spike messages as they serially arrive in time-multiplexed [comprising updating the neural state information of the plurality of neurons in a time-multiplexed manner] fashion from the network … While not handling input spikes, the dendrite logic circuit process 310 serially services all dendrites 5i sequentially, passing the total accumulated neurotransmitter values amounts for time T to the Soma stage, resetting the neurotransmitter totals to zero so the state may be repurposed for a future step (namely time step T+D.sub.MAX+1 [comprising updating the neural state information of the plurality of neurons in a time-multiplexed manner], in circular FIFO fashion)… For each compartment Si, the soma 330 receives the total accumulated neurotransmitter amount at time T, (WeightSum in FIG. 6), which may be zero, and updates all of the compartment's state variables according to its configured neural model [comprising updating the neural state information of the plurality of neurons in a time-multiplexed manner]. Soma compartments 652 generate outgoing spike events in response to a sufficiently high level of activation. 
After compartment δ.sub.i has been updated, the soma process 330 advances to the next compartment δ.sub.i+1, and so on until all compartments 632, 652 in the core have been serviced…)
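For clarity of record, the examiner provides the following illustrative sketch of the time-multiplexed serial servicing described in the cited paragraphs of Dav. All names and values are hypothetical simplifications introduced by the examiner for illustration only and are not drawn from the reference.

```python
# Hypothetical sketch: one shared update circuit serially services each
# compartment's state in turn for a single synchronization time step T,
# integrating the accumulated input, resetting the accumulator so its
# storage may be reused for a future step, and resetting the neural
# state to the initial state upon firing. Threshold value is illustrative.
def service_time_step(weight_sums, states, threshold=10):
    """Serially update every compartment's membrane potential for one
    time step; return the indices of neurons that fired."""
    spikes = []
    for idx in range(len(states)):          # one compartment at a time
        states[idx] += weight_sums[idx]     # integrate accumulated input
        weight_sums[idx] = 0                # reset accumulator for reuse
        if states[idx] > threshold:         # firing state reached
            spikes.append(idx)
            states[idx] = 0                 # reset to the initial state
    return spikes
```

Under this sketch, a neuron whose updated state exceeds the threshold is reported as spiking and its state is reset, consistent with the cited reset-to-zero behavior.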
Regarding independent claim 22, Dav teaches a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) [memory that stores instructions]… Machine (e.g., computer system) [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 [memory that stores instructions] and a static memory 26006 [memory that stores instructions], some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…; And in 0197: While the machine readable medium 26022 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 26024.)
Regarding the remaining claim 22 limitations, the limitations are similar to those of claim 10 and are rejected under the same rationale.
Regarding claims 23-29, the rejection of claim 22 is incorporated; the limitations are similar to those of claims 15-21, respectively, and claims 23-29 are rejected under the same rationale.
Regarding claim 31, the rejection of claim 10 is incorporated and Dav in combination with Imam further teaches the neuromorphic processing method of claim 10, wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier. (in [0038]: FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier]. A dendrite logic circuit 310 may include an input circuit (interface) 320 to receive spike messages, a synapse map memory 312, a synapse configuration (CFG) memory 314, and a dendrite accumulator memory 316. A soma logic circuit 330 includes an output circuit (interface) 340 to provide spike messages produced by the soma circuit, a soma CFG/state memory 332 [wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier], an axon map memory 334 and an axon CFG memory 336.; And in 0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table…. And in 0128: The NumGroups configuration parameter controls the number of configured neurons in the core [wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier].
The core may service neuron state [wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier] on every time step in order from 0 to 4*NumGroups−1. The value may be changed during the idle phase of barrier synchronization when all cores are halted.; And in [0225]: In Example 24, the subject matter of any one or more of Examples 16-23 optionally include wherein the soma state memory is further partitioned into a smaller memory that contains a subset of state information per neuron that determines whether each neuron is active or inactive [wherein the neural state information is stored in a fourth memory unit that is indexed by neuron identifier], and in the inactive case allows the processor to skip any further processing of the neuron when the weighted sum input is zero.)
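For clarity of record, the examiner provides the following illustrative sketch of a neural-state memory addressed by neuron identifier, as the claimed fourth memory unit is being interpreted in view of the cited soma state memory. All class and member names are hypothetical and introduced by the examiner for illustration only.

```python
# Hypothetical sketch: per-neuron state is read, modified, and written
# back at the entry addressed by the neuron's identifier; a smaller
# active/inactive flag per neuron permits skipping further processing
# when the weighted-sum input is zero (cf. the cited Example 24).
class SomaStateMemory:
    def __init__(self, num_neurons):
        self.vm = [0] * num_neurons        # one state entry per neuron id
        self.active = [True] * num_neurons # subset of state per neuron

    def update(self, neuron_id, weight_sum):
        """Read-modify-write the state entry addressed by neuron_id."""
        if not self.active[neuron_id] and weight_sum == 0:
            return self.vm[neuron_id]      # skip inactive, zero-input neurons
        self.vm[neuron_id] += weight_sum
        return self.vm[neuron_id]
```

The neuron identifier here plays only the role of an index into the state table, consistent with the claim interpretation of "neuron identifier" noted above.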
Claims 10-11, 13-29 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Davies (Pub. No.: US 2018/0174026, hereinafter ‘Dav’) in view of Chen et al. (US 20180189645, hereinafter ‘Chen’).
Regarding independent claim 10, Dav teaches a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [a neuromorphic processing method for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state, (in 0034: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered. Further, in a spiking neural network, each neuron is modeled after a biological neuron, as the artificial neuron receives its inputs via synaptic connections to one or more "dendrites" (part of the physical structure of a biological neuron), and the inputs affect an internal membrane potential of the artificial neuron "soma" (cell body). In a spiking neural network, the artificial neuron "fires" (e.g., produces an output spike), when its membrane potential crosses a firing threshold. Thus, the effect of inputs on a spiking neural network neuron operate to increase or decrease its internal membrane potential, making the neuron more or less likely to fire. Further, in a spiking neural network, input connections may be stimulatory or inhibitory. A neuron's membrane potential may also be affected by changes in the neuron's own internal state [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state] ("leakage").; And in 0037: The cores 110 may communicate via short packetized spike messages that are sent from core 110 to core 110. Each core 110 may implement a plurality of primitive nonlinear temporal computing elements referred to herein as "neurons" [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]… Each neuron may be characterized by an activation threshold.
A spike message received by a neuron contributes to the activation of the neuron….; And in 0067: SOMA_CFG 332A and SOMA_STATE 332B [each neuron being capable of assuming a neural state from among a plurality of neural states comprising an initial state and a firing state]: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B…
the neuromorphic processing method comprising: (in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms [a neuromorphic processing]. Modules may include tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application)… Machine (e.g., computer system) [a neuromorphic processing module] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…)
retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron; (0056-0058: As discussed above with respect to FIG. 3, the neuromorphic neuron core 300 may be comprised of two loosely coupled asynchronous components [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron]: (1) an input dendrite logic circuit 310 configured to receive spikes from the routing network 130 and to apply them to the appropriate destination dendrite compartments at the appropriate future times, and (2) a soma logic circuit 330 configured to receive each dendrite compartment's accumulated values for the current time and to evolve each soma's membrane potential state [retrieving neural state information for a neuron of the plurality of neurons; updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] to generate outgoing spike messages at the appropriate times… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… And in 0057-0062: FIG. 
6 is an illustrative pictorial internal architecture level drawing representing an example of an operation of a dendrite logic circuit 310 and of a soma logic circuit 330 of a neuromorphic neuron… The hardware services provided by the soma (e.g., axon) logic circuits 330 and dendrite logic circuits 310 may be dynamically configured in a time-multiplexed manner to share the same physical wiring resources within a core among multiple neuromorphic neurons implemented by the core… In accordance with an example of the basic multistage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process… the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D—synchronizing and flushing of spikes that are in flight within the network): 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network… WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values [retrieving neural state information for a neuron of the plurality of neurons] may be updated [updating the neural state information based on one or more event messages destined for the neuron to provide an updated neuron] for the corresponding soma compartment idx 652…. 1) Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an "Axon ID" unique to the core that identifies a distribution set of dendrites within the core.
Each element of the distribution set is referred to as synapse… 2) While not handling input spikes, the dendrite logic circuit process 310 serially services all dendrites δ.sub.i sequentially, passing the total accumulated neurotransmitter amounts for time T to the Soma stage, resetting the neurotransmitter totals to zero so the state may be repurposed for a future step (namely time step T+D.sub.MAX+1, in circular FIFO fashion)…)
determining that the updated neural state information indicates the firing state; and in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state (0067-0068: SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update operation at time T... On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly. More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B, updated based upon a corresponding accumulated dendrite value, and written back… In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event…. If the updated Vm value exceeds the threshold, then the stored activation level may be reset to an activation level of zero [determining that the updated neural state information indicates the firing state; in response to determining that the updated neural state information indicates the firing state, resetting the neural state information so as to indicate the initial state]. If the updated Vm value does not exceed the threshold, then the updated Vm value may be stored in the SOMA_STATE memory 332B for use during a subsequent synchronization time step.
AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table...)
and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices; (0068-0070: AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron; retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices. Examiner notes the neuron index for the mapping memory as the claimed first memory unit associated with respective ids] in the next table in the pipeline, the AXON_CFG 336 routing table… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334 [an indication of a respective range of output synapse indices], a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices associated with the core and id memory unit].
Each of these becomes an outgoing spike message to the network 130 [and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron], sent serially one after the other…NETWORK 130: The network 130 routes each spike message to a destination core in a stateless, asynchronous manner [Alternatively and distributing a firing event message by: accessing, in a first memory unit that is indexed by neuron identifier, a respective memory entry for the updated neuron associated with a destination core having claimed first memory based on memory id]. From the standpoint of the computational model, the routing happens in zero time, i.e., if the spike message is generated at time T, then it is received at the destination core [retrieving from the respective memory entry for the updated neuron in the first memory unit, an indication of a respective range of output synapse indices] at time T relative to the source core's time step…)
and for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, …(in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1] [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit; Examiner notes the claimed second memory as the memory block noted by the idx, where i=2]. And in 0142: Each synapse from the SYNAPSE_CFG [for each output synapse index in the respective range of output synapse indices: accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit…] entry maps to a (Weight.sub.i, Delay.sub.i) pair, where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value over the range 1 . . . 15. Each entry maps its synapse values [accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, the output synapse property data] in a unique way. Examiner notes that the synapse entries are mapped in a unique way to a fan-out memory unit, wherein the synapse map serves as the retrieval memory and the destination memory uses multicast distributions over the network architecture, as depicted in 0064-0069: … Communication and computation in the neuromorphic architecture occurs in an event driven manner in response to spike events as they are generated and propagated throughout the neuromorphic network. Note that the soma 330 and dendrite 310 components shown in FIG.
7, in general, will belong to different physical cores… For example, when traversing the neuromorphic network, the spikes may be encoded as short data packets identifying a destination core and Axon ID… Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly… AXON_MAP 334: The spiking neuron index is mapped through the AXON_MAP memory table 334 to provide a (base_address, length) pair identifying a list of spike fan-out destinations in the next table in the pipeline, the AXON_CFG 336 routing table. AXON_MAP 334 provides a level of indirection between the soma compartment index and the AXON_CFG 336 destination routing table. This allows AXON_CFG's 336 memory resources to be shared across all neurons implemented by the core in a flexible, non-uniform manner. In an alternate embodiment, the AXON_MAP 334 state is integrated into the SOMA_CFG 332A memory… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table. Each of these becomes an outgoing spike message to the network 130, sent serially one after the other. Since each list is uniquely mapped by neuron index, some neurons may map to a large number of destinations (i.e., a multicast distribution), while others may only map to a single destination (unicast). List lengths may be arbitrarily configured as long as the total entries does not exceed the total size of the AXON_CFG 336 memory.)
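For clarity of record, the examiner provides the following illustrative sketch of the cited two-level fan-out lookup (AXON_MAP providing a (base_address, length) pair, AXON_CFG serially yielding (dest_core, axon_id) pairs). The table contents and names below are hypothetical examiner illustrations, not data from the reference.

```python
# Hypothetical sketch: a first table indexed by neuron index yields a
# (base_address, length) pair; that range is then read serially from a
# second routing table to produce outgoing (dest_core, axon_id) spike
# messages. A length > 1 corresponds to multicast, length == 1 to unicast.
def fan_out(neuron_index, axon_map, axon_cfg):
    base, length = axon_map[neuron_index]              # first-level indirection
    return [axon_cfg[base + k] for k in range(length)]  # serial fan-out reads

# Illustrative tables (hypothetical contents):
axon_map = {0: (0, 2), 1: (2, 1)}                  # neuron index -> (base, length)
axon_cfg = [("coreA", 7), ("coreB", 3), ("coreC", 9)]  # (dest_core, axon_id) list
```

The level of indirection lets differently sized fan-out lists share one routing table, consistent with the cited non-uniform sharing of AXON_CFG across neurons.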
the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron; (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way.; And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652. At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs for all destination cores via the AXON_MAP memory 334.
At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron] unique to the core that identifies a distribution set of dendrites within the core [respective memory entry in a third memory unit comprising a reference to an associated neuron]. Each element of the distribution set is referred to as synapse, specifying a dendrite number [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit], a connection strength (weight W) [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0067] SOMA_CFG 332A and SOMA_STATE 332B: A soma 330 spikes in response to accumulated activation value upon the occurrence of an update [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron] operation at time T. Each neuron in a core 300 has, at minimum, one entry in each of the soma CFG memory 332A and the soma STATE memory 332B. On each synchronization time step T, the configuration parameters for each neuron are read from SOMA_CFG 332A in order to receive the incoming weighted neurotransmitter amounts received from dendrites corresponding to the neuron, and to update soma state values accordingly [the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]… And in [0130] FIG. 16 is a register definition of SYNAPSE_MAP[0 . . . 2047] 1600 (1410).
The SYNAPSE_MAP table 1600 maps each input spike received by the core to a list of synaptic entries in SYNAPSE_CFG 1420. Its specific behavior depends on whether the input spike is a discrete (standard) spike containing just an AxonID or a population spike containing both FIP (AxonID) [a respective input synapse index corresponding] and SRC_ATOM identifiers [a respective memory entry in a third memory unit comprising a reference to an associated neuron]… )
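For clarity of record, the examiner provides the following illustrative sketch of the cited SYNAPSE_MAP/SYNAPSE_CFG handling: an incoming AxonID selects a list of synapse entries, each carrying a weight and a delay, and each weight is accumulated at a future-time accumulator slot. All identifiers and values are hypothetical examiner illustrations, not data from the reference.

```python
# Hypothetical sketch: SYNAPSE_MAP maps an incoming AxonID to a
# (base index, list length) pair; the spanned SYNAPSE_CFG entries
# SYNAPSE_CFG[idx .. idx+cfg_len-1] each give a (dendrite, weight, delay)
# record, and each weight is summed into the accumulator addressed by
# (dendrite, T + delay) for servicing at that future time step.
def handle_spike(axon_id, T, synapse_map, synapse_cfg, accumulators):
    idx, cfg_len = synapse_map[axon_id]            # map spike to entry list
    for dendrite, weight, delay in synapse_cfg[idx:idx + cfg_len]:
        key = (dendrite, T + delay)                # future-time accumulator slot
        accumulators[key] = accumulators.get(key, 0) + weight
```

Under this sketch, the AxonID plays the role of the input-side index and the per-entry (weight, delay) pair corresponds to the cited (Weight.sub.i, Delay.sub.i) mapping.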
using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message; and (in 0133: The source atom's synaptic weight list spans the range SYNAPSE_CFG[idx] to SYNAPSE_CFG[idx+CFG_LEN−1]. And in 0142: Each synapse from the SYNAPSE_CFG entry maps to a (Weight.sub.i, Delay.sub.i) pair [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain], where Weight.sub.i is a signed six bit quantity and Delay.sub.i specifies a four bit delay value [the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit …] over the range 1 . . . 15. Each entry maps its synapse values in a unique way.; And in 0058-0059: … That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632. At stage (C) 650, WeightSum values are transferred to soma 330 for handling at time T, where soma configuration (CFG) 332A and soma state (STATE) 332B memory values may be updated for the corresponding soma compartment idx 652.
At stage (D) 660, output spikes, when generated, may be mapped to the appropriate fan-out AxonIDs [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message, wherein the third memory is a destination unit obtaining information from the second unit] for all destination cores via the AXON_MAP memory 334. At stage (E) 670, output spike messages are routed to the appropriate fan-out cores at the output circuit 340 via the network 130… Receive and handle spike messages as they serially arrive in time-multiplexed fashion from the network. Each message specifies an “Axon ID” [… obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] unique to the core that identifies a distribution set of dendrites within the core. Each element of the distribution set is referred to as synapse, specifying a dendrite number, a connection strength (weight W) [obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message], a delay offset (Dϵ[1, D.sub.MAX]), and a synapse type…; And in [0108] FIG. 11 is an illustrative pictorial drawing showing an example population connectivity model 1100. Connectivity state w.sub.ij specify a template network between population types (T.sub.i, Tj). Connectivity may be bound to any number of specific neuron populations of the corresponding types. The w.sub.ij state needs only be stored once per network type, rather than redundantly for each network instance.
[0109] More particularly, the network template is specified in terms of three neuron population types (T.sub.1, T.sub.2, and T.sub.3) with four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23). Each connection matrix w.sub.ij specifies the connectivity state (typically a weight and delay pair) between all neurons in a population type j connecting to all neurons in the destination population type i [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. Hence each w.sub.ij matrix specifies |T.sub.i|×|T.sub.j| connections where |T.sub.i| indicates the number of neurons in a population type Ti. Thus, in the example shown in FIG. 11, the four connection matrices (w.sub.31, w.sub.12, w.sub.21, and w.sub.23) are used to connect neurons of neuron populations (P.sub.1, P.sub.2, P.sub.3), to connect neurons of neuron populations (P.sub.4, P.sub.5, P.sub.6), and to connect neurons of neuron populations (P.sub.7, P.sub.8, P.sub.9) [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]… [0121] FIG. 15 is an illustrative flow diagram representing population spike generation mapping flow in a soma logic circuit 1500 (330).
At the Soma stage and downstream, in order to generate the appropriately formatted population spike message, a particular spiking neuron must be mapped to its constituent population and source atom offset within the population [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. Each neuron's compartment index uniquely identifies this information, so one place to map these values is in AXON_MAP 1510 (334). FIG. 15 shows the egress population spike generation pathway [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]. In this case, the AXON_CFG memory 1520 (336) is compressed by a factor of pop_size compared to the baseline case since only one population spike entry is needed per destination fip. All atoms (compartment indices) belonging to the source population reference the same entry as mapped by AXON_MAP 1510.…
Examiner notes that the claimed indexes are used to process connected elements in a spiking neural network, per what is known by one of ordinary skill in the art and as noted in the cited reference above and in [0217] In Example 16, the subject matter of any one or more of Examples 1-15 optionally include a soma circuit, comprising: a soma input connected to the dendrite output and at which the dendrite compartment weighted sum value is received comprising an index to a related soma compartment [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message]; a soma configuration memory of a soma compartment associated with the dendrite compartment, the soma configuration memory to store configuration parameters for a neuron comprising the soma compartment that is configured to be updated by the processor based on the received weighted sum value; a soma state memory that is to store the neuron's present activation state level and that is configured to be updated by the processor based on the received weighted sum value [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay], wherein if an updated present activation state level exceeds a threshold activation level value, the processor is configured to generate an output spike event comprising a spiking neuron index [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight
with which to weigh the firing event message]; an axon map memory to store a mapping of the spiking neuron index to a spike fan-out destination list identifier; an axon configuration memory to store a list of one or more destination core-axonID pairs referenced by the spike fan-out destination list identifier; and an output circuit configured to route a spike message [to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] to each destination core [to access the respective memory entry in the third memory unit to obtain, from the third memory unit] of the list.)
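The three-memory-unit indexing chain recited in the claim and mapped above can be illustrated with a minimal sketch. This is purely illustrative; all structure names, indices, and values below are hypothetical and are not taken from the Dav reference or from the claims.

```python
# First memory unit (hypothetical): firing neuron id -> range of
# output synapse indices for that neuron's fan-out
first_memory = {0: range(0, 2)}

# Second memory unit, indexed by output synapse identifier:
# (transmission delay, input synapse index into the third unit)
second_memory = {0: (3, 10), 1: (1, 11)}

# Third memory unit, indexed by input synapse identifier:
# (reference to the associated neuron, weight for the firing event)
third_memory = {10: (42, 0.5), 11: (43, -0.25)}

def fan_out(firing_neuron):
    """Resolve a firing event into (target neuron, weight, delay) tuples."""
    events = []
    for out_idx in first_memory[firing_neuron]:
        delay, in_idx = second_memory[out_idx]   # second-unit lookup
        neuron, weight = third_memory[in_idx]    # third-unit lookup
        events.append((neuron, weight, delay))
    return events

print(fan_out(0))  # [(42, 0.5, 3), (43, -0.25, 1)]
```

The sketch only shows the claimed indirection (output synapse index to second unit, input synapse index to third unit); it does not model the hardware pipeline of the reference.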
transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron. (in 0067-0069: … More particularly, each neuron's present activation state level, also referred to as its Vm membrane potential state, is read from SOMA_STATE 332B [respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron], updated based upon a corresponding accumulated dendrite value, and written back. In some embodiments, the accumulated dendrite value may be added to the stored present activation state value to produce the updated activation state level. In other embodiments, the function for integrating the accumulated dendrite value may be more complex and may involve additional state variables stored in SOMA_STATE 332B. The updated Vm value may be compared to a threshold activation level value stored in SOMA_CFG 332A and, if Vm exceeds the threshold activation level value in an upward direction, then the soma produces an outgoing spike event. The outgoing spike event is passed to the next AXON_MAP 334 stage [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], at time T+Daxon, where Daxon is a delay associated with the neuron's axon [transmitting the firing event message to the associated neuron with the transmission delay; and storing updated neural state information for the associated neuron], which also is specified by SOMA_CFG 332A. At this point in the core's pipeline, the spike may be identified only by the core's neuron number that produced the spike… AXON_CFG 336: Given the spike's base address and fan-out list length from AXON_MAP 334, a list of (dest_core, axon_id) pairs is serially read from the AXON_CFG 336 table.
Each of these becomes an outgoing spike message [transmitting the firing event message to the associated neuron with the transmission delay] to the network 130, sent serially one after the other.; And in 0052-0060: FIGS. 5A-5D are illustrative pictorial drawings representing a synchronized global time step with asynchronous multiplexed core operation. FIG. 5A represents the neuromorphic mesh in an idle state with all cores inactive. FIGS. 5B-5C represent cores generating spike messages that the mesh interconnects via routes to the appropriate destination cores. FIG. 5D represents each core handshaking with its neighbors for a current time step using special barrier synchronization messages [storing updated neural state information for the associated neuron]. As each core finishes servicing the neurons that it services during a current time step, it handshakes with its neighbors to synchronize spike delivery… In accordance with an example of the basic multi-stage data flow of spike handling in the neuromorphic architecture, at stage (E) 610, input spikes are received over the network 130 at the input circuit 320 of a dendrite process 310. At stage (A) 620, the input spikes are distributed by the dendrite process 310 to multiple fan-out synapses within the core with appropriate weight and delay offset (W, D) [transmitting the firing event message to the associated neuron with the transmission delay] via the SYNAPSE_MAP 312. At stage (B) 630, the dendrite 310 maintains sums of all received synaptic weights for future time steps over each dendritic compartment 632 in the dendrite accumulator memory 316.
That is, weights targeted for a particular dendrite ID and delay offset time are accumulated/summed into a dendritic compartment address 632… The dendrite logic circuit 310 may perform the following functions at synchronization time step T (this is a global time step that the barrier synchronization mechanism ensures is consistent across the cores during spiking activity and servicing of the dendritic accumulators for time T, as described above with respect to FIGS. 5A-5D, synchronizing and flushing of spikes that are in flight within the network): the claimed units are disclosed in 0193-0195; and the data range of input spikes, as depicted in Fig. 9, is used for processing outputs as claimed)
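The accumulate-then-integrate flow quoted above (weights summed per dendritic compartment and delay offset, then serviced at each global time step) can be sketched as follows. The ring-buffer layout, constants, and the reset-after-firing behavior are illustrative assumptions, not details taken from the Dav reference.

```python
D_MAX = 15  # maximum delay offset, per the D in [1, D_MAX] range quoted above

# accumulator[(t + delay) % (D_MAX + 1)][compartment] holds the weight
# sum scheduled for delivery at global time step t + delay
accumulator = [dict() for _ in range(D_MAX + 1)]

def deliver_spike(t, compartment, weight, delay):
    """Accumulate a weighted spike into the compartment's future time slot."""
    slot = accumulator[(t + delay) % (D_MAX + 1)]
    slot[compartment] = slot.get(compartment, 0.0) + weight

def service_time_step(t, vm_state, threshold):
    """Integrate accumulated weights into Vm; emit spikes over threshold."""
    slot = accumulator[t % (D_MAX + 1)]
    spikes = []
    for compartment, weight_sum in slot.items():
        vm_state[compartment] = vm_state.get(compartment, 0.0) + weight_sum
        if vm_state[compartment] > threshold:
            spikes.append(compartment)
            vm_state[compartment] = 0.0  # reset after firing (assumption)
    slot.clear()
    return spikes

# Two weighted spikes arriving at t=0 with delay 2 accumulate and fire at t=2
vm = {}
deliver_spike(0, "c1", 0.6, 2)
deliver_spike(0, "c1", 0.5, 2)
print(service_time_step(2, vm, threshold=1.0))  # ['c1']
```

The ring buffer of D_MAX+1 slots is one conventional way to realize per-delay accumulation; the reference's actual accumulator memory organization may differ.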
Examiner notes that the Dav reference teaches a synapse slice memory serving as a memory ID in a list of memory elements associated with a synapse component in a distributed processing architecture, as noted above.
Additionally, Chen teaches routing signals through indexed memory elements as depicted in Figs. 4-5 and Fig. 3, in [0053] FIG. 4 depicts an example system of neurosynaptic core clusters 402 in accordance with certain embodiments. In various embodiments, a neuromorphic processor may comprise one or more neurosynaptic core clusters. In the embodiment depicted, a neurosynaptic core cluster 402 includes four neuron cores 404A-D each comprising a number of neurons (e.g., 64 or other suitable number of neurons) that are connected with four synapse cores 406A-D through a router 408 (which may have any suitable characteristics of router 204 described above) [for each output synapse index in the respective range of output synapse indices :accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit, the output synapse property data comprising a specification of a transmission delay and a respective input synapse index corresponding to a respective memory entry in a third memory unit comprising a reference to an associated neuron, wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron; using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message].
In other embodiments, each neuron core 404 could be connected to its own router [for each output synapse index in the respective range of output synapse indices :accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit..] or the number of neuron cores 404 that are connected to a single router may be two, eight, or other suitable number, thus a neurosynaptic core cluster may include any suitable number of neuron cores and synapse cores connected to a router [for each output synapse index in the respective range of output synapse indices :accessing, in a second memory unit that is indexed by output synapse identifier, a respective memory entry and retrieving output synapse property data from the second memory unit]. In various embodiments, the neuron cores and synapse cores are modular and may be tiled with individual or shared routers depending on the availability of the hardware resources. In a particular embodiment, the system includes 256 neuron cores and 256 synapse cores tiled in 16×16 array… [0055] Each synapse core 406 includes a synapse array memory and associated logic (e.g., logic to write synapse weights to the synapse array memory, access the synapse weights, and/or update the synapse weights) [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]. In various embodiments, a synapse core 406 may be collocated with a neuron core such that the neuron core may communicate directly with the synapse core 406 (as opposed to communicating with the synapse core 406 via a router)… [0057] The memory mapping scheme depicted in FIG. 5A may be utilized for fully connected networks and multi-layer perceptrons. FIG. 
5A depicts a mapping scheme for a neural network having full feed-forward connections between two layers of neurons (a visible layer V and a hidden layer H), that is, the neurons of a layer [using the respective input synapse index obtained from the second memory unit, which stores the respective input synapse index and the transmission delay, to access the respective memory entry in the third memory unit to obtain, from the third memory unit, the reference to the associated neuron and the weight with which to weigh the firing event message] are each connected to all of the neurons of the next layer [wherein the third memory unit is indexed by input synapse identifier and the respective memory entry in the third memory unit comprises a weight with which to weigh the firing event message when updating the associated neuron]. Although particular layers are depicted, the same mapping scheme may be used between any two adjacent layers (e.g., input layer to hidden layer, hidden layer to another hidden layer, and/or hidden layer to output layer). The neurons are depicted as circles and the synapse weights as squares. The weight of the synapse connecting neuron A to neuron 0 is denoted as A0 [indexed by output synapse identifier, a respective memory entry], and similar notation is used for all synapses (a similar notation may be used in the following figures as well) [indexed by output synapse identifier, a respective memory entry]. Using this scheme, the row and column of a particular synapse weight is determined by the neuron number in layer V and layer H respectively (where neuron number refers to the position of a neuron within an ordered list of neurons) [indexed by output synapse identifier, a respective memory entry]. 
For example, all synapse weights for neuron A are in row 0 (wherein the row numbers are in ascending order from the bottom), all synapse weights for neuron F are in row 5, all synapse weights for neuron 2 are in bank 2 (wherein the bank numbers are in ascending order from the left), and all synapse weights for neuron 7 are in bank 7. [0058] Any suitable memory may be used to store the synapse weights. For example, in the embodiment depicted, a plurality of independently accessible memory banks (0-7) are used to store the synapse weights. In one embodiment, each bank may represent an 8-bit word SRAM, though other embodiments may utilize different sizes of memory and different types of memory. In various embodiments, the banks may be collocated (e.g., in the same synapse core) or may be dispersed throughout a processor (e.g., one or more banks may be in a first synapse core, one or more banks may be in a second synapse core, etc.). In various embodiments, the concept of memory banks disclosed herein may be generalized to any independently accessible portions of available synapse memory (e.g., memories located in different synapse cores, etc.). [0059] Because each bank is independently accessible, each bank may be read simultaneously and thus an output from each bank may be obtained in parallel. Thus, the fan-out synapses for any particular neuron of layer V may be accessed in parallel, thus speeding up operation of the neural network. For example, the fan-out synapse weights of neuron A (synapse weights A0-A7) may be obtained by reading row 0 from each bank 1-7 in parallel. In another embodiment, the same effect can be achieved by putting all fan-out synapse weights in a shared memory access, such as the same word (e.g., SRAM word)
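The banked mapping of FIG. 5A described above, where the weight connecting visible neuron `row` to hidden neuron `bank` sits at a fixed (bank, row) address so that one row read per bank retrieves a neuron's entire fan-out, can be sketched as follows. The 8x8 size and the label strings are illustrative only.

```python
N_BANKS = 8  # one independently accessible bank per hidden neuron 0..7
N_ROWS = 8   # one row per visible neuron A..H

# banks[bank][row] holds the weight connecting visible neuron `row`
# to hidden neuron `bank`; labeled "A0", "A1", ... for readability,
# matching the notation in the quoted passage
banks = [[f"{chr(ord('A') + row)}{bank}" for row in range(N_ROWS)]
         for bank in range(N_BANKS)]

def fan_out_weights(row):
    """Read one row from every bank -- conceptually a parallel access."""
    return [banks[bank][row] for bank in range(N_BANKS)]

print(fan_out_weights(0))  # ['A0', 'A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7']
```

Because each bank is modeled as independently accessible, the list comprehension stands in for the parallel per-bank reads described in [0059].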
Examiner notes that transmission of spiking signals is delayed based on membrane potential for storing timing information, in [0061] FIG. 5B illustrates an example memory mapping scheme for a generative neural network in accordance with certain embodiments. Generative neural networks may include, e.g., Restricted Boltzmann Machines and deep belief networks. In various neural networks (including generative neural networks), a signal may be sent from a neuron in a backwards direction (i.e., to the fan-in neurons). As one example, synapses may be undirected in generative neural networks. This means that, e.g., the connection from neuron A to 0 and connection from neuron 0 to A share the same synapse weight in memory. As another example, in order to implement certain learning techniques (e.g., STDP), a spiking neuron may send a backspike message to its fan-in neurons. STDP may utilize long-term depression (LTD) updates and long-term potentiation (LTP) updates [comprising a specification of a transmission delay … the transmission delay]. An LTD update decreases a synapse weight and is triggered when the fan-in neuron (i.e., pre-synaptic neuron) spikes after the fan-out neuron (i.e., post-synaptic neuron) [comprising a specification of a transmission delay … the transmission delay]. Because the fan-out neuron spiked in the past, the timing information needed to update the synapse weight is available to the fan-out neuron. The LTP operation increments a synapse weight and is triggered when the fan-out neuron spikes after the fan-in neuron [comprising a specification of a transmission delay … the transmission delay]. Because a fan-out neuron may have many fan-in neurons (e.g., 1,000 or more in some embodiments), it may not be feasible for the fan-out neuron to store timing information [comprising a specification of a transmission delay … the transmission delay] of all of the fan-in neurons.
And in [0095] Neuron core 1104 (which may include any suitable characteristics of any of the neuron cores described herein) implements a plurality of neurons via neuron processing logic 1106 and address array. Neuron processing logic may include any suitable logic to update membrane potentials of the neurons based on any suitable parameters and to initiate communications between the neurons [comprising a specification of a transmission delay]. Address array 1108 may include addresses of fan-out and/or fan-in synapse weights for the various neurons.
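The pair-based STDP rule described in the quoted passage, where LTP strengthens a synapse when the fan-in (pre-synaptic) neuron spikes before the fan-out (post-synaptic) neuron and LTD weakens it otherwise, can be sketched as follows. The learning-rate constants and the simple single-pair comparison are assumptions for illustration, not values from the Chen reference.

```python
LTP_RATE = 0.1   # potentiation step (assumed magnitude)
LTD_RATE = 0.05  # depression step (assumed magnitude)

def stdp_update(weight, t_pre, t_post):
    """Pair-based STDP: potentiate causal pairs, depress anti-causal ones."""
    if t_pre < t_post:
        # fan-in (pre) spiked before fan-out (post): LTP increments weight
        return weight + LTP_RATE
    # fan-in spiked at or after fan-out: LTD decrements weight
    return weight - LTD_RATE

print(stdp_update(0.5, t_pre=1, t_post=3))  # 0.6
print(stdp_update(0.5, t_pre=3, t_post=1))  # 0.45
```

Real STDP implementations typically scale the update by the spike-time difference via an exponential window; the step-function form above only captures the LTP/LTD triggering conditions named in the passage.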
Chen and Dav are analogous art because both involve developing spiking neural network machine learning techniques and systems using hardware- and software-based architectures.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems and methods for implementing spiking neurons for emulation of artificial neural networks, as disclosed by Chen, with the devices and methods for operating a neuromorphic processor comprising neuromorphic cores that implement operations of a spiking artificial neural network, as disclosed by Dav.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Chen and Dav to allow the implementation of a neuromorphic computer with reconfigurable memory mapping for various neural network topologies (Chen, 0001).
Regarding independent claim 22, Dav teaches a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons, (0033-0038: In an example of a spiking neural network, activation functions occur via spike trains, which means that time is a factor that has to be considered… FIG. 1 is a pictorial diagram of an example of a neuromorphic architecture [a data processing system comprising: memory that stores instructions; and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 100 that includes a mesh network in which a plurality of neuromorphic cores 110, routers 120, and a grid of routing conductors 130 are arranged to provide a SNN in which the cores 110 may communicate with other cores 110... FIG. 3 is a block diagram 300 that illustrates certain details of a neuromorphic core within the neuromorphic architecture in which the core's 110 architectural resources are shared in a time-multiplexed manner to implement a plurality of neurons within the core [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons]…; And in 0193-0195: Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules may include tangible entities ( e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
In an example as described herein, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) [memory that stores instructions]… Machine (e.g., computer system) [and one or more processors configured by the instructions to perform operations for execution of a spiking neural network comprising a plurality of neurons] 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 [memory that stores instructions] and a static memory 26006 [memory that stores instructions], some or all of which may communicate with each other via an interlink (e.g., bus) 26008.) And executing neuromorphic operations, in 0190: Machine (e.g., computer system) 26000 may include a neuromorphic processor 110, 300, a hardware processor 26002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 26004 and a static memory 26006, some or all of which may communicate with each other via an interlink (e.g., bus) 26008…; And in 0197: While the machine readable medium 26022 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 26024.)
Regarding the remaining claim 22 limitations, they are similar to the claim 1 limitations and are rejected under the same rationale.
Regarding claims 11, 13-21, 23-29 and 31, the limitations are rejected over the Dav reference, and the rejection noted above is incorporated here.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Agrawal et al. (NPL: SPARE: Spiking neural network acceleration using ROM-embedded RAMs as in-memory-computation primitives): teaches in Sec. III: .. At each time step, the neuron firing data is transferred from one layer to the next. Input data spikes (for a given time-step) stored in the global memory are broadcast over the shared bus. Subsequently, the PEs mapping the first layer of the SNN start buffering the data and execute their SNN partition. Once spikes for the first layer have been transmitted, the spikes for the next layer are broadcast, and PEs mapped to second layer start their computations, and so on. All synaptic data is stored locally within each PE. Once the layer-1 PEs finish their execution, their output data (spikes) are written back to the global memory. Subsequently, data from all PEs is written back into the global memory, layer by layer. Consequently, this successive data transfer (neuron data) between global memory and PEs realizes a time-step of SNN computation. It's worth noting that only neuron data movements occur between PEs and global memory, whereas the synapse data is locally read from the PE's RAM.
Donati et al. (NPL: "Discrimination of EMG signals using a neuromorphic implementation of a spiking neural network," IEEE Transactions on Biomedical Circuits and Systems): teaches neuromorphic hardware that allows for configuring neural network routers/mapping, to send input spikes to the chip and collect output spikes from it; where the circuit used to generate the spike input train creates an explicit list of indices (e.g. neuron IDs that fire) and of the time stamp of each index. The reference also teaches a reconfigurable neuromorphic processor having multiple cores where each core comprises 256 adaptive exponential integrate-and-fire (AEI&F) neurons for a total of 1k neurons per chip. Each neuron has a Content Addressable Memory (CAM) block, containing 64 addresses representing the pre-synaptic neurons that the neuron is subscribed to. The asynchronous CAMs on the synapses are used to store the tags of the source neuron addresses connected to them, while the SRAM cells are used to program the address of the destination core/chip that the neuron targets. The input/output interfacing circuits that receive and transmit spike events follow the Address Event Representation (AER) communication protocol. In the AER representation, each neuron is assigned an address and it is transmitted as soon as the neuron spikes, in Pgs. 795-796.
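The AER scheme described by Donati et al., where each spike becomes an explicit (timestamp, neuron address) event emitted as the neuron fires, can be sketched as follows. The dictionary-based input format and the example addresses are illustrative assumptions.

```python
def encode_aer(spike_trains):
    """spike_trains: {neuron_address: [spike times]} -> time-sorted events."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    return sorted(events)  # serialize events in firing order

# Neurons 3 and 7 fire; each spike becomes a (timestamp, address) event
print(encode_aer({3: [1.0, 2.5], 7: [0.5]}))  # [(0.5, 7), (1.0, 3), (2.5, 3)]
```

In actual AER hardware the events are emitted asynchronously on a shared bus rather than sorted in software; the sketch only shows the address-plus-timestamp representation itself.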
Chen et al. (US 20190197391): teaches events for updating spiking activity for learning and training the spiking neural network, in 0026: The basic implementation of some applicable learning algorithms may be provided through spike timing dependent plasticity, which adjusts the strength of connections (e.g., synapses) between neurons in a neural network based on correlating the timing between an input (e.g., ingress) spike and an output (e.g., egress) spike. Input spikes that closely precede an output spike for a neuron are considered causal to the output and their weights are strengthened, while the weights of other input spikes are weakened. These techniques use spike times, or modeled spike times, to allow a modeled neural network's operation to be modified according to a number of machine learning modes, such as in an unsupervised learning mode or in a reinforced learning mode. in 0022-0024: …Several neural chips may also be packaged and networked together to form the neuromorphic hardware 155, which may be included in any number of devices, such as servers, mobile devices, sensors, actuators, etc. The illustrated neural core structure functionally models the behavior of a biological neuron. A signal is provided at an input (e.g., ingress spikes) to a synapse (e.g., modeled by the synaptic variable memory 105) that may include a fan-out within the core to other dendrite structures with appropriate weight and delay offsets. The signal may be modified by the synaptic variable memory 105 (e.g., synaptic weights may be applied to spikes addressing respective synapses) and made available to the neuron model 110. The neuron model 110 may include a number of components to model dendrite activity and soma activity...
The neuron model 110 is configured to produce an output spike (e.g., egress spikes via an axon to one or several destination cores) based on weighted spike states …Each spike event may generate temporary state for the trace computation that is accumulated over the duration of a periodic interval of time defined as the learning epoch…
Modha (US 20130073494): teaches a neuron comprising a reconfigurable digital complementary metal-oxide-semiconductor (CMOS) circuit for logic and memory elements for its operational state. Each synapse between two neural modules comprises a reconfigurable digital CMOS circuit for logic and memory elements for its operational state. Each synapse between two neural modules further comprises a communication link implemented via a combination of logical and physical primitives.
Hunzinger et al. (US 20130073501): teaches the arrangement of spiking neural networks in which firing (e.g., a spiking signal) causes the network neurons to transfer spikes from one level of neurons to another through the network of synaptic connections (or simply "synapses"). The synapses may receive output signals (i.e., spikes) from the level 102 neurons, scale those signals according to adjustable synaptic weights w.sub.1.sup.(i,i+1), . . . , w.sub.P.sup.(i,i+1) (where P is a total number of synaptic connections between the neurons of levels), and combine the scaled signals as an input signal of each neuron in the level. Further, each of the synapses may be associated with a delay, i.e., a time for which an output spike of a neuron of level i reaches a soma of a neuron of level i+1.
Canoy et al. (Patent No. US 9460382): teaches an embodiment where, in response to detecting an exception condition, the neural monitor resets and/or reconfigures the primary neural network. Alternatively, the neural monitor may reset and/or reconfigure one or more subsets of the primary neural network. Additionally, the neural monitor may reset the primary neural network and/or one or more subsets of the primary neural network. The neural monitor may reset or reconfigure the software and/or hardware of the primary neural network and/or one or more subsets of the primary neural network.
Harkin et al. (NPL: “A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network‐On‐Chip and Spiking Neural Networks”): teaches that the neural tiles operate in one of two modes, runtime or configuration, as defined by the information in the packet header (see Table 1), where “00” and “01” define the payload to be runtime or configuration data, respectively. EMBRACE is configured to realize particular synapse models and the desired SNN topology. Configuration data is also delivered to the EMBRACE device in the form of data packets, where each packet is addressed to a particular neural tile and contains information on the configuration of the router’s AT, the selection of cell synapse weights via the programmable voltage lines, Vq, and other neural tile parameters.
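For illustration only, the decoding of the two-bit packet-header mode field described above (“00” for runtime, “01” for configuration) might be sketched as follows; the Packet structure and the dispatch function are assumptions of this sketch, not part of the reference:

```python
from dataclasses import dataclass

MODE_RUNTIME = "00"
MODE_CONFIG = "01"

@dataclass
class Packet:
    header: str    # two mode bits followed by tile-address bits (illustrative layout)
    payload: bytes

def dispatch(packet):
    """Route a packet's payload based on the mode bits in its header."""
    mode = packet.header[:2]
    if mode == MODE_RUNTIME:
        return "runtime"        # payload carries runtime (spike) data
    elif mode == MODE_CONFIG:
        return "configuration"  # payload carries AT / weight / voltage settings
    raise ValueError(f"unknown mode bits: {mode}")

print(dispatch(Packet("01" + "0011", b"\x2a")))  # configuration
```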
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571) 272-0516. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWATOSIN ALABI/Primary Examiner, Art Unit 2129