DETAILED ACTION
This action is in response to the filing on 06/20/2025. Claims 1-4 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/18/2025 is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the corresponding chip" in lines 21 and 22. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the 0th layer corresponding to the corresponding chip” in line 22. It is unclear how the 0th layer corresponds to a corresponding chip. For the purpose of examination, it will be interpreted as "the group in the 0th layer corresponding to the corresponding chip".
Claim 2 recites the limitation “a channel belonging to the group that does not correspond to the group in the first layer” in lines 7-8. It is unclear what group this channel belongs to; if it is the group in the 0th layer from claim 1, it is further unclear how the group in the 0th layer can both correspond to the group in the first layer and not correspond to the group in the first layer. For the purpose of examination, it will be interpreted as "a channel belonging to a group in the 0th layer that does not correspond to the group in the first layer".
Claim 2 recites the limitation "the group that does not correspond to the group in the first layer" in line 8. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation "the channel belonging to the group corresponding to the chip" in line 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation "the channel belonging to the group corresponding to the chip" in line 9. It is unclear what channel is being referred to. For the purpose of examination, it will be interpreted as "the channel belonging to the group in the first layer corresponding to the chip".
Claim 2 recites the limitation "the group corresponding to the chip" in line 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation "the chip" in line 9. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation “obtain the set of values for the channel belonging to the group in the first layer that does not correspond to the group from another chip corresponding to the group that does not correspond to the group in the first layer” in lines 10-13. It is unclear how the set of values for the channel belonging to the group in the first layer can be obtained in order to calculate that same set of values. For the purpose of examination, it will be interpreted as "obtain a set of values for the channel belonging to the group in the 0th layer that does not correspond to the group in the first layer from another chip corresponding to the group in the 0th layer that does not correspond to the group in the first layer".
Claim 2 recites the limitation "the group" in lines 11-12. There is insufficient antecedent basis for this limitation in the claim.
Claim 3 recites the limitation "the edge" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 3 recites the limitation “a condition that the edge is not set between the channels, in the first layer and the 0th layer, that belong to non-corresponding groups” in lines 3-4. It is unclear how the edge between channels in the first layer and the 0th layer that belong to non-corresponding groups is not set, when claim 1 states that an edge is set between channels in the first layer and the 0th layer that belong to non-corresponding groups under a restriction. For the purpose of examination, it will be interpreted as "a condition that at least one edge is not set between the channels, in the first layer and the 0th layer, that belong to non-corresponding groups".
Claim 4 recites the limitation "the chip" in lines 19, 20, 26, 28, 29, 30, 33, and 35. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation "the first weights" in line 24. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation “the group corresponding to the chip in the first layer” in lines 27-28. It is unclear how the chip is in the first layer. For the purpose of examination, it will be interpreted as "the group in the first layer corresponding to the chip".
Claim 4 recites the limitation "the group corresponding to the chip in the first layer" in lines 27-28. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation “the channel belonging to the group that does not correspond to the group corresponding to the chip” in lines 28-29. It is unclear what group the channel belongs to. For the purpose of examination, it will be interpreted as "a channel belonging to the group in the 0th layer that does not correspond to the group in the first layer corresponding to the chip".
Claim 4 recites the limitation "the channel belonging to the group that does not correspond to the group corresponding to the chip" in lines 28-29. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation “the edge connected to the channel belonging to the group corresponding to the chip” in lines 29-30. It is unclear what edge is set, and for which channel in which group. For the purpose of examination, it will be interpreted as "an edge connected to the channel belonging to the group in the first layer corresponding to the chip".
Claim 4 recites the limitation "the edge" in line 31. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 recites the limitation “obtain the set of values for the channel belonging to the group that does not correspond to the group corresponding to the chip from another chip that corresponds to the group that does not correspond to the group corresponding to the chip” in lines 31-33. The limitation is grammatically uninterpretable as written. For the purpose of examination, it will be interpreted as "obtain a set of values for the channel belonging to the group in the 0th layer that does not correspond to the group in the first layer corresponding to the chip from another chip that corresponds to the group in the 0th layer that does not correspond to the group in the first layer corresponding to the chip".
Claim 4 recites the limitation “the channel that belongs to the group corresponding to the chip in the first layer” in lines 34-35. It is unclear how the chip is in the first layer. For the purpose of examination, it will be interpreted as "the channel that belongs to the group in the first layer corresponding to the chip".
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Akin et al. (US 2019/0042920 A1, cited in the Office Action mailed 03/26/2025), hereinafter Akin, in view of Amir et al. (US 2018/0260682 A1, cited in the Office Action mailed 03/26/2025), hereinafter Amir.
Regarding claim 1, Akin teaches an operation device comprising: a plurality of chips (In an example, a neural-core 205 may be on a die with several other neural cores to form a neural-chip 255. Several neural-chips may be packaged and networked together to form neuromorphic hardware 250, which may be included in any number of devices 245, such as servers, mobile devices, sensors, actuators, etc. [see para. 38; FIG. 2]):
wherein each chip comprises memory which stores weights, for a plurality of edges, determined by learning (Akin discloses the spike timing dependent plasticity (STDP) learning technique used to learn the synaptic weights [see para. 35-36]; The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see para. 40; FIG. 2]);
wherein the memory in each chip stores weights determined for the edge between the channels in the first layer and the 0th layer (The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see para. 40; FIG. 2]);
wherein each chip further comprises a processor configured to calculate a set of values for a channel belonging to the first layer, the first layer corresponding to the 0th layer, based on the weights stored in the memory in the corresponding chip, and a set of values for a channel belonging to the 0th layer, the 0th layer corresponding to the corresponding chip (interpreted as wherein each chip further comprises a processor configured to calculate a set of values for a channel belonging to a group in the first layer, the first layer corresponding to the 0th layer, based on the weights stored in the memory in the corresponding chip, and a set of values for a channel belonging to the 0th layer, the group in the 0th layer corresponding to the corresponding chip per 35 U.S.C. 112(b) rejection above) (FIG. 1 illustrates an example diagram of a simplified neural network 110, providing an illustration of connections 135 between a first set of nodes 130 (e.g., neurons) and a second set of nodes 140 (e.g., neurons). [see para. 30; FIG. 1]; The temporal sequence of spikes generated by or for a particular neuron may be referred to as its “spike train.” [see para. 33]; A signal is provided at an input (e.g., ingress spikes, spike in, etc.) to a synapse (e.g., modeled by synaptic weights 220 in a synaptic variable memory) that may result in fan-out connections within the core to other dendrite structures with appropriate weight and delay offsets (e.g., represented by the synapse addresses 215 to identify to which synapse a dendrite corresponds). The signal may be modified by the synaptic variable memory (e.g., as synaptic weights are applied to spikes addressing respective synapses) and made available to the neuron model. For instance, the combination of the neuron membrane potentials 225 may be multiplexed 235 with the weighted spike and compared 240 to the neuron's potential to produce an output spike (e.g., egress spikes via an axon to one or several destination cores) based on weighted spike states. [see para. 38; FIG. 2]; Pick any node in the second set of nodes 140 as the channel in the first layer, then pick any node in the first set of nodes 130, with a connection to the selected node in the second set of nodes 140, as the channel in the 0th layer. The chip that stores the node in the second set of nodes 140 will calculate the set of values, as a spike train, based on the weight stored and the set of values, as a spike train, from the node in the first set of nodes 130).
However, Akin fails to teach learning under a condition that channels, in a first layer and a 0th layer that is a previous layer to the first layer in a neural network, are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips, an edge of the plurality of edges being set between the channels in the first layer and 0th layer belonging to corresponding groups, an edge being set between the channels in the first layer and the 0th layer belonging to non-corresponding groups under a restriction; the edge between the channels in the first layer and the 0th layer belonging to corresponding groups; and a channel belonging to a group in the first layer, the group of the first layer corresponding to a group in the 0th layer, and a channel belonging to the group in the 0th layer, the 0th layer corresponding to the corresponding chip.
In the same field of endeavor, Amir teaches:
learning under a condition that channels, in a first layer and a 0th layer that is a previous layer to the first layer in a neural network, are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips (Amir discloses a deep convolution network comprising multiple layers of neurosynaptic cores [see Amir, para. 33] where certain layers or parts of the neurosynaptic network can be designed using neurosynaptic cores in many different ways [see Amir, para. 36]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]), are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]), an edge of the plurality of edges being set between the channels in the first layer and 0th layer belonging to corresponding groups, an edge being set between the channels in the first layer and the 0th layer belonging to non-corresponding groups under a restriction (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups have edges set between them under the restriction that it is only done when there are overlapping patches across chips),
the channels in the first layer and the 0th layer belonging to corresponding groups (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into the partitions across the chips), and
a channel belonging to a group in the first layer, the group of the first layer corresponding to a group in the 0th layer (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and a group in a first layer would correspond to a group in the previous layer), and a channel belonging to the group in the 0th layer, the 0th layer corresponding to the corresponding chip (interpreted as and a channel belonging to the group in the 0th layer, the group in the 0th layer corresponding to the corresponding chip per 35 U.S.C. 112(b) rejection above) (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and a group in the 0th layer would correspond to the chip).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate learning under a condition that channels, in a first layer and a 0th layer that is a previous layer to the first layer in a neural network, are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips, an edge of the plurality of edges being set between the channels in the first layer and 0th layer belonging to corresponding groups, an edge being set between the channels in the first layer and the 0th layer belonging to non-corresponding groups under a restriction; the edge between the channels in the first layer and the 0th layer belonging to corresponding groups; and a channel belonging to a group in the first layer, the group of the first layer corresponding to a group in the 0th layer, and a channel belonging to the group in the 0th layer, the 0th layer corresponding to the corresponding chip, as suggested in Amir, into Akin, because both systems employ neuromorphic chips to model neural networks (see Akin, para. 38; FIG. 2; see Amir, para. 74). Incorporating the teaching of Amir into Akin would generate efficient core placement that minimizes the communication between cores across chips and maximizes communication between cores within each chip (see Amir, para. 32).
Regarding claim 2, the combination of Akin and Amir, as applied to claim 1 above, teaches all the limitations of claim 1 and further teaches:
wherein the memory in each chip stores the weight, for each edge, determined (The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see Akin, para. 40; FIG. 2]) under a condition that the edges between the channels in the first layer and the 0th layer that belong to non-corresponding groups are set only for some pairs among pairs of channels that belong to the non-corresponding groups, and (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups have edges set between them under the restriction that it is only done when there are overlapping patches across chips, thus not all non-corresponding groups have edges set between them),
wherein when calculating the set of values for the channel belonging to the group in the first layer (Akin discloses a neural network with a first set of nodes 130 and a second set of nodes 140, with connections 135 between them [see Akin, para. 30; FIG. 1]; Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; Pick any node in the second set of nodes 140 as the channel in the first layer and it will belong to a group, by integration of the partitioning of Amir into Akin, in the first layer), if there is a channel belonging to the group that does not correspond to the group in the first layer (interpreted as if there is a channel belonging to a group in the 0th layer that does not correspond to the group in the first layer per 35 U.S.C. 112(b) rejection above) for which an edge connected to the channel belonging to the group corresponding to the chip is set (interpreted as for which an edge connected to the channel belonging to the group in the first layer corresponding to the chip is set per 35 U.S.C. 112(b) rejection above), the processor in each chip is further configured to obtain the set of values for the channel belonging to the group in the first layer that does not correspond to the group from another chip corresponding to the group that does not correspond to the group in the first layer (interpreted as the processor in each chip is further configured to obtain a set of values for the channel belonging to the group in the 0th layer that does not correspond to the group in the first layer from another chip corresponding to the group in the 0th layer that does not correspond to the group in the first layer per 35 U.S.C. 112(b) rejection above) and calculate the set of values for the channel belonging to the group in the first layer using the obtained set of values (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups have edges set between them under the restriction that it is only done when there are overlapping patches across chips, thus non-corresponding groups do have edges set between them; Akin discloses a neural network with a first set of nodes 130 and a second set of nodes 140, with connections 135 between them [see Akin, para. 30; FIG. 1]; For the case where an edge is set between non-corresponding groups as taught by Amir, the two nodes have a connection, thus when calculating values in the first layer as taught by Akin, the group in the first layer will obtain the set of values from the group in the 0th layer).
Regarding claim 3, the combination of Akin and Amir, as applied to claim 1 above, teaches all the limitations of claim 1 and further teaches:
wherein the memory in each chip stores a weight determined for each edge (Akin discloses the spike timing dependent plasticity (STDP) learning technique used to learn the synaptic weights [see para. 35-36]; The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see para. 40; FIG. 2]) under a condition that the edge is not set between the channels, in the first layer and the 0th layer, that belong to non-corresponding groups (interpreted as under a condition that at least one edge is not set between the channels, in the first layer and the 0th layer, that belong to non-corresponding groups per 35 U.S.C. 112(b) rejection above) (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups do not have an edge set between them).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Akin et al. (US 2019/0042920 A1, cited in the Office Action mailed 03/26/2025), hereinafter Akin, in view of Amir et al. (US 2018/0260682 A1, cited in the Office Action mailed 03/26/2025), hereinafter Amir, and further in view of DAVID et al. (US 2019/0108436 A1, cited in the Office Action mailed 03/26/2025), hereinafter David.
Regarding claim 4, Akin teaches an operating device comprising: a plurality of chips (In an example, a neural-core 205 may be on a die with several other neural cores to form a neural-chip 255. Several neural-chips may be packaged and networked together to form neuromorphic hardware 250, which may be included in any number of devices 245, such as servers, mobile devices, sensors, actuators, etc. [see Akin, para. 38; FIG. 2]):
wherein each chip comprises a memory which stores weights determined by learning for a plurality of edges, an edge of the plurality of edges being set between the channels in the first layer and the 0th layer, and a weight between channels in the first layer and the 0th layer being learned (Akin discloses the spike timing dependent plasticity (STDP) learning technique used to learn the synaptic weights [see para. 35-36], and further discloses the neural network with a first set of nodes 130 and a second set of nodes 140, with connections 135 between them [see Akin, para. 30; FIG. 1]; The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see para. 40; FIG. 2]; Thus, a plurality of weights can be determined for a plurality of edges, including edges connecting a first layer and a 0th layer, by learning);
wherein the memory in each chip stores a first weight determined for an edge between the channels in the first layer and the 0th layer, and a second weight for the edge between the channel, belonging to the first layer, corresponding to the chip and the channel, belonging to the 0th layer (Akin discloses the spike timing dependent plasticity (STDP) learning technique used to learn the synaptic weights [see para. 35-36], and further discloses the neural network with a first set of nodes 130 and a second set of nodes 140, with connections 135 between them [see Akin, para. 30; FIG. 1]; The neural-core 205 may include a memory block that is adapted to store the synaptic weights 220, a memory block for neuron membrane potentials 225, integration logic 235, thresholding logic 240, on-line learning and weight update logic based on STDP 210, and a spike history buffer 230. [see para. 40; FIG. 2]; Thus, a plurality of weights can be determined for a plurality of edges connecting a first layer and 0th layer);
wherein each chip further comprises a processor configured to calculate a set of values for the channel that belongs to the first layer corresponding to the 0th layer, based on the first weights and a set of values for a channel belonging to the 0th layer (FIG. 1 illustrates an example diagram of a simplified neural network 110, providing an illustration of connections 135 between a first set of nodes 130 (e.g., neurons) and a second set of nodes 140 (e.g., neurons). [see para. 30; FIG. 1]; The temporal sequence of spikes generated by or for a particular neuron may be referred to as its “spike train.” [see para. 33]; A signal is provided at an input (e.g., ingress spikes, spike in, etc.) to a synapse (e.g., modeled by synaptic weights 220 in a synaptic variable memory) that may result in fan-out connections within the core to other dendrite structures with appropriate weight and delay offsets (e.g., represented by the synapse addresses 215 to identify to which synapse a dendrite corresponds). The signal may be modified by the synaptic variable memory (e.g., as synaptic weights are applied to spikes addressing respective synapses) and made available to the neuron model. For instance, the combination of the neuron membrane potentials 225 may be multiplexed 235 with the weighted spike and compared 240 to the neuron's potential to produce an output spike (e.g., egress spikes via an axon to one or several destination cores) based on weighted spike states. [see para. 38; FIG. 2]; Pick any node in the second set of nodes 140 as the channel in the first layer, then pick any node in the first set of nodes 130, with a connection to the selected node in the second set of nodes 140, as the channel in the 0th layer. The chip that stores the node in the second set of nodes 140 will calculate the set of values, as a spike train, based on the weight stored and the set of values, as a spike train, from the node in the first set of nodes 130);
when calculating the set of values for the channel corresponding to the chip in the first layer (interpreted as when calculating the set of values for the channel in the first layer corresponding to the chip per 35 U.S.C. 112(b) rejection above), obtain the set of values for the channel from another chip (interpreted as obtain a set of values for a channel in the 0th layer from another chip in the 0th layer), and calculate the set of values for the channel corresponding to the chip in the first layer using the obtained set of values and the second weight (FIG. 1 illustrates an example diagram of a simplified neural network 110, providing an illustration of connections 135 between a first set of nodes 130 (e.g., neurons) and a second set of nodes 140 (e.g., neurons). [see para. 30; FIG. 1]; The temporal sequence of spikes generated by or for a particular neuron may be referred to as its “spike train.” [see para. 33]; A signal is provided at an input (e.g., ingress spikes, spike in, etc.) to a synapse (e.g., modeled by synaptic weights 220 in a synaptic variable memory) that may result in fan-out connections within the core to other dendrite structures with appropriate weight and delay offsets (e.g., represented by the synapse addresses 215 to identify to which synapse a dendrite corresponds). The signal may be modified by the synaptic variable memory (e.g., as synaptic weights are applied to spikes addressing respective synapses) and made available to the neuron model. For instance, the combination of the neuron membrane potentials 225 may be multiplexed 235 with the weighted spike and compared 240 to the neuron's potential to produce an output spike (e.g., egress spikes via an axon to one or several destination cores) based on weighted spike states. [see para. 38; FIG. 2]; Pick any node in the second set of nodes 140 as the channel in the first layer, then pick any node in the first set of nodes 130, with a connection to the selected node in the second set of nodes 140, as the channel in the 0th layer. The chip that stores the node in the second set of nodes 140 will calculate the set of values, as a spike train, based on the weight stored and the set of values, as a spike train, from the node in the first set of nodes 130).
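For expository purposes only, the signal flow Akin's paragraph 38 describes (ingress spikes weighted by synaptic weights, accumulated into a membrane potential, and thresholded to produce an egress spike) can be sketched as follows; the threshold value, reset-to-zero behavior, and spike-train values are illustrative assumptions, not taken from Akin:

```python
def neuron_step(potential, in_spikes, weights, threshold=1.0):
    """One update of a simplified integrate-and-fire neuron.

    in_spikes: 0/1 spike states on the input synapses.
    weights:   synaptic weights applied to those spikes.
    Returns the updated membrane potential and the egress spike (0 or 1).
    """
    potential += sum(w * s for w, s in zip(weights, in_spikes))
    if potential >= threshold:
        return 0.0, 1   # fire and reset (assumed reset-to-zero)
    return potential, 0

# Hypothetical spike train on two synapses over three time steps.
weights = [0.6, 0.5]
potential = 0.0
out = []
for spikes in [(1, 0), (0, 1), (1, 1)]:
    potential, egress = neuron_step(potential, spikes, weights)
    out.append(egress)
# out == [0, 1, 1]: the neuron fires once enough weighted spikes accumulate
```

In the mapping above, the list of egress spikes plays the role of the "set of values, as a spike train" that the chip holding the first-layer node computes from the weighted 0th-layer spike train.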
However, Akin fails to teach learning for a plurality of edges under a condition that channels, in a first layer and a 0th layer that is a previous layer to the first layer in a neural network, are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips, a weight between channels in the first layer and the 0th layer belonging to non-corresponding groups being learned to become 0 or as close to 0 as possible;
an edge between the channels in the first layer and the 0th layer belonging to corresponding groups, and the edge between the channel, belonging to the group in the first layer, corresponding to the chip and the channel, belonging to the group in the 0th layer, which is non-corresponding to the chip; wherein the second weight is equal to or more than a predetermined threshold;
the channel that belongs to the group in the first layer corresponding to the group in the 0th layer, and a channel belonging to the group in the 0th layer corresponding to the chip;
the channel that belongs to the group corresponding to the chip in the first layer; if there is the channel belonging to the group that does not correspond to the group corresponding to the chip and for which the edge connected to the channel belonging to the group corresponding to the chip is set wherein the second weight is determined for the edge; the channel belonging to the group that does not correspond to the group corresponding to the chip from another chip that corresponds to the group that does not correspond to the group corresponding to the chip, and the channel that belongs to the group corresponding to the chip in the first layer.
In the same field of endeavor, Amir teaches:
learning for a plurality of edges under a condition that channels, in a first layer and a 0th layer that is a previous layer to the first layer in a neural network (Amir discloses a deep convolution network comprising multiple layers of neurosynaptic cores [see Amir, para. 33] where certain layers or parts of the neurosynaptic network can be designed using neurosynaptic cores in many different ways [see Amir, para. 36]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]), are divided into groups respectively corresponding to the chips, whose number is equal to a number of the chips (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]), a weight between channels in the first layer and the 0th layer belonging to non-corresponding groups being learned (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups have edges set between them under the restriction that it is only done when there are overlapping patches across chips, with the edges having weights);
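For expository purposes only, Amir's paragraph 44 (cutting an image signal into K disjoint partitions so that overlapping convolution patches generate cross-chip edges only at topographic chip boundaries) can be illustrated with a simple column-wise cut; the partitioning scheme, patch width, and example dimensions are illustrative assumptions chosen for brevity, not drawn from Amir:

```python
def column_partition(width, k):
    """Cut image columns into k disjoint, contiguous partitions."""
    base, extra = divmod(width, k)
    bounds, start = [], 0
    for i in range(k):
        end = start + base + (1 if i < extra else 0)
        bounds.append((start, end))
        start = end
    return bounds

def cross_chip_patches(width, k, patch=3):
    """Count sliding patches (stride 1) that straddle a chip boundary;
    only these generate cross-chip edges in the next convolution layer."""
    bounds = column_partition(width, k)
    cuts = [end for _, end in bounds[:-1]]  # interior chip boundaries
    straddle = 0
    for x in range(width - patch + 1):
        if any(x < c < x + patch for c in cuts):
            straddle += 1
    return straddle

# Hypothetical 16-column feature map on K = 2 chips with 3-wide patches:
# only the 2 patches spanning the single boundary need cross-chip edges.
assert cross_chip_patches(16, 2) == 2
```

The point of the sketch is that the number of straddling patches, and hence the number of cross-chip edges, stays small relative to the total number of patches, consistent with Amir's goal of a minimal number of edges across chip boundaries.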
an edge between the channels in the first layer and the 0th layer belonging to corresponding groups (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and a group in a first layer would correspond to a group in the previous layer with an edge set between them), and the edge between the channel, belonging to the group in the first layer, corresponding to the chip and the channel, belonging to the group in the 0th layer, which is non-corresponding to the chip (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, and non-corresponding groups have edges set between them under the restriction that it is only done when there are overlapping patches across chips, such that there is a group in the first layer with an edge connected to a non-corresponding group in the 0th layer);
the channel that belongs to the group in the first layer corresponding to the group in the 0th layer (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, such that there is a group in the first layer with an edge connected to a corresponding group in the 0th layer), and a channel belonging to the group in the 0th layer corresponding to the chip (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, such that there is a group in the first layer with an edge connected to a corresponding group in the 0th layer, and both groups correspond to the same chip);
the channel that belongs to the group corresponding to the chip in the first layer (interpreted as the channel that belongs to the group in the first layer corresponding to the chip per 35 U.S.C. 112(b) rejection above) (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between them when being cut into partitions across the chips, such that there is a group in the first layer on a corresponding chip); if there is the channel belonging to the group that does not correspond to the group corresponding to the chip (interpreted as if there is a channel belonging to the group in the 0th layer that does not correspond to the group in the first layer corresponding to the chip per 35 U.S.C. 112(b) rejection above) and for which the edge connected to the channel belonging to the group corresponding to the chip is set wherein the second weight is determined for the edge (Amir discloses the network model as a graph comprising nodes (neurosynaptic cores) and edges (connections between neurons on a neurosynaptic core and axons on target cores), which when placed on multi-chip hardware are partitioned with each partition placed on one chip [see Amir, para. 35], and a method for assigning neurons to the neurosynaptic cores, by labelling neurons based on their mapping from the input domain and grouping them based on the labelling, such that each group is assigned to a neurosynaptic core [see Amir, para. 74]; In the case of a multi-chip neuro-synaptic system with K chips, the image signal (H×W×C) would be cut into K disjoint partitions/clusters such that there is minimal number of edges across chip boundaries at the successive convolution layers. Each convolution layer with overlapping patches generates cross-chip edges at the topographic chip boundaries of feature maps. [see Amir, para. 44]; Thus, corresponding groups have edges set between t