DETAILED ACTION
This action is written in response to the application filed 11/21/22. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
Claims 1-54 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Independent claim 1 recites a “brain-like neural network” comprising various ‘modules’. The Examiner interprets ‘module’ in light of the written description at p. 9:
“This kind of brain neural network is easy to be implemented in software, firmware (such as FPGA) or hardware (such as ASIC), which provides the basis for the design and application of brain-like neural network chips.”
In view of the above passage, and because the Applicant provides no definitive guidance on the scope of the term, the Examiner interprets the claimed invention as encompassing software per se, which is not a process, machine, manufacture, or a composition of matter, and therefore is nonstatutory subject matter. Dependent claims 2-54 inherit this deficiency from claim 1.
Claims 1-54 are also rejected under 35 U.S.C. 101 because the claimed invention lacks a specific and substantial utility.
The claimed invention lacks a specific utility.
A "specific utility" is specific to the subject matter claimed and can "provide a well-defined and particular benefit to the public." In re Fisher, 421 F.3d 1365, 1371, 76 USPQ2d 1225, 1230 (Fed. Cir. 2005). This contrasts with a general utility that would be applicable to the broad class of the invention. Office personnel should distinguish between situations where an applicant has disclosed a specific use for or application of the invention and situations where the applicant merely indicates that the invention may prove useful without identifying with specificity why it is considered useful. (MPEP 2107.01(I)(A))
Claim 1 recites “[a] brain-like neural network with memory and abstraction functions”. There are no meaningful limitations in the claim regarding what “memory and abstraction functions” might encompass. Although the specification makes vague references to “object recognition, spatial navigation, reasoning and autonomous decision-making” (specification, p. 2), there is no indication that the claimed invention improves any particular real-world problem. Dependent claims 2-54 inherit this deficiency from claim 1.
The claimed invention lacks a substantial utility.
"[A]n application must show that an invention is useful to the public as disclosed in its current form, not that it may prove useful at some future date after further research. Simply put, to satisfy the ‘substantial’ utility requirement, an asserted use must show that the claimed invention has a significant and presently available benefit to the public." In re Fisher, 421 F.3d 1365, 1374, 76 USPQ2d 1225, 1232. See also MPEP 2107.01.
Independent claim 1 is directed to a “brain-like neural network”. How the claimed neural network is “brain-like” is unspecified. What the neural network does is unspecified. The output of the neural network is unspecified. Because of these deficiencies, the Examiner finds that the claimed invention does not have any significant benefit to the public that was available at the time of filing based on the specification.
Claim Rejections - 35 USC § 112(b) - Indefiniteness
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 7 is rejected under 35 U.S.C. 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. This claim recites “wherein the plurality of neurons of the brain-like neural network are impulsive or non-impulsive neurons”. The Applicant provides no definition for the terms ‘impulsive’ and ‘non-impulsive’ in the written description, and their meaning is not apparent. The term ‘impulsive’ might refer to:
a neuron which fires at a particular time step,
a neuron which is biased towards firing at a particular time step,
a neuron which fires often, or
a neuron with particular plasticity or potentiation characteristics.
Because the Applicant does not make clear the scope and meaning of this term, the Examiner finds it ambiguous, and consequently the claim as a whole is indefinite.
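By way of illustration only, two of the possible readings identified above can be contrasted in a short sketch. All function names, the threshold value, and the dynamics below are assumptions introduced for discussion; they are not definitions drawn from the Applicant's disclosure.

```python
# Illustrative sketch of two possible readings of "impulsive" vs
# "non-impulsive" neurons; values and dynamics are assumptions.

def impulsive_neuron(inputs, threshold=1.0):
    """Spiking reading: emit a discrete spike (1) only when the
    accumulated input crosses a threshold; otherwise emit 0."""
    potential = sum(inputs)
    return 1 if potential >= threshold else 0

def non_impulsive_neuron(inputs):
    """Rate-based reading: emit a continuous activation value
    (here a ReLU), with no discrete firing event."""
    potential = sum(inputs)
    return max(0.0, potential)

print(impulsive_neuron([0.5, 0.75]))      # crosses threshold -> 1
print(non_impulsive_neuron([0.5, 0.75]))  # continuous value -> 1.25
```

Under the first reading, the claim term would cover spiking neurons; under the second, any conventional artificial neuron. The breadth between these readings is part of what renders the term ambiguous.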
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The following are the references relied upon in the rejections below:
Goodfellow (Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Vol. 1. Cambridge, MA, USA: MIT Press, 2017. 777 pages.)
Hunzinger (US 2016/0260012 A1)
Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Goodfellow.
Regarding claim 1, Goodfellow discloses a brain-like neural network with memory and information abstraction functions, comprising:
P. 13, “Some of the earliest learning algorithms we recognize today were intended to be computational models of biological learning, that is, models of how learning happens or could happen in the brain. As a result, one of the names that deep learning has gone by is artificial neural networks (ANNs). The corresponding perspective on deep learning models is that they are engineered systems inspired by the biological brain (whether the human brain or the brain of another animal).”
a perceptual module;
P. 14, “In the 1950s, the perceptron (Rosenblatt, 1958, 1962) became the first model that could learn the weights that defined the categories given examples of inputs from each category.”
an instance encoding module;
P. 166, fig. 6.2, “h1”.
an environment encoding module;
P. 427, “Video game rendering requires performing many operations in parallel quickly. Models of characters and environments are specified in lists of 3-D coordinates of vertices.”
spatial encoding module;
P. 476, “It is also possible to assign spatial coordinates to each hidden unit and form overlapping groups of spatially neighboring units.”
a time encoding module;
“The slowness principle may be introduced by adding a term to the cost function of the form λ Σ_t L(f(x^(t)), f(x^(t+1))), where λ is a hyperparameter determining the strength of the slowness regularization term, t is the index into a time sequence of examples, f is the feature extractor to be regularized, and L is a loss function measuring the distance between f(x^(t)) and f(x^(t+1)).”
a motion and orientation encoding module;
P. 677, describing using DNN to encode face rotation (ie motion and orientation) information: “In one of the cases demonstrated in the figure, the algorithm discovered two independent factors of variation present in images of faces: angle of rotation and emotional expression.”
an information synthesis and exchange module; and
P. 103, “To make a machine learning algorithm, we need to design an algorithm that will improve the weights w in a way that reduces MSEtest when the algorithm is allowed to gain experience by observing a training set (X(train); y(train)).”
a memory module,
Id. The Examiner notes that neural networks learn weight values, which are stored for reuse by each neuron.
wherein each module comprises a plurality of neurons,
The Examiner notes that every technique described in this book pertains to deep neural networks, ie a network comprising a plurality of neurons arranged in layers. See eg p. 166, fig. 6.2, “h1” and “h2”. (Reproduced below.)
[media_image1.png: reproduction of Goodfellow p. 166, fig. 6.2 (greyscale)]
wherein the neurons comprise a plurality of perceptual encoding neurons,
P. 160, “Deep feedforward networks, also called feedforward neural networks, or multilayer perceptrons (MLPs), are the quintessential deep learning models.” (Emphasis added.)
wherein the perceptual module comprises a plurality of said perceptual encoding neurons encoding visual representation information of observed objects,
P. 95, “An example of a classification task is object recognition, where the input is an image (usually described as a set of pixel brightness values), and the output is a numeric code identifying the object in the image.”
wherein the instance encoding module comprises a plurality of said instance encoding neurons encoding instance representation information,
P. 166, fig. 6.2, “h1” and “h2”.
wherein the environment encoding module comprises a plurality of the environment encoding neurons encoding environment representation information,
Id. Neurons arranged as depicted in fig. 6.2 can be used to store environment information as discussed at p. 427 (see cited passage supra).
wherein the spatial encoding module comprises a plurality of the spatial encoding neurons encoding spatial representation information,
Id. Neurons arranged as depicted in fig. 6.2 can be used to store spatial information as discussed at p. 476 (see cited passage supra).
wherein the time encoding module comprises a plurality of the time encoding neurons encoding temporal information,
Id. Neurons arranged as depicted in fig. 6.2 can be used to store temporal information as discussed at p. 359: “A related idea is the use of convolution across a 1-D temporal sequence. This convolutional approach is the basis for time-delay neural networks (Lang and Hinton, 1988; Waibel et al., 1989; Lang et al., 1990). The convolution operation allows a network to share parameters across time but is shallow. The output of convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input. The idea of parameter sharing manifests in the application of the same convolution kernel at each time step.”
wherein the motion and orientation encoding module comprises a plurality of the motion and orientation encoding neurons encoding instantaneous speed information or relative displacement information of intelligent agents,
Id. Neurons arranged as depicted in fig. 6.2 can be used to store motion and orientation information as discussed at p. 677 (see cited passage supra).
wherein the information synthesis and exchange module comprise an information input channel and an information output channel,
information input channel :: p. 166, fig. 6.2 (reproduced supra), x1 and x2 taken together comprise the input channel.
information output channel :: p. 166, fig. 6.2 (reproduced supra), y.
the information input channel comprises a plurality of the information input neurons, and
P. 166, fig. 6.2 (reproduced supra), x1 and x2 are each neurons.
the information output channel comprises a plurality of the information output neurons,
P. 343, “Convolutional networks can be used to output a high-dimensional structured object, rather than just predicting a class label for a classification task or a real value for a regression task. Typically this object is just a tensor, emitted by a standard convolutional layer.” (Emphasis added.)
wherein the memory module comprises a plurality of the memory neurons encoding memory information,
Id. Neurons arranged as depicted in fig. 6.2 can be used to store information. The Examiner interprets “memory information” as encompassing all information stored by these neurons.
wherein the brain-like neural network caches and encodes information through activation of the neurons, and encodes, stores, and transmits information through the connections between the neurons.
P. 103, “To make a machine learning algorithm, we need to design an algorithm that will improve the weights w in a way that reduces MSEtest when the algorithm is allowed to gain experience by observing a training set (X(train); y(train)).”
As set forth above, Goodfellow discloses each of the components recited in claim 1; however, these components are disclosed in separate embodiments throughout a large teaching reference. At the time of filing, it would have been obvious to a person of ordinary skill to include each of the recited elements in a combined brain-like neural network for the respective benefits that each provides: eg the ability to perceive things and encode spatial information is useful for robotic navigation; the ability to identify objects in an environment is useful for general scene comprehension; the ability to encode time information is useful for planning, as well as understanding past events; and input and output channels are essential for any information processing task in a neural network. The Examiner finds that a person of ordinary skill in machine learning / deep learning would be familiar with the entire contents of this book.
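For reference, the slowness-principle regularization term quoted in the time-encoding mapping above can be sketched numerically. The squared-difference loss standing in for L, the value of λ, and the example sequences are illustrative assumptions, not part of the cited reference or the claims.

```python
# Sketch of the slowness regularizer lambda * sum_t L(f(x_t), f(x_{t+1})),
# assuming a squared difference as the loss L. The "features" argument
# stands in for the feature extractor outputs f(x_t) over a time sequence.

def slowness_penalty(features, lam=0.1):
    """Return lam * sum over t of (f(x_{t+1}) - f(x_t))**2."""
    return lam * sum((b - a) ** 2 for a, b in zip(features, features[1:]))

# A slowly varying feature sequence is penalized less than a fast one.
slow = [0.0, 0.1, 0.2, 0.3]
fast = [0.0, 1.0, 0.0, 1.0]
print(slowness_penalty(slow))  # small penalty
print(slowness_penalty(fast))  # large penalty
```

Minimizing this term pressures the feature extractor toward representations that change slowly over time, which is the sense in which the cited passage bears on temporal encoding.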
Regarding claim 2, Goodfellow discloses a brain-like neural network with memory and information abstraction functions according to claim 1, wherein the connections between the neurons includes at least one of following connections:
wherein a plurality of the perceptual encoding neurons respectively form unidirectional or bidirectional excitatory or inhibitory connections with one or more other perceptual encoding neurons, and said one or more perceptual encoding neurons form unidirectional or bidirectional excitatory or inhibitory connections with one or more of the instance encoding neurons/the environment encoding neuron/the spatial encoding neurons/the information input neuron,
P. 166, fig. 6.2 (reproduced supra), illustrating unidirectional excitatory connections (ie synapses). This architecture can be applied to any of the neuron types recited. See mapping for claim 1, supra.
wherein a plurality of the instance encoding neurons respectively form unidirectional excitatory connections with one or more of the information input neurons, can also respectively form the unidirectional or bidirectional excitatory connections with a plurality of the memory neurons, can also respectively form unidirectional or bidirectional activation connections with one or more other instance encoding neurons, and can also respectively form the unidirectional or bidirectional excitatory connections with one or more of the perceptual encoding neurons,
Id.
wherein a plurality of the environment encoding neurons respectively form unidirectional excitatory connections with one or more of the information input neurons, can also respectively form the unidirectional or bidirectional excitatory connections with a plurality of the memory neurons, can also respectively form the unidirectional or bidirectional excitatory connections with one or more other environment encoding neurons, and can also respectively form the unidirectional or bidirectional excitatory connections with one or more of the perceptual encoding neurons,
Id. This architecture can be applied to any of the neuron types recited. See mapping for claim 1, supra.
wherein a plurality of the spatial encoding neurons respectively form unidirectional excitatory connections with one or more of the information input neurons, can also respectively form the unidirectional or bidirectional excitatory connections with a plurality of the memory neurons, can also respectively form the unidirectional or bidirectional excitatory connections with one or more other spatial encoding neurons, and can also respectively form the unidirectional or bidirectional excitatory connections with one or more of the perceptual encoding neurons, wherein a plurality of the instance encoding neurons, a plurality of the environment encoding neurons, and a plurality of the spatial encoding neurons form the unidirectional or bidirectional excitatory connections between each other, wherein a plurality of the time encoding neurons respectively form unidirectional excitatory connections with one or more of the information input neurons,
Id. This architecture can be applied to any of the neuron types recited. See mapping for claim 1, supra.
wherein a plurality of the motion and orientation encoding neurons respectively form unidirectional excitatory connections with one or more of the information input neurons, and can form the unidirectional or bidirectional excitatory connections with one or more of the spatial encoding neurons, wherein a plurality of the information input neurons can also form the unidirectional or bidirectional excitatory connections with one or more other information input neurons, a plurality of the information output neurons can also respectively form the unidirectional or bidirectional excitatory connections with one or more other information output neurons, wherein a plurality of the information input neurons can also respectively form the unidirectional or bidirectional excitatory connections with a plurality of the information output neurons, wherein each information input neuron forms unidirectional excitatory connections with one or more of the memory neurons, wherein a plurality of the memory neurons respectively form unidirectional excitatory connections with one or more of the information output neurons, a plurality of the memory neurons respectively form the unidirectional or bidirectional excitatory connections with one or more other memory neurons, wherein one or more of the information output neurons can respectively form unidirectional excitatory connections with one or more of the instance encoding neurons/the environment encoding neurons/the spatial encoding neurons/the perceptual encoding neurons/the time encoding neurons/the motion and orientation encoding neurons, respectively.
Id. This architecture can be applied to any of the neuron types recited. See mapping for claim 1, supra.
Regarding claim 3, Goodfellow discloses a brain-like neural network with memory and information abstraction functions according to claim 1, wherein picture or video stream are input such that one or more pixel values of multiple pixels of each frame picture are respectively weighted into a plurality of the perceptual encoding neurons so as to activate the plurality of the perceptual encoding neurons, wherein current instantaneous speed of the intelligent agents is obtained and input to the motion and orientation encoding module, and the relative displacement information is obtained by integrating the instantaneous speed against time by a plurality of the motion and orientation encoding neurons, wherein for one or more of the neurons, membrane potential is calculated to determine whether to activate the neurons, and if the neurons are determined to be activated, each downstream neuron is made to accumulate the membrane potential so as to determine whether to activate the neurons, such that the activation of the neurons will propagate in the brain-like neural network, wherein weights of connections between upstream neurons and the downstream neurons is a constant value or dynamically adjusted through a synaptic plasticity process, wherein one or more of the neurons are mapped to corresponding labels as output.
The algorithms and architecture described below can be applied to any of the component neurons described in claim 1 (see mapping supra).
P. 164, eqn. 6.2:
[media_image2.png: reproduction of Goodfellow p. 164, eqn. 6.2 (greyscale)]
P. 166, fig. 6.3, illustrating a ReLU activation function:
[media_image3.png: reproduction of Goodfellow p. 166, fig. 6.3 (greyscale)]
P. 202, algorithm 6.3:
[media_image4.png: reproduction of Goodfellow p. 202, algorithm 6.3 (greyscale)]
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Goodfellow and Hunzinger.
Regarding claim 8, Goodfellow does not disclose the further limitation wherein the plurality of the neurons of the brain-like neural network are spontaneous firing neurons, wherein the spontaneous firing neurons comprise conditionally spontaneous firing neurons and unconditionally spontaneous firing neurons, wherein if the conditionally spontaneous firing neurons are not activated by external input in a first pre-set time interval, the conditionally spontaneous firing neurons are self-activated according to probability P, wherein the unconditionally spontaneous firing neurons automatically gradually accumulate the membrane potential without external input, when the membrane potential reaches the threshold, the unconditionally spontaneous firing neurons activate, and restore the membrane potential to resting potential to restart accumulation process. Hunzinger discloses this limitation:
[0034] “Biological synapses can mediate either excitatory or inhibitory (hyperpolarizing) actions in postsynaptic neurons and can also serve to amplify neuronal signals. Excitatory signals depolarize the membrane potential (i.e., increase the membrane potential with respect to the resting potential). If enough excitatory signals are received within a certain time period to depolarize the membrane potential above a threshold, an action potential occurs in the postsynaptic neuron. In contrast, inhibitory signals generally hyperpolarize (i.e., lower) the membrane potential. Inhibitory signals, if strong enough, can counteract the sum of excitatory signals and prevent the membrane potential from reaching a threshold. In addition to counteracting synaptic excitation, synaptic inhibition can exert powerful control over spontaneously active neurons. A spontaneously active neuron refers to a neuron that spikes without further input, for example due to its dynamics or a feedback. By suppressing the spontaneous generation of action potentials in these neurons, synaptic inhibition can shape the pattern of firing in a neuron, which is generally referred to as sculpturing. The various synapses 104 may act as any combination of excitatory or inhibitory synapses, depending on the behavior desired.”
At the time of filing, it would have been obvious to a person of ordinary skill to combine the spontaneously active neuron model described by Hunzinger with the Goodfellow system because, as Hunzinger explains, inhibition of spontaneously active neurons can shape the pattern of firing in a neuron, which can improve learning performance.
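For discussion purposes, the two spontaneous-firing behaviors recited in claim 8 can be sketched as follows. The probability value, accumulation increment, threshold, and function names are assumptions introduced for illustration, not disclosures of either reference.

```python
# Sketch of the two spontaneous-firing behaviors recited in claim 8;
# all parameter values are illustrative assumptions.
import random

def conditionally_spontaneous(received_input, p=0.3, rng=random.Random(0)):
    """If external input arrives in the pre-set interval, activate;
    otherwise self-activate with probability p."""
    if received_input:
        return True
    return rng.random() < p

def unconditionally_spontaneous(steps, increment=0.25, threshold=1.0):
    """Automatically accumulate membrane potential each time step with
    no external input; fire and reset to resting potential at threshold."""
    potential, spikes = 0.0, []
    for _ in range(steps):
        potential += increment
        if potential >= threshold:
            spikes.append(True)
            potential = 0.0  # restore to resting potential
        else:
            spikes.append(False)
    return spikes

# With an increment of 0.25, the neuron fires every fourth time step.
print(unconditionally_spontaneous(8))
```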
Allowable Subject Matter
Claims 4-7 and 9-54 are allowable over the prior art, but are rejected under § 101 as set forth above. (Claim 7 is also rejected under § 112(b).)
Additional Relevant Prior Art
The following references were identified by the Examiner as being relevant to the disclosed invention, but are not relied upon in any particular prior art rejection:
Kasabov discloses various techniques for spatio-temporal pattern recognition using a neural network, including eg spike driven synaptic plasticity (see p. 819). (Kasabov, Nikola. "Brain-like Information Processing for Spatio-Temporal Pattern Recognition." Springer Handbook of Bio-/Neuroinformatics (2014): 813-834.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Gonzales whose telephone number is (571) 270-3837. The examiner can normally be reached on Monday-Friday 7 a.m. to 4 p.m. MT. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092.
Information regarding the status of an application may be obtained from the USPTO Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
/Vincent Gonzales/Primary Examiner, Art Unit 2124