DETAILED ACTION
Response to Arguments
Argument One
Applicant alleges that Examiner erred in stating, in the Final Rejection of 06/06/2025, that claim 1 did not recite the following claim limitation: a first neural network is trained to produce approximations of representations of topological structures that would arise in a second source recurrent neural network. See pgs. 6-7 of Applicant’s Remarks submitted on 09/08/2025.
Respectfully, Examiner disagrees. Claim 1 as filed on 02/20/2025 recited: a neural network trained to produce: in response to a first input of image data an approximation of a first representation of topological structures in patterns of activity that would arise in a source recurrent neural network. See claim 1 filed on 02/20/2025 (emphasis added). As the emphasized claim elements above show, the claim recited a neural network, not a first neural network as Applicant asserts, and it also recited in response to a first input of image data, which Applicant appears to have overlooked. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Accordingly, Examiner stands by the rejection issued in the Final Rejection of 06/06/2025.
Argument Two
Applicant again argues that the prior art of Chatzis does not teach the claim one limitation of wherein the neural network differs from the source recurrent neural network. See pgs. 7-8 of Applicant’s Remarks submitted on 09/08/2025. To support this argument, Applicant points to Chatzis, stating that the two components are part of an Echo State Network (ESN), that updating the ESN entails updating the two components, and that Chatzis hence does not read upon the above claim limitation. Id.
Respectfully, Examiner disagrees. As MPEP §2173.01 details: “[t]he plain meaning of a term means the ordinary and customary meaning given to the term…[t]he ordinary and customary meaning of a term may be evidenced by a variety of sources…[h]owever, the best source for determining the meaning of a claim term is the specification - the greatest clarity is obtained when the specification serves as a glossary for the claim terms.” (Emphasis added). In this case, the claim term/element at issue with respect to claim one is the neural network differs from the source recurrent neural network.
Unfortunately, Applicant’s Specification does not provide a definition for the claim term/element at issue. Nevertheless, one of ordinary skill in the art would find that the two basic components that make up the ESN, i.e., the recurrent neural network (RNN) and the linear readout layer, differ, since they perform two different functions within the ESN. The RNN that makes up the reservoir maintains a state of nonlinear transformations, while the linear readout layer does not maintain nonlinear states but only computes a linear output. This functional difference is detailed by Chatzis on pg. 570, which states that “[a] recurrent neural network...remains unchanged during training. This RNN is called the reservoir. It is passively excited by the input signal and maintains in its state a nonlinear transformation of the input history. The desired output signal is generated by a linear readout layer attached to the reservoir, which computes a linear combination...from the input-excited reservoir (reservoir states).” (Emphasis added).
Accordingly, in view of the previous remarks, one of ordinary skill in the art would find that Chatzis teaches the claim one limitation of the neural network differs from the source recurrent neural network.
Argument Three
Applicant argues that the prior art of Giusti does not teach the claim one limitation dealing with isomorphic topological reconstructions of the graph nor claim nine’s limitation of occurrences of the topological structures without specifying where the patterns of activity would arise. See pgs. 8-10 of Applicant’s Remarks submitted on 09/08/2025.
Respectfully, Examiner disagrees. The Final Office Action of 06/06/2025 relied upon pgs. 6-7 of Giusti, and in particular fig. 5(a), to teach the limitations of claims one and nine. Contrary to what Applicant has referred to in Applicant’s Remarks, fig. 5(a) does not represent concurrence complexes. Rather, fig. 5(a), which was used to teach the above claim limitations, represents correlation/coherence matrices that form clique complexes. Accordingly, everything that Applicant stated in the Remarks with regard to fig. 5, concerning the binary matrix encoding coactivity patterns in which the rows correspond to a neuron and the columns correspond to a collection of neurons constructed using coactivity patterns, applies only to the concurrence complexes detailed by figs. 5(b) and 5(c), and not to fig. 5(a), which represents correlation/coherence matrices that form clique complexes. Encoding correlation/coherence matrices that form clique complexes is different from encoding concurrence complexes.
With regard to Applicant’s suggestion that Examiner relied upon Giusti’s method of filtration to teach claims one and nine, Examiner replies by stating that only fig. 5(a) of Giusti was used to teach claims one and nine. While fig. 5(a) of Giusti teaches correlation/coherence matrices that form clique complexes, there is no indication in Applicant’s Remarks (nor in the teachings of Giusti) that these correlation/coherence matrices were constructed through the use of filtration. Nor has Applicant provided any evidence that the binarized functional connectivity matrix of fig. 5(a) retains all of the information in the original weighted network.
Accordingly, in view of the previous remarks, one of ordinary skill in the art would find that Giusti teaches the claim one limitation dealing with isomorphic topological reconstructions of the graph and claim nine’s limitation of occurrences of the topological structures without specifying where the patterns of activity would arise.
Argument Four
Applicant alleges that Examiner erred by ignoring the claim language in addressing the so-called "Argument Three" in the response filed 02/20/2025. See pg. 10 of Applicant’s Remarks submitted on 09/08/2025.
Respectfully, Examiner disagrees. Examiner addressed Applicant’s argument on pg. 5 of the Final Office Action of 06/06/2025 by stating that the 35 U.S.C. 103 rejection relied upon the prior art of Chatzis in view of Giusti to teach the claim limitation of an approximation of a first representation of topological structures in patterns of activity that would arise in a source recurrent neural network in response to the first input of image data. As Applicant is aware, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Argument Five
With respect to Applicant’s contentions that the components of Chatzis’ Echo State Network do not differ and that the prior art of Giusti provides an isomorphic topological reconstruction, Examiner has already addressed these issues as detailed above in sections two and three of the arguments.
With respect to Applicant’s other arguments dealing with claim 20, Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, i.e., Chatzis’ training data for the lazy figure 8 generation task does not represent topological structures in patterns of activity in a source recurrent neural network.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/08/2025 has been entered.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged with respect to U.S. Patent Application No. 16/004,757.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/08/2025 and 10/01/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 5-6, 8-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chatzis et al., "The copula echo state network." Pattern Recognition 45.1 (2012)(“Chatzis”) in view of Giusti, et al., "Two’s company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data." Journal of computational neuroscience 41 (2016)(“Giusti”).
Regarding claim 1, Chatzis teaches a device comprising:
a camera(Chatzis, pgs. 574-575, see also figs. 3(a), 3(b), and 3(c), “The pair of stereo cameras on-board the iCub platform was used to capture the demonstrated information, with the camera frame rate set to 20Hz, and the resolution being equal to 320 × 240 pixels as illustrated, e.g., in Fig. 3(b) and (c)[a camera]…[b]ased on this setting, the positions of the tracked markers on the three-dimensional space were presented to the trained models, with the goal to learn what trajectory to follow in order to reach the objects of interest under the five considered alternative scenarios.”);
and a neural network trained to produce(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output[a neural network trained to produce].”):
in response to a first input of image data(Chatzis, pg. 574, sec. 5.1, see also figs. 2 and 3, “In this experiment, we consider a real-life application in the field of robotics: the aim is to teach by demonstration a robot how to grasp a stationary object under different settings… [i]n the model training phase, five different human demonstrators were asked to perform each one of the five tasks[in response to a first input] with the iCub observing their actions…the available training datasets are both limited, since the obtained trajectories were of variable length between 20 and 50 samples in each case… with the camera frame rate set to 20Hz, and the resolution being equal to 320 × 240 pixels as illustrated, e.g., in Fig. 3(b) and (c). Markers were placed on human subjects (Fig. 3(d)) to track the points of interest. Based on this setting, the positions of the tracked markers on the three-dimensional space were presented to the trained models[of image data] with the goal to learn what trajectory to follow in order to reach the objects of interest under the five considered alternative scenarios.”),
in response to a second input of image data(Chatzis, pg. 574, sec. 5.1, see also figs. 2 and 3, “In this experiment, we consider a real-life application in the field of robotics: the aim is to teach by demonstration a robot how to grasp a stationary object under different settings… [i]n the model training phase, five different human demonstrators were asked to perform each one of the five tasks[in response to a second input] with the iCub observing their actions…the available training datasets are both limited, since the obtained trajectories were of variable length between 20 and 50 samples in each case… with the camera frame rate set to 20Hz, and the resolution being equal to 320 × 240 pixels as illustrated, e.g., in Fig. 3(b) and (c). Markers were placed on human subjects (Fig. 3(d)) to track the points of interest. Based on this setting, the positions of the tracked markers on the three-dimensional space were presented to the trained models[of image data] with the goal to learn what trajectory to follow in order to reach the objects of interest under the five considered alternative scenarios.”),
and in response to a third input of image data(Chatzis, pg. 574, sec. 5.1, see also figs. 2 and 3, “In this experiment, we consider a real-life application in the field of robotics: the aim is to teach by demonstration a robot how to grasp a stationary object under different settings… [i]n the model training phase, five different human demonstrators were asked to perform each one of the five tasks[in response to a third input] with the iCub observing their actions…the available training datasets are both limited, since the obtained trajectories were of variable length between 20 and 50 samples in each case… with the camera frame rate set to 20Hz, and the resolution being equal to 320 × 240 pixels as illustrated, e.g., in Fig. 3(b) and (c). Markers were placed on human subjects (Fig. 3(d)) to track the points of interest. Based on this setting, the positions of the tracked markers on the three-dimensional space were presented to the trained models[of image data] with the goal to learn what trajectory to follow in order to reach the objects of interest under the five considered alternative scenarios.”);
wherein the neural network differs from the source recurrent neural network(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output[the neural network].”).
While Chatzis does teach a source recurrent neural network and a first, second, and third input of image data, Chatzis does not teach:
an approximation of a first plurality of digits that represent topological structures in patterns of activity that would arise in a source recurrent neural network in response to the first input of image data, wherein the first plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that would arise in response to the first input of image data;
an approximation of a second plurality of digits that represent topological structures in patterns of activity that would arise in the source recurrent neural network in response to the second input of image data, wherein the second plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that would arise in response to the first input of image data;
an approximation of a third plurality of digits that represent topological structures in patterns of activity that would arise in the source recurrent neural network in response to the third input of image data, wherein the third plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that would arise in response to the first input of image data.
However, Giusti teaches:
an approximation of a first plurality of digits that represent topological structures in patterns of activity that would arise in a source recurrent neural network in response to the first input of image data(Giusti, pgs. 6-7, sec. 4, see also figs. 4 and 5, “One straightforward method for constructing simplicial complexes begins with a graph where vertices represent neural units and edges represent structural or functional connectivity between those units… [g]iven such a graph, one simply replaces every clique (all-to-all connected subgraph) by a simplex on the vertices participating in the clique (Fig. 5a). This procedure produces a clique complex, which encodes the same information as the underlying graph, but additionally completes the skeletal network to its fullest possible simplicial structure.”),
wherein the first plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that arises in response to the first input of image data(Giusti, pgs. 6-7, as fig. 5(a) partly details below, “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
);
an approximation of a second plurality of digits that represent topological structures in patterns of activity that would arise in the source recurrent neural network in response to the second input of image data(Giusti, pgs. 6-7, sec. 4, see also figs. 4 and 5, “One straightforward method for constructing simplicial complexes begins with a graph where vertices represent neural units and edges represent structural or functional connectivity between those units… [g]iven such a graph, one simply replaces every clique (all-to-all connected subgraph) by a simplex on the vertices participating in the clique (Fig. 5a). This procedure produces a clique complex, which encodes the same information as the underlying graph, but additionally completes the skeletal network to its fullest possible simplicial structure.”),
wherein the second plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that arises in response to the first input of image data(Giusti, pgs. 6-7, as fig. 5(a) partly details below, “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
);
an approximation of a third plurality of digits that represent topological structures in patterns of activity that would arise in the source recurrent neural network in response to the third input of image data(Giusti, pgs. 6-7, sec. 4, see also figs. 4 and 5, “One straightforward method for constructing simplicial complexes begins with a graph where vertices represent neural units and edges represent structural or functional connectivity between those units… [g]iven such a graph, one simply replaces every clique (all-to-all connected subgraph) by a simplex on the vertices participating in the clique (Fig. 5a). This procedure produces a clique complex, which encodes the same information as the underlying graph, but additionally completes the skeletal network to its fullest possible simplicial structure.”),
wherein the third plurality of digits provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity in the source recurrent neural network that arises in response to the first input of image data(Giusti, pgs. 6-7, as fig. 5(a) partly details below, “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the teachings of Giusti; the motivation to do so would be to take techniques/tools from neuroscience and apply them to analyze the dynamical behavior of artificial neural network models(Giusti, pg. 1, “The recent development of novel imaging techniques and the acquisition of massive collections of neural data make finding new approaches to understanding neural structure a vital undertaking. Network science is rapidly becoming an ubiquitous tool for understanding the structure of complex neural systems. Encoding relationships between objects of interest using graphs…enables the use of a bevy of well-developed tools for structural characterization as well as inference of dynamic behavior. Over the last decade, network models have demonstrated broad utility in uncovering fundamental architectural principles.”).
Regarding claim 2, Chatzis in view of Giusti teaches the device of claim 1, wherein the topological structures are patterns of signal transmission activity that would arise between two or more nodes(Giusti, pgs. 6-7, As fig. 5(a) details “[c]orrelation or coherence matrices between regional BOLD time series[wherein the topological structures are patterns of signal transmission activity] can be encoded as a type of simplicial complex called a clique complex [between two or more nodes], formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
)
in the source recurrent neural network(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[in the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output.”)
and one or more edges between the nodes(Giusti, pgs. 6-7, As fig. 5(a) details “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex[and one or more edges between the nodes], formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex[as partly shown herein].”
[media_image1.png: Giusti fig. 5(a), greyscale]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the above teachings of Giusti for the same rationale stated at Claim 1.
Regarding claim 5, Chatzis in view of Giusti teaches the device of claim 1, wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits represent(Giusti, pgs. 6-7, As fig. 5(a) details “[c]orrelation or coherence matrices between regional BOLD time series[wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits represent] can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex[as partly shown herein].”
[media_image1.png: Giusti fig. 5(a), greyscale]
)
an increased likelihood that the source recurrent neural network would display activity that matches simplex topological structures(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output” & Chatzis, pgs. 573-574, “[T]he maximization problem: ŷ_t = argmax_{y(t)} { log p(y_t | y_{t-1}; ϕ, {σ_j²}_{j=1}^M) } … [i]n this work…[this problem] is resolved by resorting to the simplex search method… [s]pecifically, a simplex in an M-dimensional space is characterized by the M+1 distinct vectors that are its vertices; for example, in a two-dimensional space, a simplex is a triangle; in a three-dimensional space, it is a pyramid. At each step of the adopted search algorithm, a new point in or near the current simplex is generated. The optimized function value at the new point is compared with the function’s values at the vertices of the simplex, and usually, one of the vertices is replaced by the new point, giving a new simplex[an increased likelihood that the source recurrent neural network would display activity that matches simplex topological structures].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the above teachings of Giusti for the same rationale stated at Claim 1.
Regarding claim 6, Chatzis in view of Giusti teaches the device of claim 1, further comprising a processor(Giusti, pg. 12, “[H]omology can be computed locally and then aggregated, allowing for distributed computation over multiple processors and memory cores[a processor]….”)
that comprises a second neural network coupled to receive the approximations of the pluralities of digits produced by the neural network device and process the received approximations(Chatzis, pg. 573, “[I]n this work we seek to introduce a conditional probability model p(y_j(t) | {y_j(τ)}_{τ=1}^{t-1}) for the ESN-generated predictions[produced by the neural network device]… [t]he so-obtained semiparametric model with likelihood of…[equation 15] will be dubbed the copula echo state network (CESN)[that comprises a second neural network coupled to receive the approximations of the pluralities of digits; and process the received approximations].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the above teachings of Giusti for the same rationale stated at Claim 1.
Regarding claim 8, Chatzis in view of Giusti teaches the device of claim 1, wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits comprises multi-valued, non-binary digits, wherein values of the multi-valued, non-binary digits characterize levels or strengths of connection in the activity(Giusti, pgs. 8-9, see also fig. 6a, “The simplest local measure of structure – the degree of a vertex – naturally becomes a vector-measurement whose entries are the number of maximal simplices of each size in which the vertex participates (Fig. 6a)… the simplex distribution or f-vector is the global count of simplices by size, which provides a global picture of how tightly connected the vertices are [wherein values of the multi-valued, non-binary digits characterize levels or strengths of connection in the activity]….”& Giusti, pgs. 8-9, As fig. 6(a) details “Generalizations of the degree sequence for a simplicial complex. Each vertex has a degree vector giving the number of maximal simplices of each degree to which it is incident. The f-vector gives a list of how many simplices of each degree are in the complex[wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits comprises multi-valued, non-binary digits], and the maximal simplex distribution records only the number of maximal simplices of each dimension[as partly shown herein].”
[media_image2.png: Giusti fig. 6(a), greyscale]
)
arising in the source recurrent neural network(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[arising in the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the above teachings of Giusti for the same rationale stated at Claim 1.
Regarding claim 9, Chatzis in view of Giusti teaches the device of claim 1, wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits represents occurrences of the topological structures without specifying where the patterns of activity would arise(Giusti, pgs. 6-7, As fig. 5(a) details partly below “[c]orrelation or coherence matrices between regional BOLD time series[wherein each of the first plurality of digits, the second plurality of digits, and the third plurality of digits] can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex[represents occurrences of the topological structures without specifying where the patterns of activity would arise].”
[media_image1.png: Giusti fig. 5(a), greyscale]
);
in a graph of the source recurrent neural network(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[in a graph of the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the above teachings of Giusti for the same rationale stated at Claim 1.
Regarding claim 11, Chatzis in view of Giusti teaches the device of claim 1, wherein the device is a classifier(Chatzis, pg. 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir, and a linear readout output layer[wherein the device is a classifier] which maps the reservoir states to the actual output.”).
Claims 20-21 and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Chatzis et al., "The copula echo state network." Pattern Recognition 45.1 (2012)(“Chatzis”) in view of Woodward et al., "A reservoir computing approach to image classification using coupled echo state and back-propagation neural networks." International conference image and vision computing, Auckland, New Zealand. 2011(“Woodward”) and in view of Giusti, et al., "Two’s company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data." Journal of computational neuroscience 41 (2016)(“Giusti”).
Regarding claim 20, Chatzis teaches a method implemented by a neural network device comprising a processor(Chatzis, pg., 577, “The MATLAB implementation of the CESN method shall be made available….”)12, the method comprising:
wherein the neural network device differs from the source recurrent neural network(Chatzis, pg., 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output[the neural network device].”).
While Chatzis does teach a source recurrent neural network, Chatzis does not teach: inputting a plurality of digits that represents topological structures in patterns of activity in a source recurrent neural network, wherein the patterns of activity would be responsive to an input of image data into the source recurrent neural network; processing the plurality of digits, wherein the processing is consistent with a training of the neural network to process different such plurality of digits that represent topological structures in the patterns of activity that would arise in the source recurrent neural network; and outputting a result of the processing of the plurality of digits.13
However, Woodward teaches:
inputting a plurality of digits that represents topological structures in patterns of activity in a source recurrent neural network, wherein the patterns of activity would be responsive to an input of image data into the source recurrent neural network(Woodward, pgs., 2-4, see also fig. 1, “Then, at specific intervals the ESN state is spatially averaged into a smaller sized representation and sampled; these samples are concatenated to construct a vector representing the input image[inputting a plurality of digits that represents topological structures in patterns of activity in a source recurrent neural network,wherein the patterns of activity would be responsive to an input of image data into the source recurrent neural network].”);
processing the plurality of digits, wherein the processing is consistent with a training of the neural network to process different such plurality of digits that represent topological structures in the patterns of activity that would arise in the source recurrent neural network(Woodward, pgs., 2-4, see also fig. 1, “A traditional feed-forward neural network (multi-layer perceptron), with a linear input layer, non-linear hidden layer and linear output layer was used for classification. The standard back-propagation algorithm was used to train the system and optimise synaptic weights - we therefore refer to this network as a back-propagation neural network (BPNN).”).
and outputting a result of the processing of the plurality of digits(Woodward, pgs., 2-4, see also fig. 1, “[G]enerating a set of response vectors. These are then fed into the second component of the system, a feedforward three-layer neural network that is trained using the standard back-propagation algorithm i.e. a back-propagation neural network (BPNN)...to generate a classification vector[and outputting a result of the processing of the plurality of digits].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis with the teachings of Woodward; the motivation to do so would be to couple a recurrent neural network to a multi-layer perceptron network to discover better spatial correlations for computer vision problems(Woodward, pg., 1, “The aim of this research is to investigate the Reservoir Computing approach applied to computer vision problems. As an example application this work focuses on its application to face recognition. The idea is to access a reservoir of complexity in order to project human face image data into a higher dimensional space. In doing so, higher order spatial correlations are accounted for and the dynamics of the reservoir can be read out and used for classification purposes. This is achieved through feeding input images to the reservoir, sampling its dynamics and then feeding this into a second feed-forward neural network that is trained on the data using back-propagation.”).
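For illustration only (not of record; the reservoir trajectory, sampling interval, and pooling width below are hypothetical), the sampling step Woodward describes, spatially averaging the ESN state at intervals and concatenating the samples into a vector representing the input image, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reservoir trajectory: 40 time steps of a 64-unit reservoir
# driven by an input image.
states = rng.normal(size=(40, 64))

def image_vector(states, interval=10, pool=8):
    """Sample the reservoir state at fixed intervals, spatially average each
    sample into a smaller representation, and concatenate the samples into
    a single vector representing the input image."""
    samples = []
    for t in range(interval - 1, states.shape[0], interval):
        pooled = states[t].reshape(-1, pool).mean(axis=1)  # 64 -> 8 averages
        samples.append(pooled)
    return np.concatenate(samples)

v = image_vector(states)  # 4 samples x 8 averages = 32-dimensional vector
```

The resulting response vector would then be fed to the second component, a feed-forward network trained with back-propagation (the BPNN), to generate the classification vector.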
While Chatzis in view of Woodward does teach a source recurrent neural network, the representation and the input of image data, Chatzis in view of Woodward does not teach: and wherein the representation provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity that would arise in the source recurrent neural network responsive to the input of image data.14
However, Giusti teaches:
and wherein the representation provides only an isomorphic topological reconstruction of at least a portion of a functional graph of the patterns of activity that would arise in the source recurrent neural network responsive to the input of image data (Giusti, pgs. 6-7, sec. 4, see also figs. 4 and 5, “One straightforward method for constructing simplicial complexes begins with a graph where vertices represent neural units and edges represent structural or functional connectivity between those units… [g]iven such a graph, one simply replaces every clique (all-to-all connected subgraph) by a simplex on the vertices participating in the clique (Fig. 5a). This procedure produces a clique complex, which encodes the same information as the underlying graph, but additionally completes the skeletal network to its fullest possible simplicial structure.” & Giusti, pgs. 6-7, As fig. 5(a) details partly below “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex, formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.
[media_image1.png: Giusti fig. 5(a), greyscale]”
).15,16
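For illustration only (not of record; the 4-node binarized connectivity matrix below is hypothetical), Giusti's clique-complex construction, replacing every complete (all-to-all) subgraph of a binarized functional connectivity matrix with a simplex, can be sketched as:

```python
import numpy as np
from itertools import combinations

# Hypothetical symmetric binarized functional connectivity matrix.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

def clique_complex(A):
    """Enumerate every all-to-all connected subset of nodes; each such
    clique becomes a simplex of the clique complex."""
    n = A.shape[0]
    simplices = []
    for k in range(1, n + 1):
        for nodes in combinations(range(n), k):
            if all(A[i][j] for i, j in combinations(nodes, 2)):
                simplices.append(nodes)
    return simplices

simplices = clique_complex(A)
# For this matrix: 4 vertices, 4 edges, and one triangle (0, 1, 2).
```

The complex encodes the same information as the underlying graph while completing the skeletal network to its fullest simplicial structure.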
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis in view of Woodward with the teachings of Giusti; the motivation to do so would be to apply techniques/tools from neuroscience to analyze the dynamical behavior of neural network models(Giusti, pg. 1, “The recent development of novel imaging techniques and the acquisition of massive collections of neural data make finding new approaches to understanding neural structure a vital undertaking. Network science is rapidly becoming an ubiquitous tool for understanding the structure of complex neural systems. Encoding relationships between objects of interest using graphs…enables the use of a bevy of well-developed tools for structural characterization as well as inference of dynamic behavior. Over the last decade, network models have demonstrated broad utility in uncovering fundamental architectural principles.”).
Regarding claim 21, Chatzis in view of Woodward and Giusti teaches the method of claim 20, wherein the topological structures are patterns of signal transmission activity between four or more nodes(Giusti, pgs. 6-7, As fig. 5(a) details partly below “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex[wherein the topological structures are patterns of signal transmission activity between four or more nodes], formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
)17
in the source recurrent neural network(Chatzis, pg., 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[in the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output.”)
and three or more edges between the nodes(Giusti, pgs. 6-7, As fig. 5(a) details partly below “[c]orrelation or coherence matrices between regional BOLD time series can be encoded as a type of simplicial complex called a clique complex[and three or more edges between the nodes], formed by taking every complete (all-to-all) subgraph in a binarized functional connectivity matrix to be a simplex.”
[media_image1.png: Giusti fig. 5(a), greyscale]
).18
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chatzis and Woodward with the above teachings of Giusti for the same rationale stated at Claim 20.
Regarding claim 24, Chatzis in view of Woodward and Giusti teaches the method of claim 20, wherein the plurality of digits represent an increased likelihood that the source recurrent neural network would display activity that match simplex topological structures would arise(Chatzis, pg., 572, “As already discussed, an ESN comprises two basic components: a discrete-time RNN, called the reservoir[the source recurrent neural network], and a linear readout output layer which maps the reservoir states to the actual output.” & Chatzis, pgs. 573-574, “[T]he maximization problem:
\[ \hat{y}_t = \arg\max_{y(t)} \left\{ \log p\!\left(y_t \mid y_{t-1}; \phi, \{\sigma_j^2\}_{j=1}^{M}\right) \right\} \]
… [i]n this work…[this problem] is resolved by resorting to the simplex search method… [s]pecifically, a simplex in an M-dimensional space is characterized by the M+1 distinct vectors that are its vertices; for example, in a two-dimensional space, a simplex is a triangle; in a three-dimensional space, it is a pyramid. At each step of the adopted