Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Effective Filing Date
The effective filing date of 04/25/2022 is acknowledged.
Information Disclosure Statement
The information disclosure statement(s) submitted on 05/09/2022 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.
Status of Claims
The present application is being examined under the claims filed on 04/25/2022.
Claim(s) 1-20 is/are pending.
Claim(s) 1-20 is/are rejected.
Claim(s) 14, 15 is/are objected to.
Claim Objections
Claim(s) 14 is/are objected to.
Claim 14 is grammatically incorrect as written. For the purpose of compact prosecution, the examiner will interpret the claim as follows; however, the applicant must submit a correction:
“wherein at least a subset of the set of candidate new reservoir subnetworks exists such that each candidate new reservoir subnetwork in the subset corresponds to a respective community sub-graph of the synaptic connectivity graph, each community sub-graph having been generated by determining a partition of the synaptic connectivity graph into a plurality of community sub-graphs by performing an optimization that encourages a higher measure of connectedness between nodes included within each community sub-graph relative to nodes included in different community sub-graphs.”
Claim 15 depends on claim 14 and is thus objected to by virtue of its dependency.
Prior Art References
Tomizawa, F. and Sawada, Y., 2020. Combining ensemble Kalman filter and reservoir computing to predict spatio-temporal chaotic systems from imperfect observations and models. Geoscientific Model Development Discussions, 2020, pp. 1-33. (Hereafter, “Tomizawa”).
Ishii, K., van der Zant, T., Becanovic, V. and Ploger, P., 2004, November. Identification of motion with echo state network. In Oceans '04 MTS/IEEE Techno-Ocean '04 (IEEE Cat. No. 04CH37600) (Vol. 3, pp. 1205-1210). IEEE. (Hereafter, “Ishii”).
Lukoševičius, M. and Jaeger, H., 2009. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), pp. 127-149. (Hereafter, “Lukosevicius”).
A second embodiment described in Lukosevicius, attributed to the work cited therein as reference [122]. (Hereafter, “Holland”).
US 20200143243 A1 - Multiobjective Coevolution Of Deep Neural Network Architectures (Hereafter, “Liang”).
Goulas, A., Damicelli, F. and Hilgetag, C.C., 2021. Bio-instantiated recurrent neural networks: Integrating neurobiology-based network topology in artificial networks. Neural Networks, 142, pp.608-618. (Hereafter, “Goulas”).
Claim Rejections - 35 U.S.C. § 112(d)
The following is a quotation of 35 U.S.C. § 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim(s) 14, 15 is/are rejected under 35 U.S.C. § 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends. The recitation of claim 14 “wherein at least a subset of the set of candidate new reservoir subnetworks each […]” does not further limit the parent claim. Examiner notes that the empty set (i.e., ∅) is a subset of every set. Thus, the subset containing zero candidate new reservoir subnetworks vacuously satisfies “each [subnetwork] correspond[ing] to a respective community sub-graph of the synaptic connectivity graph […]”. Accordingly, this claim element does not further limit the parent claim and is taught by the parent claim.
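Stated formally (a standard set-theoretic observation, not part of the record): the empty set is a subset of every set, and a universally quantified statement over the empty set is vacuously true, so the “at least a subset” condition imposes no constraint on the claimed system:

$$\emptyset \subseteq S \ \text{for every set } S, \qquad \bigl(\forall x \in \emptyset : P(x)\bigr) \ \text{holds for every predicate } P.$$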
Claim 15 is rejected by virtue of dependency on claim 14.
Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 3, 4, 5, 6, 9, 10, 16, 17, 18, 19, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tomizawa in view of Ishii.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tomizawa in view of Ishii, in further view of Lukosevicius.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tomizawa in view of Ishii, in further view of Holland.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tomizawa in view of Ishii, in further view of Liang.
Claim(s) 12, 13, 14, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tomizawa in view of Ishii, in further view of Goulas.
In reference to claim 1.
Tomizawa teaches:
“1. A system comprising an ensemble model that has been configured through training to perform a machine learning task by processing a model input to generate an ensemble model output, the ensemble model comprising a plurality of reservoir computing neural networks that are each configured to process the model input to generate a respective reservoir computing neural network output,”
(Tomizawa 5627, “[…] the state space is divided into g groups […] Each group is predicted by a different reservoir placed in parallel. The ith reservoir accepts the state variables of the ith group as well as adjacent l grids, […]”)
“model input” is taught by the state space that is input into the g reservoirs.
“ensemble model output” is taught by the collective reservoir predictions.
“wherein the ensemble model generates the ensemble model output by combining the respective reservoir computing neural network outputs,”
(Tomizawa Figure 2, “g(i)”, The collective outputs of all of the reservoirs are the output of the ensemble.)
[media_image1.png: reproduction of Tomizawa, Figure 2]
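For illustration, a minimal sketch of the parallel-reservoir arrangement described above (Python; all class names, dimensions, and scaling constants are illustrative stand-ins, not taken from Tomizawa):

    import numpy as np

    class Reservoir:
        # Minimal echo state reservoir: fixed random input and recurrent
        # weights, trainable linear readout.
        def __init__(self, n_in, n_res, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))  # fixed input weights
            A = rng.uniform(-0.5, 0.5, (n_res, n_res))
            A *= 0.9 / max(abs(np.linalg.eigvals(A)))          # spectral radius < 1
            self.A = A                                         # fixed adjacency matrix
            self.W_out = np.zeros((n_out, n_res))              # the only trained weights
            self.r = np.zeros(n_res)

        def step(self, u):
            # Standard reservoir state update, then linear readout.
            self.r = np.tanh(self.A @ self.r + self.W_in @ u)
            return self.W_out @ self.r

    def ensemble_predict(reservoirs, groups, u):
        # Each reservoir predicts its own group of state variables; the
        # concatenated group predictions form the ensemble model output.
        return np.concatenate([res.step(u[g]) for res, g in zip(reservoirs, groups)])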
Ishii teaches:
“the ensemble model having been trained by operations comprising, at each training stage in a sequence of training stages:”
“obtaining a [current ensemble model that comprises a] plurality of current reservoir computing neural networks; determining a respective performance measure for each current reservoir computing neural network in the current ensemble model,”
(Ishii 1207, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished.”)
The “plurality of current reservoir computing neural networks” is taught by the population of ESNs (echo state networks). Ishii uses the terminology “ESN” to refer to reservoir computing networks (Ishii Abstract, “Echo State Networks (ESNs) use a recurrent artificial neural network as a reservoir.”).
The “respective performance measure for each current reservoir computing neural network” is taught by the “fitness criterion”. The error measurement is applied to each member of the population in the evolutionary algorithm of Ishii.
“wherein the performance measure for each current reservoir computing neural network represents a predicted performance of the current reservoir computing neural network on the machine learning task after the current reservoir computing neural network has been trained to perform the machine learning task;”
(Ishii 1207, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished.”)
“determining one or more new reservoir computing neural networks to be added to the current ensemble model based on the performance measures for the current reservoir computing neural networks; and adding the new reservoir computing neural networks to the current ensemble model.”
(Ishii 1207-1208, “In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
“determining one or more new reservoir computing neural networks […] based on the performance measures” is taught by the evolutionary algorithm’s “selection”, “mutation”, and “crossover” stages.
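For illustration, one generation of the evolutionary search Ishii describes can be sketched as follows (Python; truncation selection on an error-based fitness, one-point crossover, and pointwise mutation over the connectivity matrices; all hyperparameters are illustrative stand-ins):

    import numpy as np

    def evolve_reservoirs(population, fitness_fn, n_keep, mut_rate=0.01, rng=None):
        # population: list of reservoir connectivity matrices (np arrays).
        # fitness_fn: maps a matrix to its error (lower is fitter).
        rng = rng or np.random.default_rng()
        # Truncation selection: keep the lowest-error individuals.
        ranked = sorted(population, key=fitness_fn)[:n_keep]
        children = []
        while len(ranked) + len(children) < len(population):
            a, b = rng.choice(len(ranked), 2, replace=False)
            pa, pb = ranked[a].ravel(), ranked[b].ravel()
            cut = rng.integers(1, pa.size)                    # one-point crossover
            child = np.concatenate([pa[:cut], pb[cut:]]).reshape(ranked[a].shape)
            mask = rng.random(child.shape) < mut_rate         # ~1 percent mutation
            child[mask] += rng.normal(0.0, 0.1, mask.sum())
            children.append(child)
        return ranked + children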
Motivation to combine Tomizawa and Ishii.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Tomizawa and Ishii.
Tomizawa discloses an ensemble machine-learning model that combines the results of a plurality of reservoir neural networks.
Ishii discloses the use of evolutionary algorithms for searching the state space of reservoir neural networks and using the results of the search for identifying motion in a robot.
One would be motivated to combine these references because Tomizawa utilizes multiple reservoir networks in its ensemble and Ishii provides a method for finding performant network architectures for the ensemble.
Further, MPEP § 2143(I) (EXEMPLARY RATIONALES) sets forth the Supreme Court rationales for obviousness, including:
(B) Simple substitution of one known element for another to obtain predictable results;
(E) "Obvious to try" – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
In reference to claim 2.
“2. The system of claim 1,” (preamble)
Ishii teaches:
“wherein at each training stage that is after the first training stage in the sequence of training stages, obtaining the current ensemble model comprises:”
“for each current reservoir computing neural network that was added to the ensemble model at the preceding training stage in the sequence of training stages, training the current reservoir computing neural network to generate trained values for a plurality of trainable parameters of the current reservoir computing neural network;”
(Ishii 1207, “Every ESN is trained anew after the evolutionary operations are finished.”)
”and for each current reservoir computing neural network that was already in the ensemble model during the preceding training stage in the sequence of training stages, obtaining trained values for a plurality of trainable parameters of the current reservoir computing neural network that were generated at a previous training stage in the sequence of training stages.”
This is also taught by Ishii since every ESN (i.e., reservoir computing neural network) is retrained after each evolutionary generation. Any reservoir network from the current generation that was also in the previous generation would have its “trained values” obtained from the training process.
In reference to claim 3.
Tomizawa teaches:
“3. The system of claim 1,” (preamble)
“wherein for each current reservoir computing neural network in the ensemble model, determining the performance measure of the current reservoir computing neural network comprises:”
“training the current reservoir computing neural network, on a set of training data, to perform the machine learning task;”
(Tomizawa 5627, “Each reservoir is trained independently using Eq. (13)”)
Ishii teaches:
“evaluating a prediction accuracy of the current reservoir computing neural network on a set of validation data; and determining the performance measure of the current reservoir computing neural network based on the prediction accuracy of the current reservoir computing neural network on the set of validation data.”
(Ishii 1207, “The evolutionary computations use the following error measurement [Equation (3)]”)
The “error measurement” is evaluative of the “prediction accuracy” and the data set used to compute the error measurement teaches the “validation data”.
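For illustration, a performance measure of this kind can be computed as a held-out prediction error (a normalized RMSE is used here as a stand-in; the exact error form of Ishii's Eq. (3) is not reproduced):

    import numpy as np

    def validation_error(model, U_val, Y_val):
        # Predicted performance on held-out validation data: normalized RMSE
        # between the model's predictions and the validation targets. `model`
        # is assumed to expose a step(u) method, as in the Reservoir sketch above.
        preds = np.stack([model.step(u) for u in U_val])
        return np.sqrt(np.mean((preds - Y_val) ** 2)) / np.std(Y_val)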
In reference to claim 4.
“4. The system of claim 1,” (preamble)
Tomizawa teaches:
“wherein at each training stage in the sequence of training stages:”
“the current ensemble model comprises an output layer that comprises a respective output layer parameter corresponding to each current reservoir computing neural network in the current ensemble model; the output layer of the ensemble model processes the reservoir computing neural network outputs of the reservoir computing neural networks, in accordance with values of the output layer parameters, to generate the ensemble model output;”
(Tomizawa Figure 1)
“output layer” is taught by “Output Layer”, and “output layer parameter” is taught by “Wout”.
[media_image2.png: reproduction of Tomizawa, Figure 1]
(Tomizawa Figure 2, “ensemble model output” is taught by “g(i)”.)
“and obtaining the current ensemble model comprises training the values of the output layer parameters.”
(Tomizawa Eq. (15), Solving the equation for the “output layer parameters” is the training process.)
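For context, the closed-form readout fit commonly used in reservoir computing (Tomizawa's Eq. (15) is not reproduced here; $\beta$ and the matrix names are illustrative notation) is ridge regression over the matrix $R$ of collected reservoir states and the matrix $Y$ of training targets:

$$W_{\text{out}} = Y R^{\top}\left(R R^{\top} + \beta I\right)^{-1}$$

Solving this equation for $W_{\text{out}}$ constitutes the training of the output layer parameters.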
In reference to claim 5.
“5. The system of claim 4,” (preamble)
Ishii teaches:
“wherein: determining the respective performance measure for each current reservoir computing neural network comprises, for each current reservoir computing neural network:”
“determining the performance measure for the current reservoir computing neural network based on the trained value of the output layer parameter corresponding to the current reservoir computing neural network.”
(Ishii 1207, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error.”, The “performance measure” is taught by the “fitness criterion”. The fitness criterion is based on the output of the neural network, which is in turn based on the trained value of the output layer parameter.)
In reference to claim 6.
“6. The system of claim 1,” (preamble)
Tomizawa teaches:
“wherein each current reservoir computing neural network comprises a reservoir subnetwork that is configured to process a reservoir subnetwork input, in accordance with values of a set of reservoir subnetwork parameters, to generate a reservoir subnetwork output,”
(Tomizawa Figure 1, “Reservoir Layer”)
(Tomizawa 5626, “The state of the reservoir layer at time step k is represented as a vector rk […] which evolves given the input vector uk […] as follows: [Eq. (12)].”, “reservoir subnetwork parameters” are taught in Eq. (12) by the adjacency matrix “A”.)
“wherein a plurality of the reservoir subnetwork parameters are initialized to static values that are left unchanged during training of the current reservoir computing neural network.”
The adjacency matrices are initialized and do not change during training; only the output layer weights of Eq. (13) are fitted.
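For reference, the reservoir state update that Eq. (12) instantiates has the standard form (reconstructed from the surrounding description; Tomizawa's exact notation may include additional scaling or bias terms):

$$r_k = \tanh\left(A\, r_{k-1} + W_{\text{in}}\, u_k\right)$$

where the adjacency matrix $A$ and the input weights $W_{\text{in}}$ are fixed at initialization and only the output weights $W_{\text{out}}$ are trained.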
In reference to claim 7.
“7. The system of claim 6,” (preamble)
Tomizawa teaches:
“wherein each current reservoir computing neural network further comprises a decoder subnetwork that is configured to process the reservoir subnetwork output, in accordance with values of a set of decoder subnetwork parameters, to generate the reservoir computing neural network output of the current reservoir computing neural network,”
(Tomizawa Figure 1, “Output Layer”, “Wout”)
Lukosevicius teaches:
“wherein the values of at least some of the decoder subnetwork parameters are iteratively adjusted during training of the current reservoir computing neural network.”
(Lukosevicius 132, “Inspired by these findings a new iterative/online RNN training method, called Backpropagation Decorrelation (BPDC), was introduced. It approximates and significantly simplifies the APRL method, and applies it only to the output weights Wout, turning it into an online RC method.”)
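For illustration, an iterative update applied only to the readout weights, in the spirit of the online RC methods Lukosevicius surveys (a plain least-mean-squares gradient step is shown as a stand-in, not the specific BPDC rule):

    import numpy as np

    def online_readout_update(W_out, r, y_target, lr=1e-3):
        # One iterative adjustment of the decoder (readout) weights only;
        # the reservoir weights remain fixed.
        y = W_out @ r                          # current readout prediction
        err = y - y_target                     # prediction error
        return W_out - lr * np.outer(err, r)   # gradient step on squared error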
Motivation to combine Tomizawa, Ishii, and Lukosevicius.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Tomizawa, Ishii, and Lukosevicius.
Tomizawa and Ishii disclose an ensemble of reservoir neural networks and a closed-form solution for fitting the output layer weights of each network.
Lukosevicius discloses a backpropagation method for fitting output layer weights of a reservoir neural network.
One would be motivated to combine these references because the backpropagation methodology of Lukosevicius could allow for learning more complicated functions than the simple linear solution described in Tomizawa.
Further, MPEP § 2143(I) (EXEMPLARY RATIONALES) sets forth the Supreme Court rationales for obviousness, including:
(A) Combining prior art elements according to known methods to yield predictable results;
(B) Simple substitution of one known element for another to obtain predictable results;
(C) Use of known technique to improve similar devices (methods, or products) in the same way;
(D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results;
In reference to claim 8.
“8. The system of claim 6,” (preamble)
Tomizawa teaches:
“wherein each current reservoir computing neural network further comprises an encoder subnetwork that is configured to process the model input, in accordance with values of a set of encoder subnetwork parameters, to generate the reservoir subnetwork input,”
(Tomizawa Figure 1, “Input Layer”, “Win”)
Holland teaches:
“wherein the values of at least some of the encoder subnetwork parameters are iteratively adjusted during training of the current reservoir computing neural network.”
(Holland 139, “Then an evolutionary algorithm was used on individuals consisting of all the weight matrices (Win, W, Wofb) of small (Nx = 5) reservoirs.”)
Motivation to combine Tomizawa, Ishii, and Holland.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Tomizawa, Ishii, and Holland.
Tomizawa and Ishii disclose an ensemble of reservoir neural networks and an evolutionary algorithm for searching for models for the ensemble.
Holland discloses a method for adjusting all weight matrices of a reservoir neural network, including the weights of the input layer.
One would be motivated to combine these references because adjusting the input layer weights during the running of the evolutionary algorithm may yield a more performant ensemble model.
Further, MPEP § 2143(I) (EXEMPLARY RATIONALES) sets forth the Supreme Court rationales for obviousness, including:
(E) "Obvious to try" – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
In reference to claim 9.
“9. The system of claim 6,” (preamble)
Ishii teaches:
“wherein determining one or more new reservoir computing neural networks to be added to the ensemble model based on the performance measures for the current reservoir computing neural networks comprises:”
“identifying a current reservoir computing neural network in the ensemble model as being a high-performing reservoir computing neural network based on the performance measures; determining an attribute tensor defining one or more attributes of the reservoir subnetwork included in the high-performing reservoir computing neural network; and determining a new reservoir computing neural network to be added to the ensemble model based on the attribute tensor.”
(Ishii 1207-1208, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished. In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
“high-performing” models are identified by their “fitness”, which influences how each model will proliferate in the next generation.
The “attribute tensor” and determination of a new neural network is taught by “crossover”.
In reference to claim 10.
“10. The system of claim 9,” (preamble)
Ishii teaches:
“wherein determining the new reservoir computing neural network to be added to the ensemble model based on the attribute tensor comprises:”
“selecting a new reservoir subnetwork, from a set of candidate new reservoir subnetworks, for inclusion in the new reservoir computing neural network based on the attribute tensor of the reservoir subnetwork included in the high-performing reservoir computing neural network.”
(Ishii 1207-1208, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished. In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
In reference to claim 11.
“11. The system of claim 10,” (preamble)
Ishii teaches:
“wherein selecting the new reservoir subnetwork for inclusion in the new reservoir computing neural network comprises:”
“determining a respective attribute tensor for each candidate [new reservoir subnetwork in the set of possible reservoir subnetworks];”
(Ishii 1207-1208, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished. In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
The determination of an “attribute tensor” is taught by “crossover”.
Liang teaches:
“determining, for each candidate [new reservoir subnetwork in the set of candidate new reservoir subnetworks], a similarity measure between: (i) the attribute tensor for the candidate [new reservoir subnetwork], and (ii) the attribute tensor for the [reservoir subnetwork included in the] high-performing [reservoir computing neural network]; and selecting a candidate [new reservoir subnetwork in the set of candidate new reservoir subnetworks] for inclusion [in the new reservoir computing neural network] based on the similarity measures.”
(Liang [0027], “The population is divided into species (i.e. subpopulations) based on a similarity metric. Each species grows proportionally to its fitness and evolution occurs separately in each species.”)
The population is divided based on a “similarity measure” which compares and groups all members – candidates and high-performers included. Selection further occurs based on which subgroup a population member belongs to. Selection is thus based on the similarity measure.
“reservoir subnetwork” is taught in the parent claim.
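For illustration, a similarity-based selection of this kind can be sketched as follows (cosine similarity over flattened attribute tensors is an illustrative choice; Liang's specific similarity metric is not reproduced):

    import numpy as np

    def select_by_similarity(candidates, reference):
        # Pick the candidate whose flattened attribute tensor is most
        # similar to that of the high-performing network.
        ref = reference.ravel()
        def cos(v):
            v = v.ravel()
            return float(v @ ref) / (np.linalg.norm(v) * np.linalg.norm(ref))
        return max(candidates, key=cos)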
Motivation to combine Tomizawa, Ishii, and Liang.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Tomizawa, Ishii, and Liang.
Tomizawa and Ishii disclose an ensemble of reservoir neural networks and an evolutionary algorithm for searching for models for the ensemble.
Liang discloses an extension of evolutionary algorithms that adds additional steps to the selection phase.
One would be motivated to combine these references because the additional selection-phase steps that Liang discloses could reasonably be expected to produce better results during the model search phase of Tomizawa and Ishii.
Further, MPEP § 2143(I) (EXEMPLARY RATIONALES) sets forth the Supreme Court rationales for obviousness, including:
(E) "Obvious to try" – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
In reference to claim 12.
“12. The system of claim 10,” (preamble)
Goulas teaches:
“wherein: at least a subset of the [reservoir computing] neural networks in the ensemble model are brain emulation [reservoir computing] neural networks, each brain emulation [reservoir computing] neural network having a brain emulation [reservoir] architecture that comprises a plurality of brain emulation parameters that, when initialized, represent biological connectivity between a plurality of neuronal elements in a brain of a biological organism,”
(Goulas Abstract, “Here, we examine strategies to construct recurrent neural networks (RNNs) that instantiate the network topology of brains of different species. We refer to such RNNs as bio-instantiated.”)
“the plurality of brain emulation parameters having been determined from a synaptic connectivity graph that represents the synaptic connectivity between the neuronal elements in the brain of the biological organism, the synaptic connectivity graph comprising (i) a plurality of nodes and (ii) a plurality of edges that each connect a respective pair of nodes,”
(Goulas Fig. 1)
[media_image3.png: reproduction of Goulas, Figure 1]
“and at least a subset of the set of candidate new reservoir subnetworks each correspond to a respective sub-graph of the synaptic connectivity graph.”
The empty set is a subset of every set. Thus, the empty subset of candidate new reservoir subnetworks vacuously satisfies the requirement that each subnetwork in the subset correspond to a respective sub-graph of the synaptic connectivity graph. Accordingly, this claim element does not further limit the parent claim and is taught by the parent claim.
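For illustration, a reservoir weight matrix can be instantiated from a synaptic connectivity graph in the spirit of Goulas' bio-instantiated RNNs (the function below is an illustrative stand-in: it preserves the biological topology, draws random weights on existing edges, and rescales the spectral radius):

    import numpy as np

    def reservoir_from_connectome(adjacency, scale=0.9, seed=0):
        # adjacency: binary matrix of the synaptic connectivity graph.
        rng = np.random.default_rng(seed)
        W = adjacency * rng.normal(0.0, 1.0, adjacency.shape)  # weights only on edges
        radius = max(abs(np.linalg.eigvals(W)))                # spectral radius
        return scale * W / radius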
Motivation to combine Tomizawa, Ishii, and Goulas.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Tomizawa, Ishii, and Goulas.
Tomizawa and Ishii disclose an ensemble of reservoir neural networks and an evolutionary algorithm for searching for models for the ensemble.
Goulas discloses a methodology for instantiating reservoir neural networks based on biological brains, i.e. “bio-instantiated neural networks”.
One would be motivated to combine these references because the bio-instantiation of Goulas could reasonably be expected to produce better results during either the model search phase or the inference phase of Tomizawa and Ishii.
Further, MPEP § 2143(I) (EXEMPLARY RATIONALES) sets forth the Supreme Court rationales for obviousness, including:
(A) Combining prior art elements according to known methods to yield predictable results;
(B) Simple substitution of one known element for another to obtain predictable results;
(C) Use of known technique to improve similar devices (methods, or products) in the same way;
(E) "Obvious to try" – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
(F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art;
In reference to claim 13.
“13. The system of claim 12,” (preamble)
Tomizawa teaches:
“wherein, for each [brain emulation] reservoir computing neural network, the plurality of [brain emulation] parameters representing synaptic connectivity between the plurality of neuronal elements in the brain of the biological organism are arranged in a two-dimensional weight matrix having a plurality of rows and a plurality of columns, wherein each row and each column of the weight matrix corresponds to a respective neuronal element from the plurality of neuronal elements, and wherein each [brain emulation] parameter in the weight matrix corresponds to a respective pair of neuronal elements in the brain of the biological organism, the pair comprising: (i) the neuronal element corresponding to a row of the [brain emulation] parameter in the weight matrix, and (ii) the neuronal element corresponding to a column of the [brain emulation] parameter in the weight matrix.”
(Tomizawa 5626, “A is the Dr x Dr adjacency matrix of the reservoir which determines the reservoir dynamics”, A details the weight connections between neurons in the reservoir)
Goulas teaches:
“brain emulation” is taught in Goulas in the parent claim.
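To illustrate the indexing convention the claim recites (a standard matrix convention; Tomizawa's matrix is not reproduced): for a $D_r \times D_r$ weight matrix $A$, each parameter $A_{ij}$ corresponds to the pair of neuronal elements comprising (i) the element corresponding to row $i$ and (ii) the element corresponding to column $j$, so that $A_{ij}$ weights the connection between that pair.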
In reference to claim 14.
“14. The system of claim 12,” (preamble)
“wherein at least a subset of the set of candidate new reservoir subnetworks each correspond to a respective community sub-graph of the synaptic connectivity graph, community sub-graph having been generated by determining a partition of the synaptic connectivity graph into a plurality of community sub-graphs by performing an optimization that encourages a higher measure of connectedness between nodes included within each community sub-graph relative to nodes included in different community sub-graphs.”
The empty set is a subset of every set. Thus, the empty subset of candidate new reservoir subnetworks vacuously satisfies “each [subnetwork] correspond[ing] to a respective community sub-graph of the synaptic connectivity graph […]”. Accordingly, this claim element does not further limit the parent claim and is taught by the parent claim.
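For illustration, the recited partitioning corresponds to standard modularity maximization, sketched here with networkx (the karate-club graph is an illustrative stand-in for a synaptic connectivity graph):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Modularity maximization partitions the graph into community sub-graphs
    # with denser connectivity within each community than between communities.
    G = nx.karate_club_graph()
    communities = greedy_modularity_communities(G)
    sub_graphs = [G.subgraph(c).copy() for c in communities]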
In reference to claim 15.
“15. The system of claim 14,” (preamble)
“wherein each of the community sub-graphs is predicted to represent a corresponding community of neuronal elements in the brain of the biological organism.”
This claim element does not further limit the parent claim and is thus taught by the parent claim. See the discussion of subsets in the parent claim mapping.
In reference to claim 16.
“16. The system of claim 1,” (preamble)
Ishii teaches:
“wherein the ensemble model has been trained by performing operations further comprising, at each training stage in the sequence of training stages:”
“determining one or more current reservoir computing neural networks of the plurality of current reservoir computing neural networks in the current ensemble model to be removed from the current ensemble model based on the performance measures for the current reservoir computing neural networks; and removing the determined current reservoir computing neural networks from the current ensemble model.”
(Ishii 1207-1208, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished. In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
Models are removed during the selection phase of EA when they are determined to have low fitness.
In reference to claim 17.
“17. The system of claim 1,” (preamble)
Ishii teaches:
“wherein, for each reservoir computing neural network in the ensemble model:”
“the performance measure of the reservoir computing neural network comprises a plurality of performance values each corresponding to a different class of model input,”
(Ishii Equation (3), The “plurality of performance values” is taught by each index i of the summation in the equation. Each “reference signal” xT is being interpreted as its own “class of model input”.)
“wherein the performance value corresponding to a particular class of model input represents a predicted performance of the reservoir computing neural network on the machine learning task when processing model inputs of the particular class.”
The performance measure is an error computation of the model and thus “represents a predicted performance” of the model related to “inputs of the particular class”.
In reference to claim 18.
“18. The system of claim 1,” (preamble)
Ishii teaches:
“wherein the ensemble model has been trained by performing operations further comprising, at each of one or more training stages in the sequence of training stages:”
“determining one or more new reservoir computing neural networks at random and adding the randomly-determined new reservoir computing neural networks to the current ensemble model.”
(Ishii 1207-1208, “The evolutionary computations use the following error measurement […] The fitness criterion is defined as lowering the error. […] Every ESN is trained anew after the evolutionary operations are finished. In work we use evolutionary algorithms (EA), that are applied using truncation selection with an average selection pressure without subpopulations, 1 percent mutation, 50 individuals and one-point crossover. The EA is used when the selected parameters are the weights of the connectivity matrix itself.”)
“determining one or more new reservoir computing neural networks at random” is taught by “mutation” because it occurs at random in EA.
In reference to claims 19 and 20.
Claims 19 and 20 are substantially similar to claim 1 and are thus rejected under the same art.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY RYAN GILLESPIE whose telephone number is (571)272-1331. The examiner can normally be reached M-F, 8 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker A Lamardo, can be reached at 517-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CODY RYAN GILLESPIE/Examiner, Art Unit 2147
/ERIC NILSSON/Primary Examiner, Art Unit 2151