Prosecution Insights
Last updated: April 19, 2026
Application No. 17/765,895

OPTIMIZING RESERVOIR COMPUTERS FOR HARDWARE IMPLEMENTATION

Final Rejection — §101, §102, §103
Filed: Apr 01, 2022
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ohio State Innovation Foundation
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (308 granted / 403 resolved) — above average, +21.4% vs TC avg
Interview Lift: +25.1% among resolved cases with interview
Avg Prosecution: 3y 4m typical timeline; 54 applications currently pending
Total Applications: 457 across all art units (career history)

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 403 resolved cases

Office Action

Rejections: §101, §102, §103

Response to Arguments

Applicant's arguments filed 10/17/2025 have been fully considered, but they are not persuasive.

Applicant argues, "A claim does not recite a mathematical concept (i.e., the claim limitations do not fall within the mathematical concept grouping), if it is only based on or involves a mathematical concept." Remarks 5. Constructing a random reservoir, measuring performance, choosing hyperparameters, and creating a reservoir network with optimized parameters is only math. It is pure math; there is no version of those steps that is not accomplished with math. That is why those steps are an abstract idea.

Applicant argues that steps (b)-(e), "As described in the specification… advantageously generates a topology that is much easier to realize in hardware (see specification, paragraph [0058])," and as a result integrate the abstract idea into a practical application. Remarks 6. With respect to an improvement to the functioning of a computer or other technology, MPEP 2106.04(d) says, "The Federal Circuit then continued with its analysis under part one of the Alice/Mayo test finding that the claims are not directed to an improvement in the functioning of a computer or an improvement to another technology." Making an abstract idea easier to implement in hardware is an improvement to the abstract idea, not an improvement to the technology. Because it is not an improvement to the computer/technology, it does not amount to significantly more than the abstract idea.

Applicant argues, "Yperman, however, does not teach or suggest a method to optimize the structure of the network topology…" Remarks 7. Applicant optimizes hyperparameters in the claims. Yperman does not need to teach unclaimed elements.

Applicant argues, "the Applicant's claim of a network with an in-degree of 1, which can only have a spectral radius of 0, corresponds to a functioning reservoir computer with non-zero weights. Yperman does not consider this network topology." Remarks 7. Examiner agrees and presents new art necessitated by the amendment.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 3-5 and 9-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mathematical relationship without significantly more. The claims recite constructing a reservoir computer, measuring performance, choosing hyperparameters, and creating a reservoir network with optimized parameters. This judicial exception is not integrated into a practical application because the mention of a method or a computer merely links the abstract idea to the field of computing. The claims do not include additional elements that are sufficient to amount to significantly more than the abstract idea because the claimed computer, according to spec. ¶ 90, can be made entirely of software components. Even if the computer were a hardware component, spec. ¶ 90 makes it clear that it would be a generic computer.
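One way to read the "pure math" characterization (an editorial paraphrase; h, H, R, and the NMSE form below are shorthand, not claim terms or office action language) is that steps (a)-(e) of claim 10, mapped below, amount to a hyperparameter-optimization problem:

    % Editorial paraphrase of the claimed loop; R(h) denotes the reservoir
    % constructed and trained with hyperparameters h.
    h^{*} \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \mathrm{NMSE}\bigl(R(h)\bigr),
    \qquad
    \mathrm{NMSE}(y,\hat{y}) \;=\; \frac{\sum_{n} (y_{n} - \hat{y}_{n})^{2}}{\sum_{n} (y_{n} - \bar{y})^{2}}

The NMSE normalization shown is the common convention (squared error normalized by variance); Yperman sec. 2.1 gives the exact form used in the reference.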
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 3-4 and 9-17 are rejected under 35 U.S.C. 102(a)(1) as being described by "Bayesian optimization of hyper-parameters in reservoir computing" by Yperman et al.

Yperman teaches claim 3, "The method of claim 10, wherein the plurality of hyperparameters describe a reservoir network with extremely low connectivity." (Low connectivity is not a term of art and it is not defined. Figure 2 shows a topology/network with extremely low connectivity.)

Yperman teaches claim 4, "The method of claim 10, wherein the reservoir has no recurrent connections." (Yperman fig. 2 doesn't show any recurrent connection, i.e., a connection to a previous layer.)

Yperman teaches claim 9, "The method of claim 10, wherein the reservoir is a delay line reservoir." (Yperman sec. 2.2: "The concept of a nonlinear delay node as a reservoir was introduced in [27]. It can be implemented in hardware with optical and electronic components [28, 29, 30, 12, 31, 8, 9, 10, 11, 12]. We use two different types of NDNs.")

Yperman teaches claim 10, "A method for optimizing a reservoir computer, the method comprising:"

(a) "constructing a single random reservoir computer, whose reservoir has a maximum in-degree of one, using a plurality of hyperparameters;" (Yperman abs.: "a method for searching the optimal hyper-parameters in reservoir computing, which consists of a Gaussian process with Bayesian optimization…. We apply this method to two types of reservoirs: nonlinear delay nodes and echo state networks." Applying the optimization to a network generates a topology/network. Finding the optimum hyperparameters generates the topology. Yperman fig. 2, below, shows that the virtual nodes have an in-degree of one because there are no recurrent nodes.)

[Image: Yperman fig. 2, greyscale]

(b) "training the reservoir computer over a time interval that is divided into ranges having a fixed time step, wherein a first range is discarded, a second range that is a training period, and a third range that is a testing period;" (Yperman sec. 2.2.1: "They are referred to as 'virtual nodes'. Training of the connections between the reservoir state and the output is done the same as with ESNs." Yperman app. A: "One often uses initialization points to bring the reservoir to a stationary state. We use the first 200 points for initialization, and discard their reservoir states… . So in fact we use 4200 points of the data set." Yperman: "the first 4000 points of the data set: the first 1000 for training, the second for optimizing hyper-parameters, and the third for optimizing the regularization parameter λ. The NMSE is reported on the fourth set." The first 200 points for initialization are the discard range, the first 1000 points for training are Applicant's second range, and the set on which the NMSE is reported is the test range.)

(c) "measuring a performance of the reservoir computer;" (Yperman sec. 2.1: "Performance is measured with the normalized mean squared error (NMSE…")

(d) "choosing a second plurality of hyperparameters;" (Yperman sec. 3: "The vector x are the hyper-parameters of the reservoir and f(x) is the error function, e.g., f(x) = NMSE({γ, η, p}) for the Mackey-Glass NDN…. Intuitively, we expect small changes in the objective function if the hyper-parameters are changed slightly.")

(e) "repeating (a)-(c) with the second plurality of hyperparameters to determine a set of optimized hyperparameters; and" (Yperman abs.: "We have optimized up to six hyper-parameters simultaneously…" Yperman sec. 6.1: "A plot of the NMSE on the validation set as a function of the number of Spearmint iterations is shown in Figure 4…" Fig. 4 shows that the iterations of hyperparameter updates converge on an optimal NMSE.)

(f) "creating a reservoir as a network of interacting nodes with a topology using the set of optimized hyperparameters," (Yperman abs.: "We apply this method to two types of reservoirs: nonlinear delay nodes and echo state networks." Yperman p. 2, sec. 1: "The method is applied to two types of reservoirs: dynamical nonlinear delay nodes (NDNs) and echo state networks (ESNs)." Yperman p. 2, sec. 2.1: "The values of the nodes are updated according to the rule…" Training the network/topology creates the reservoir as a network.) "wherein the topology is a single line." (Yperman fig. 2: there are many single lines in fig. 2, and the reservoir topology of virtual nodes is a single line, see below.)

[Image: Yperman fig. 2, greyscale]
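The claim 10 mapping above walks through an optimize-and-rebuild loop. Below is a minimal editorial sketch of steps (a)-(e) in Python/NumPy, not code from Yperman or the application: the leaky-tanh update, the lower sub-diagonal weight matrix standing in for the "maximum in-degree of one" limitation, the toy task, and plain random search in place of Bayesian optimization are all simplifying assumptions, and the names (build_reservoir, run_and_score, rho_in, sigma, gamma) are illustrative.

    # Editorial sketch of claim 10, steps (a)-(e): build a random reservoir whose
    # weight matrix has maximum in-degree one, train a linear readout, score it
    # with NMSE, and repeat with new hyperparameters. All names are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def build_reservoir(n_nodes, rho_in, sigma):
        """(a) Reservoir with max in-degree 1: each node listens to at most the
        previous node, i.e. only the lower sub-diagonal can be non-zero."""
        W = np.zeros((n_nodes, n_nodes))
        W[np.arange(1, n_nodes), np.arange(n_nodes - 1)] = rng.uniform(-1, 1, n_nodes - 1)
        # Input weights: each node connects to the input with probability sigma.
        W_in = rho_in * rng.uniform(-1, 1, n_nodes) * (rng.random(n_nodes) < sigma)
        return W, W_in

    def run_and_score(W, W_in, u, y, n_discard, n_train, gamma):
        """(b)-(c) Drive the reservoir, discard an initial transient range, fit
        a linear readout on the training range, report NMSE on the test range."""
        n = len(u)
        x = np.zeros(W.shape[0])
        states = np.empty((n, W.shape[0]))
        for t in range(n):
            x = (1 - gamma) * x + gamma * np.tanh(W @ x + W_in * u[t])
            states[t] = x
        S_tr, y_tr = states[n_discard:n_discard + n_train], y[n_discard:n_discard + n_train]
        S_te, y_te = states[n_discard + n_train:], y[n_discard + n_train:]
        w_out = np.linalg.lstsq(S_tr, y_tr, rcond=None)[0]  # linear readout
        err = S_te @ w_out - y_te
        return np.mean(err ** 2) / np.var(y_te)             # NMSE

    # (d)-(e) Propose new hyperparameters and keep the best. Random search is a
    # stand-in here; Yperman uses Gaussian-process Bayesian optimization.
    u = np.sin(0.3 * np.arange(1200)); y = np.roll(u, -1)   # toy one-step-ahead task
    best = (np.inf, None)
    for _ in range(20):
        h = dict(rho_in=rng.uniform(0.1, 2.0), sigma=rng.uniform(0.1, 1.0),
                 gamma=rng.uniform(0.1, 1.0))
        W, W_in = build_reservoir(50, h["rho_in"], h["sigma"])
        nmse = run_and_score(W, W_in, u, y, n_discard=200, n_train=700, gamma=h["gamma"])
        if nmse < best[0]:
            best = (nmse, h)
    print("best NMSE:", best[0], "hyperparameters:", best[1])

Random search here only stands in for step (d)'s choosing of a second plurality of hyperparameters; a Gaussian-process chooser, matching claim 12's language, is sketched after the claim 11-14 mappings below.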
Yperman teaches claim 11, "The method of claim 10, further comprising choosing the plurality of hyperparameters prior to constructing the single random reservoir computer." (Yperman sec. 6.1: "One needs to specify the boundaries of the hyper-parameter search space.")

Yperman teaches claim 12, "The method of claim 11, wherein choosing the plurality of hyperparameters comprises selecting the plurality of hyperparameters by searching a range of values selected to minimize a forecasting error using a Bayesian optimization procedure." (Yperman sec. 6.1: "One needs to specify the boundaries of the hyper-parameter search space…. The NMSE for the MSI is an order of magnitude lower than [50], while the OSI result is four times lower." Yperman sec. 3: "The vector x are the hyper-parameters of the reservoir and f(x) is the error function, e.g., f(x) = NMSE({γ, η, p}) for the Mackey-Glass NDN." The f(x) is mapped to a Gaussian process (GP): "The implementation of the GP with Bayesian updating is done with Spearmint…" Yperman sec. 3.)

Yperman teaches claim 13, "The method of claim 10, further comprising generating a topology using the set of optimized hyperparameters." (Yperman abs.: "We apply this method to two types of reservoirs: nonlinear delay nodes and echo state networks." Yperman p. 2, sec. 1: "The method is applied to two types of reservoirs: dynamical nonlinear delay nodes (NDNs) and echo state networks (ESNs)." Yperman p. 2, sec. 2.1: "The values of the nodes are updated according to the rule…" Training the network/topology creates the reservoir as a network.)

Yperman teaches claim 14, "The method of claim 13, wherein creating the reservoir using the set of optimized hyperparameters comprises creating the reservoir as a network of interacting nodes with the topology." (Same citations as for claim 13; see also Yperman fig. 2.)
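Claim 12 above narrows the hyperparameter chooser to a Bayesian optimization procedure over a bounded search space. A hedged sketch of that substitution, reusing the toy helpers (build_reservoir, run_and_score, u, y) from the earlier sketch, with scikit-optimize's gp_minimize as a stand-in for the Spearmint tool Yperman cites; the search bounds are illustrative assumptions:

    # Hedged sketch: GP-based Bayesian optimization of reservoir hyperparameters.
    # gp_minimize (scikit-optimize) stands in for Spearmint, which Yperman uses;
    # build_reservoir / run_and_score / u / y are the toy helpers sketched above.
    from skopt import gp_minimize
    from skopt.space import Real

    space = [Real(0.1, 2.0, name="rho_in"),
             Real(0.1, 1.0, name="sigma"),
             Real(0.1, 1.0, name="gamma")]

    def objective(params):
        rho_in, sigma, gamma = params
        W, W_in = build_reservoir(50, rho_in, sigma)
        return run_and_score(W, W_in, u, y, n_discard=200, n_train=700, gamma=gamma)

    # Each call fits a GP posterior to the past (hyperparameters, NMSE) pairs
    # and proposes the next point via an acquisition function.
    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print("best NMSE:", result.fun, "at", result.x)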
Yperman teaches claim 15, "The method of claim 13, wherein the topology is a single line." (Yperman fig. 2: there are many single lines in fig. 2, and the reservoir topology of virtual nodes is a single line.)

Yperman teaches claim 16, "The method of claim 1, wherein the plurality of RC hyperparameters comprise:"

"γ, which sets a characteristic time scale of the reservoir," (Time scale is not a term of art and the specification doesn't define it. Yperman fig. 3: "The 'time' t denotes the number of measurements of the error function." The time t is the claimed time scale. Yperman fig. 2: "Values of the nodes are the solution to the MG equation, taken at time intervals θ." The time intervals also teach Applicant's time scale.)

"σ, which determines a probability a node is connected to a reservoir input," (Reservoir input and probability of connection are not terms of art and the specification doesn't define them. Yperman sec. 2.1 shows that the reservoir state at n+1 equals σ(W x(n) + W_in u(n+1) + b), so the probability of connecting to a previous state is varied by the function σ(), which "is a sigmoid…")

"ρin, which sets a scale of input weights," (Yperman sec. 2.1: "The reservoir weight matrix is rescaled as W' = ρW/|λmax|, where |λmax| is the spectral radius of the network (i.e., the largest eigenvalue of W) and ρ is a scaling parameter (effectively the spectral radius of the rescaled network)…" The ρ divided by |λmax| is the claimed ρin.)

"k, a recurrent in-degree of the network, and" (In-degree is not a term of art and it is not defined in the specification. Yperman sec. 2.2.2: "N is the total number of virtual nodes… In our implementation we set k = [N/3]." Yperman's k is Applicant's k.)

"ρr, a spectral radius of the network." (Yperman sec. 2.1: "The reservoir weight matrix is rescaled as W' = ρW/|λmax|, where |λmax| is the spectral radius of the network (i.e., the largest eigenvalue of W) and ρ is a scaling parameter (effectively the spectral radius of the rescaled network)…" ρ is the spectral radius.)

Yperman teaches claim 17, "The method of claim 10, further comprising iterating (a)-(d) a predetermined number of times with different hyperparameters for each iteration." (Yperman fig. 4: "Figure 4: NMSE as a function of the number iterations of the algorithm…")
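For reference, the two Yperman sec. 2.1 expressions quoted in the claim 16 mapping above, restated in consistent notation (the primes and subscripts were garbled in extraction):

    % ESN state update and spectral-radius rescaling, per Yperman sec. 2.1.
    x(n+1) = \sigma\bigl( W\,x(n) + W_{\mathrm{in}}\,u(n+1) + b \bigr),
    \qquad
    W' = \frac{\rho\,W}{\lvert \lambda_{\max} \rvert}

where |λmax| is the largest eigenvalue of W (its spectral radius) and ρ is effectively the spectral radius of the rescaled network.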
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over "Bayesian optimization of hyper-parameters in reservoir computing" by Yperman et al. and "Reservoir Topology in Deep Echo State Networks" by Gallicchio et al.

Yperman teaches claim 5, "The method of claim 10, wherein the reservoir has a spectral radius." (Yperman sec. 1: "Although the training stage is dramatically simplified, one still needs to set several hyper-parameters that determine the general properties of the network, such as its size and spectral radius.") Yperman doesn't teach the spectral radius of zero and the in-degree of one.

However, Gallicchio teaches an in-degree of one in Figure 2a and a spectral radius that equals zero. (Gallicchio p. 6: "The only non-zero elements in Ŵ(l) are located in the lower sub-diagonal…" This means each unit takes input only from the previous unit, and therefore the network has a maximum in-degree of 1. Gallicchio p. 6: "in this case W(l) is nilpotent and hence its spectral radius is always 0…")

Yperman, Gallicchio, and the claims are all directed to reservoir networks. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use the chain topology taught in Gallicchio because "the chain topology results in a particularly simple design strategy that, from the architectural perspective, applies a further simplification to the ring topology by removing one of the connections between the internal units." Gallicchio p. 6.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday - Thursday, 8-4 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2124
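As a numerical check on the Gallicchio chain-topology passage cited in the §103 rejection above (an editorial sketch; the node count and weight range are illustrative, not from Gallicchio): a weight matrix whose only non-zero entries sit on the lower sub-diagonal describes a chain in which each node has in-degree 1, and such a matrix is nilpotent, so its spectral radius is exactly 0 even though its weights are non-zero.

    # Editorial check of the Gallicchio chain-topology property: non-zero weights
    # only on the lower sub-diagonal => in-degree 1, nilpotent, spectral radius 0.
    import numpy as np

    N = 6
    W = np.zeros((N, N))
    W[np.arange(1, N), np.arange(N - 1)] = np.random.uniform(0.5, 1.5, N - 1)

    in_degree = (W != 0).sum(axis=1)             # incoming connections per node
    spectral_radius = np.abs(np.linalg.eigvals(W)).max()

    print("max in-degree:", in_degree.max())     # 1 (node 0 alone has in-degree 0)
    print("spectral radius:", spectral_radius)   # ~0, up to floating-point noise
    print("W^N is zero:", np.allclose(np.linalg.matrix_power(W, N), 0))  # nilpotent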

Prosecution Timeline

Apr 01, 2022
Application Filed
Apr 01, 2022
Response after Non-Final Action
Jul 21, 2025
Non-Final Rejection — §101, §102, §103
Oct 17, 2025
Response Filed
Oct 29, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767
NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12554795
REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12530630
Hierarchical Gradient Averaging For Enforcing Subject Level Privacy
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12524694
OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY
Granted Jan 13, 2026 • 2y 5m to grant
Patent 12524646
VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE
Granted Jan 13, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
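The headline grant probability appears to be the career allow rate reported in the examiner data above; as an editorial check of the arithmetic:

    % 308 granted of 403 resolved cases:
    \frac{308}{403} \approx 0.764 \;\approx\; 76\%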
