Prosecution Insights
Last updated: April 19, 2026
Application No. 18/138,015

RAPID LEARNING WITH HIGH LOCALIZED SYNAPTIC PLASTICITY

Status: Non-Final OA (§101)
Filed: Apr 21, 2023
Examiner: TRAN, QUOC A
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: The University of Chicago
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average; 590 granted / 735 resolved; +25.3% vs TC avg)
Interview Lift: strong, +29.4% for resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 21 applications currently pending
Career History: 756 total applications across all art units

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 735 resolved cases.

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This is a Non-Final Office Action in response to the patent application filed 04/21/2023, which claims priority from Provisional Application 63/335,684, filed 04/27/2022. Claims 1-20 are pending. Claims 1, 10, and 17 are independent.

Drawings

Color photographs and color drawings are not accepted in utility applications unless a petition filed under 37 CFR 1.84(a)(2) is granted. It is noted that the petition for color drawings or color photographs filed 04/21/2023 is dismissed (see the petition decision dated 05/13/2024).

In addition, in the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

A signed and dated copy of applicant's IDS, filed 07/21/2023, is attached to this Office Action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 fail to recite statutory subject matter, as defined in 35 U.S.C. 101, because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: YES (the claims recite a process, machine, manufacture, or composition of matter). ...
"... for training one or more artificial neural networks capable of rapidly solving tasks with constrained plasticity, comprising: selecting, via one or more processors, a set of artificial neural network parameters; sampling the network parameters from a uniform distribution with defined ranges; selecting connection weights for one or more artificial neural networks; initializing the one or more artificial neural networks using the network parameters and connection weights; running the artificial neural networks on a series of trials of cognitive tasks; and determining whether activity of each of the artificial neural networks is within an acceptable range ..." The claims therefore fall into one of the four categories of patent-eligible subject matter (process, machine, manufacture, or composition of matter).

Step 2A, Prongs One and Two: do the claims recite additional elements that integrate the judicial exception into a practical application? The claims recite additional limitations such as a computer system/processor for training the artificial neural networks, sampling the network parameters from a uniform distribution with defined ranges, connection weights, ..., and determining whether activity of each of the artificial neural networks is within an acceptable range. These limitations recite only generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not integrate the judicial exception into a practical application (MPEP 2106.04(d), 2106.05(f)).

Step 2B (whether the claims amount to significantly more): The claims recite additional limitations such as a computer system/processors.
These limitations recite only generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not amount to significantly more than the abstract idea itself (MPEP 2106.05, 2106.04(d), and 2106.05(f)).

As to dependent claims 2-9, 11-16, and 18-20, these further recite additional limitations such as:
i) a shape of excitatory-to-excitatory weight distribution parameter (κ_EE); or
ii) a shape of excitatory-to-inhibitory weight distribution parameter (κ_EI); or
iii) a shape of inhibitory-to-excitatory weight distribution parameter (κ_IE); or
iv) a shape of inhibitory-to-inhibitory weight distribution parameter (κ_II); or
v) a shape of topological modifier for excitatory-to-excitatory connectivity parameter (λ_EE); or
vi) a shape of topological modifier for excitatory-to-inhibitory connectivity parameter (λ_EI); or
vii) a shape of topological modifier for inhibitory-to-excitatory connectivity parameter (λ_IE); or
viii) a shape of topological modifier for inhibitory-to-inhibitory connectivity parameter (λ_II); or
ix) a global strength of recurrent weights parameter (w_r); or
x) a strength of reciprocal connectivity from excitatory to excitatory units parameter (α_EE); or
xi) a strength of excitatory-to-inhibitory/inhibitory-to-excitatory reciprocal connectivity parameter (α_EI/α_IE); or
xii) a strength of inhibitory-to-inhibitory reciprocal connectivity parameter (α_II); or
xiii) a shape of bottom-up input weight distribution onto excitatory units parameter (κ_inp,E); or
xiv) a shape of bottom-up input weight distribution onto inhibitory units parameter (κ_inp,I); or
xv) a shape of topological modifier for bottom-up input onto excitatory units parameter (λ_inp,E); or
xvi) a shape of topological modifier for bottom-up input onto inhibitory units parameter (λ_inp,I); or
xvii) a global strength of bottom-up input weights parameter (w_inp); or
xviii) a strength of normalization from excitatory to excitatory units parameter (β_E,E); or
xix) a strength of normalization from excitatory to inhibitory units parameter (β_E,I); or
xx) a strength of normalization from inhibitory to excitatory units parameter (β_I,E); or
xxi) a strength of normalization from inhibitory to inhibitory units parameter (β_I,I); or
xxii) a time constant of network activity modulation parameter (T_n); or
xxiii) a shape of top-down weight distribution onto excitatory units parameter (κ_TD,E); or
xxiv) a shape of topological modifier for top-down input onto excitatory units parameter (λ_TD,E); or
xxv) a shape of top-down weight distribution onto inhibitory units parameter (κ_TD,I); or
xxvi) a shape of topological modifier for top-down input onto inhibitory units parameter (λ_TD,I);
as well as randomly sampling; calculating the mean accuracy across the trials of the tasks; selecting one or more top-performing networks according to the maximum mean accuracy at the end of training; identifying top-performing networks; verifying the generalizability of the artificial neural networks; and at least one of i) a recurrent neural network or ii) a feed-forward neural network, etc. These limitations amount only to mere instructions to implement the abstract idea ... and do not include elements that amount to significantly more than the abstract idea, and are rejected under the same rationale.

Accordingly, claims 1-20 fail to recite statutory subject matter as defined in 35 U.S.C. 101.

Allowable Subject Matter

Claims 1-20 would be allowable if rewritten and/or amended to remedy the §101 rejection(s).
Reason for Allowance

Under the broadest reasonable interpretation of the claimed limitations, consistent with the Applicant's specification, the prior art of record, taken individually or in combination, does not expressly teach or render obvious the limitations recited in claims 1, 10, and 17 when taken in the context of the claims as a whole, especially the concept "... for training one or more artificial neural networks capable of rapidly solving tasks with constrained plasticity, comprising: selecting, via one or more processors, a set of artificial neural network parameters; sampling the network parameters from a uniform distribution with defined ranges; selecting connection weights for one or more artificial neural networks; initializing the one or more artificial neural networks using the network parameters and connection weights; running the artificial neural networks on a series of trials of cognitive tasks; and determining whether activity of each of the artificial neural networks is within an acceptable range ..." as claimed, and as further supported in USPGPUB 2024/0160944 A1, para. 118. In addition, no reference was uncovered that would have provided a basis of evidence for asserting a motivation, nor would one of ordinary skill in the art before the effective filing date of the claimed invention have combined the references to arrive at the present invention as recited in the context of independent claims 1, 10, and 17 as a whole. Thus, claims 1, 10, and 17 are allowed over the prior art of record.
Also, dependent claims 2-9, 11-16, and 18-20 are allowable due to their dependency from independent claims 1, 10, and 17, if claims 1-20 are rewritten and/or amended to remedy the §101 rejection(s) as recited herein. To provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters, in response to this Office Action regarding pre-AIA 35 U.S.C. 112, sixth paragraph.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Bellec et al., "A solution to the learning dilemma for recurrent networks of spiking neurons," Nature Communications, 2020, 15 pages, discloses that recurrently connected networks of spiking neurons underlie the astounding information-processing capabilities of the brain; yet, in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. The authors argue that two pieces of this puzzle were provided by experimental data from neuroscience, and that a mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence [Abstract].
Song et al., "Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework," PLOS Computational Biology, 2016, 30 pages, discloses that the ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals ..., which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. The authors describe a framework for gradient-descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge [Abstract].

Huang et al., "Improving Learning Efficiency of Recurrent Neural Network through Adjusting Weights of All Layers in a Biologically-inspired Framework," IEEE, 2017, 7 pages, discloses that brain-inspired models have become a focus in the artificial intelligence field. As a biologically plausible network, the recurrent neural network in the reservoir computing framework has been proposed as a popular model of cortical computation because of its complicated dynamics and highly recurrent connections.
To train this network, rather than adjusting only readout weights as in liquid computing theory or changing only internal recurrent weights, the authors, inspired by the global modulation of human emotions on cognition and motion control, introduce a novel reward-modulated Hebbian learning rule that trains the network by adjusting not only the internal recurrent weights but also the input weights and readout weights together, with solely delayed, phasic rewards. Experimental results show that the proposed method can train a recurrent neural network in a near-chaotic regime to complete motion control and working memory tasks with higher accuracy and learning efficiency [Abstract].

Yang et al., "Task representations in neural networks trained to perform many cognitive tasks," Nature Neuroscience, 2019, 16 pages, discloses training single network models to perform 20 cognitive tasks that depend on working memory, decision making, categorization, and inhibitory control. The authors found that, after training, recurrent units can develop into clusters that are functionally specialized for different cognitive processes, and they introduce a simple yet effective measure to quantify relationships between single-unit neural representations of tasks. Learning often gives rise to compositionality of task representations, a critical feature for cognitive flexibility, whereby one task can be performed by recombining instructions for other tasks. Finally, networks developed mixed task selectivity similar to recorded prefrontal neurons after learning multiple tasks sequentially with a continual-learning technique. This work provides a computational platform to investigate neural representations of many cognitive tasks [Introduction].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN, whose telephone number is (571) 272-8664. The examiner can normally be reached Monday-Friday, 9am-5pm MT.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUOC A TRAN/
Primary Examiner, Art Unit 2145
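The independent claims recite a sample-evaluate-select training procedure: draw network parameters from uniform distributions with defined ranges, initialize networks with those parameters and connection weights, run trials of cognitive tasks, check that activity stays within an acceptable range, and keep the networks with the highest mean accuracy. A minimal sketch of that pipeline follows; the parameter names echo the claim language, but the toy task, thresholds, and all function names are hypothetical illustrations, not the application's actual implementation.

```python
import random

# Hypothetical sketch of the claimed procedure. Parameter names (k_EE, w_r,
# w_inp) follow the claim language; ranges, the toy task, and the activity
# bounds are illustrative assumptions.
PARAM_RANGES = {
    "k_EE": (0.0, 2.0),   # shape of E-to-E weight distribution
    "w_r": (0.1, 1.5),    # global strength of recurrent weights
    "w_inp": (0.1, 1.5),  # global strength of bottom-up input weights
}

def sample_parameters(rng):
    """Sample each network parameter from a uniform distribution."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def run_trial(params, rng):
    """Stand-in for one cognitive-task trial; returns (accuracy, activity)."""
    activity = params["w_r"] * rng.random()    # toy proxy for network activity
    accuracy = 1.0 if activity < 1.0 else 0.0  # toy proxy for task outcome
    return accuracy, activity

def train_and_select(n_networks=50, n_trials=20, activity_range=(0.0, 1.2), seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_networks):
        params = sample_parameters(rng)  # select and sample network parameters
        trials = [run_trial(params, rng) for _ in range(n_trials)]
        mean_acc = sum(acc for acc, _ in trials) / n_trials
        # Determine whether activity stayed within the acceptable range.
        if all(activity_range[0] <= act <= activity_range[1] for _, act in trials):
            results.append((mean_acc, params))
    # Select top-performing networks by maximum mean accuracy.
    results.sort(key=lambda r: r[0], reverse=True)
    return results[:5]

top = train_and_select()
```

In this sketch the acceptable-activity check acts as a viability filter before ranking, matching the claim's two-stage structure of "determining whether activity is within an acceptable range" and then "selecting top-performing networks according to the maximum mean accuracy."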

Prosecution Timeline

Apr 21, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586003: "Method and Apparatus for Generating Operator" (2y 5m to grant; granted Mar 24, 2026)
Patent 12585951: "Method and Electronic Device for Generating Optimal Neural Network (NN) Model" (2y 5m to grant; granted Mar 24, 2026)
Patent 12572772: "Scalable Digital Twin Service System and Method" (2y 5m to grant; granted Mar 10, 2026)
Patent 12561617: "Information Processing Apparatus, Information Processing Method, and Storage Medium" (2y 5m to grant; granted Feb 24, 2026)
Patent 12561610: "Method and Apparatus for Presenting Candidate Character String, and Method and Apparatus for Training Discriminative Model" (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80% (99% with interview, +29.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 735 resolved cases by this examiner. Grant probability is derived from the career allow rate.
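The headline figures above reduce to simple ratios over the examiner's resolved cases. A sketch of how they could be computed: the granted/resolved counts come from this page, while the Tech Center average and the with/without-interview allow rates are hypothetical placeholders chosen to be consistent with the reported deltas.

```python
# Counts reported on this page.
granted, resolved = 590, 735
allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")    # -> 80.3%

# Hypothetical TC-average allow rate, consistent with the reported +25.3%.
tc_avg = 0.55
print(f"Delta vs TC avg: {allow_rate - tc_avg:+.1%}")  # -> +25.3%

def interview_lift(allow_with, allow_without):
    """Relative lift in allow rate for resolved cases with an examiner interview."""
    return allow_with / allow_without - 1.0

# Hypothetical with/without split that would produce a lift near the
# reported +29.4%; the page does not disclose the underlying rates.
print(f"Interview lift: {interview_lift(0.97, 0.75):+.1%}")
```

The lift here is relative (ratio of allow rates minus one), one plausible reading of the "+29.4%" figure; the dashboard could equally be reporting a difference in percentage points.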
