Prosecution Insights
Last updated: April 19, 2026
Application No. 17/127,762

SELF ORGANIZATION OF NEUROMORPHIC MACHINE LEARNING ARCHITECTURES

Non-Final OA: §102, §103, §112

Filed: Dec 18, 2020
Examiner: DASGUPTA, SHOURJO
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: California Institute of Technology
OA Round: 4 (Non-Final)

Grant Probability: 65% (Favorable)
Expected OA Rounds: 4-5
Median Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (293 granted / 449 resolved; +10.3% vs TC avg), above average
Interview Lift: +38.1% among resolved cases with interview, a strong lift
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 481 total applications across all art units
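The headline figures reconcile with the raw counts. As a quick check (the with/without-interview split is our assumed reading of how the lift is defined; the dashboard reports only the lift and the blended rate):

```python
# Sanity check of the reported examiner stats. Overall allow rate is grants
# over resolved cases; the interview lift is assumed (our reading, not stated
# by the tool) to be the allow rate with an interview minus the allow rate
# without one, among resolved cases.
granted, resolved = 293, 449
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")        # 65.3% -- matches the reported 65%

rate_with = 0.99                  # reported allow rate with interview
lift = 0.381                      # reported interview lift
rate_without = rate_with - lift
print(f"{rate_without:.1%}")      # ~60.9% implied allow rate without interview
```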

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      11.8%    -28.2%
§102      12.2%    -27.8%
§103      56.8%    +16.8%
§112      15.6%    -24.4%

Tech Center averages are estimates. Based on career data from 449 resolved cases.
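For readers who want to reproduce this kind of breakdown from their own docket data, a hedged sketch of how per-statute rates and deltas against a tech-center average could be computed. The record schema and the exact metric definition are assumptions; the dashboard does not specify them.

```python
# Hypothetical sketch: per-statute rates and deltas vs a tech-center average.
# The (statutes, allowed) record format and the metric are assumptions.
from collections import defaultdict

def statute_deltas(cases, tc_avg):
    """cases: iterable of (statutes_cited, allowed) pairs for resolved cases.
    Returns statute -> (rate, delta vs tech-center average)."""
    counts = defaultdict(lambda: [0, 0])          # statute -> [allowed, total]
    for statutes, allowed in cases:
        for s in statutes:
            counts[s][0] += int(allowed)
            counts[s][1] += 1
    return {s: (a / t, a / t - tc_avg.get(s, 0.0))
            for s, (a, t) in counts.items()}

# Toy usage with made-up records:
cases = [({"103"}, True), ({"103", "112"}, False), ({"101"}, False)]
print(statute_deltas(cases, {"103": 0.40, "112": 0.40, "101": 0.40}))
```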

Office Action

Grounds of rejection: §102, §103, §112
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114.

Claim Objections

3. In the claims received 12/3/25, claim 10 contains a typographical error and is objected to for this minor informality. Specifically, the claim's wherein clause reads "where7in an architecture ...", with a stray '7' disrupting the intended word "wherein." Appropriate correction is required.

Claim Rejections - 35 USC § 112

4. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

5. Claim 35 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. The claim recites, in part, the limitation "wherein the spatiotemporal waves in a lower first layer of the plurality of layers further comprises at least one malfunctioning node." As the Examiner understands the specification (e.g., [0010], [0055], [0068]), the recited waves result from nodes and their excitation, but the waves themselves are not composed of the nodes. Further, the specification teaches that waves "can emerge and travel over layers with arbitrary geometries and even in the presence of defective sensor-nodes" ([0080]); the waves are thus separate and apart from the nodes. Yet the limitation as drafted recites that the waves comprise a node, which does not appear technically correct. The Examiner recommends an amendment clarifying the relationship between the waves and a malfunctioning node in a manner consistent with the specification as filed.

Claim Rejections - 35 USC § 102

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

8. Claims 1, 3-6, 9-11, 14, 22-23, 26-27, 33, and 36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the Non-Patent Literature "Cell-splitting grid: a self-creating and self-organizing neural network" ("Chow").
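As context for the wave limitations at issue in paragraph 5 above and quoted in the claim 1 analysis below: the claims recite first-layer nodes that excite neighbors within a local excitation radius and inhibit nodes beyond a global inhibition radius, leaving a ring of no connection between the two regions. A minimal sketch of that connectivity pattern only; the grid size, radii, and weight values are illustrative assumptions, not taken from the application.

```python
# Sketch of the claimed local-excitation / global-inhibition wiring on a
# small 2-D grid. All parameters are assumed for illustration.
import numpy as np

def build_connectivity(n=20, r_exc=2.0, r_inh=4.0, w_exc=1.0, w_inh=-0.5):
    """Weight matrix over an n-by-n grid of nodes: excitatory weights within
    r_exc, inhibitory weights beyond r_inh, and zero weight in the band of
    distances between the two radii (the "no connection" ring)."""
    ys, xs = np.mgrid[0:n, 0:n]
    coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.zeros_like(dist)
    w[dist <= r_exc] = w_exc      # local excitation region
    w[dist > r_inh] = w_inh       # global inhibition region
    np.fill_diagonal(w, 0.0)      # no self-connection
    return w                      # r_exc < dist <= r_inh stays zero: the ring
```

Distances in the open band between r_exc and r_inh receive zero weight, which is the claimed "ring of no connection" separating the excitation region from the inhibition region.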
Regarding claim 1, Chow teaches a method for constructing a neural network (Abstract, page 373: "A new model of self-creating and self-organizing neural network, cell-splitting grid (CSG), is presented. In this proposed CSG algorithm, the neurons and their connections are created and organized on a 2-D plane according to the input data distribution.") comprising:

under control of a hardware processor in communication with a non-transitory memory configured to store executable instructions (section 5, "Experimental Results," beginning on page 383, first paragraph, discusses applying the CSG algorithm, among others, to data to compare performance (see, e.g., Tables 1-2 on page 384); the Examiner reasons that one of ordinary skill in the art would understand this to involve a computer with essential elements such as a processor and memory, e.g., a memory to store the algorithms to be executed, such as CSG, and a processor to execute them, to obtain the performance data detailed therein):

growing, from one single node, a plurality of layers of a neural network, wherein each layer comprises a plurality of nodes (CSG is understood to start with one neuron, which is subject to splitting; see, e.g., page 377: "The executing steps of the CSG algorithms are as follows: 1. Start from one neuron... 6. When the activation level of winner neuron c decreases to zero, perform the cell-splitting mechanism, i.e., to delete the neuron c and then generate four new neurons within the square region of original neuron c...". See also FIG. 2 on page 378 and FIG. 3 on page 379, which illustrate the same, where the resulting generated neurons are understood to be children having a different depth/layer, akin to the discussion in sections 4.1-4.2 on pages 382-383); and

self-organizing the plurality of layers of the neural network, using spatiotemporal waves in a lower first layer of the plurality of layers of the neural network, wherein each node in the lower first layer makes an excitatory connection with nodes within a local excitation radius and an inhibitory connection with nodes over a global inhibition radius, thus forming a ring of no connection between an excitation region and an inhibition region, and/or a learning rule implemented in a higher second layer of the plurality of layers of the neural network connected to the lower first layer of the plurality of layers of the neural network (page 377: "(1) During the processing in the CSG algorithm, the network itself determines the growth of new neurons according to the activation level. Since the network size is not pre-specified, CSG algorithm is flexible for different input data sets. (2) The weight adaptation in the CSG algorithm is adapted slightly within the winner neuron and its direct neighboring neurons. The learning rate is small and does not decrease to zero. This is called dynamic equilibrium.", which is followed by a setting and adjustment policy for neuron activation level that drives when new neurons are generated; the CSG algorithm is detailed in numbered steps 1-8 on pages 377-378. The Examiner reasons that the algorithm, as detailed and applied, is akin to a learning rule specifying when and how the network expands node by node and layer by layer),

to alter inter-layer connectivity between the lower first layer and the higher second layer (weight initialization and weight adjustment via training, per pages 377-378, which the Examiner understands to define the connectivity between nodes in one layer and another),

wherein said growing and said self-organizing are performed over a plurality of iterations (CSG, as applied to train a self-creating and self-organizing neural network, is iterative per sections 3.3-3.4 and recursive per section 4.1), and

wherein said growing is performed prior to said self-organizing in each of the plurality of iterations (the Examiner reasons from the reference that the network must grow new layers with new neurons before those new neurons in those new layers are subject to any self-organizing).

Regarding claim 3, Chow teaches the method of claim 1, wherein the growing, from the one single node, the plurality of layers of the neural network comprises dividing the one single node to generate a daughter node, of the at least one node, in the lower first layer (per the cell-splitting mechanism cited above for claim 1: page 377, executing steps 1 and 6; FIG. 2 on page 378; FIG. 3 on page 379; sections 4.1-4.2 on pages 382-383).

Regarding claim 4, Chow teaches the method of claim 3, further comprising dividing the daughter node in the lower first layer to generate a further daughter node, of the daughter node of the one single node, in the lower first layer (per the same citations as for claim 3; the Examiner reasons that it is within Chow's teachings for a neuron to have a child neuron in a next layer after splitting, and for that child neuron to be further split, creating what is essentially a grandchild of the first neuron).
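To make the cited cell-splitting mechanism concrete, here is a minimal sketch of the growth step as the Office Action characterizes Chow (page 377, steps 1 and 6): a winner neuron's activation level decays, and at zero the neuron is deleted and replaced by four offspring within its square region. The data structure, parameter values, and winner selection are illustrative assumptions, not Chow's implementation.

```python
# Toy sketch of CSG-style cell splitting as characterized above. Names and
# parameters are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Neuron:
    x: float           # center of the neuron's square region
    y: float
    half: float        # half-width of the square region
    activation: float  # decreases each time the neuron wins

def step(neurons, winner, decay=1.0, init_activation=4.0):
    """One growth step: decay the winner; split it when activation hits zero."""
    winner.activation -= decay
    if winner.activation > 0:
        return neurons
    neurons.remove(winner)                 # delete neuron c...
    h = winner.half / 2
    for dx, dy in [(-h, -h), (-h, h), (h, -h), (h, h)]:
        neurons.append(Neuron(winner.x + dx, winner.y + dy, h,
                              init_activation))  # ...generate four offspring
    return neurons

net = [Neuron(0.0, 0.0, 1.0, activation=2.0)]  # step 1: start from one neuron
for _ in range(2):
    net = step(net, net[0])                    # toy: the first neuron "wins"
print(len(net))                                # 1 neuron -> split -> 4 neurons
```

In Chow's figures the four offspring tile the parent's square region, which the quarter-offsets above mimic; the sketch shows only the delete-and-split bookkeeping, not winner selection or weight adaptation.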
Regarding claim 5, Chow teaches the method of claim 3, further comprising dividing the daughter node in the lower first layer to generate a further daughter node, of the daughter node of the one single node, in the higher second layer (per the same citations and grandchild reasoning discussed above for claims 1 and 4).

Regarding claim 6, Chow teaches the method of claim 1, wherein the growing, from the one single node, the plurality of layers of the neural network comprises dividing the one single node to generate a daughter node, of the one single node, in the higher second layer (per the cell-splitting citations discussed above for claim 1).

Regarding claim 9, Chow teaches the method of claim 1, wherein an architecture of the lower first layer and higher second layer comprises a pooling architecture, and/or wherein an architecture of two layers of the plurality of layers comprises a pooling architecture (page 377; the Examiner understands the following as the pooling of activation inputs in one level/layer until a threshold warrants a generation/split/divide action creating neurons in a next level/layer: "When the neurons are generated, the new neuron is endowed with an initial value as the activation level. When a neuron is activated, the activation level decreases by a constant value. This process continues until (activation level) of one neuron becomes zero and the neuron is split to generate its four offspring neurons.").

Regarding claim 10, Chow teaches the method of claim 1, where7in[1] an architecture of the lower first layer and higher second layer comprises an expansion architecture, and/or wherein an architecture of two layers of the plurality of layers comprises an expansion architecture (the CSG algorithm is explicitly an expansion architecture in its generation of further neurons from existing ones in a next level/layer; see, e.g., the discussions in sections 4.1-4.2 comparing it with other similar expansion architectures that feature parent/child constructs).

Regarding claim 11, Chow teaches the method of claim 1, wherein the lower first layer and/or the higher second layer comprises a square geometry or a rectangular geometry (the geometry taught is square; see, e.g., FIGS. 1-4).

Regarding claim 14, Chow teaches the method of claim 1, wherein the neural network comprises a spiking node, and/or wherein the neural network comprises a spiking neural network (section 3.3, starting on page 376, discusses the CSG algorithm, which the Examiner understands to dynamically expand and organize on the basis of activation, with the expansion specific to the areas of activation as shown in FIGS. 1-4, in a manner the same as a spiking neural network with spiking nodes as generally understood in the state of the art).

Regarding claim 22, Chow teaches the method of claim 1, wherein said self-organizing comprises applying structural training data to the lower first layer (page 376, section 3.2, identifying the operative data as "input data x distributed in n-dimensional space").

Regarding claim 23, Chow teaches the method of claim 1, wherein the learning rule comprises a local learning rule, and/or wherein the learning rule comprises a dynamic learning rule (page 377: "The weight adaptation in the CSG algorithm is adapted slightly within the winner neuron and its direct neighboring neurons. The learning rate is small and does not decrease to zero. This is called dynamic equilibrium.", which the Examiner reasons indicates both local learning, subject to a type of neighbor proximity, and, as expressly characterized, a dynamic learning aspect).

Regarding claim 26, Chow teaches the method of claim 1, wherein the hardware processor comprises a neuromorphic processor (Chow's self-creating and self-organizing neural network is equivalent to a neuromorphic device/construct/processor as generally understood in the state of the art).

Regarding claim 27, the claim includes the same or similar limitations as claim 1, discussed above, and is therefore rejected under the same rationale. The claim additionally recites executable instructions to perform a task using the neural network, which the Examiner believes is also taught: see, e.g., page 375, section 2, first paragraph, discussing the use of a trained neural network to "be used for data analysis."

Regarding claim 33, the claim includes the same or similar limitations as claim 1, discussed above, and is therefore rejected under the same rationale.

Regarding claim 36, Chow teaches the method of claim 1, wherein the learning rule is a local learning rule (page 377: "The weight adaptation in the CSG algorithm is adapted slightly within the winner neuron and its direct neighboring neurons.", which the Examiner reasons indicates local learning, subject to a type of neighbor proximity).

Claim Rejections - 35 USC § 103

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

11. Claims 12-13 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Chow in view of previously cited CN 104700121 A ("Li").

Regarding claim 12, Chow teaches the method of claim 1, as discussed above. Chow does not teach the further limitation wherein the lower first layer and/or the higher second layer comprises a non-rectangular geometry. The Examiner relies upon Li to teach what Chow otherwise lacks; see, e.g., Li's [0004], discussing the use of spherical geometries in a self-organizing map. Chow and Li both relate to self-organizing neural network structures aimed at improving performance; hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to vary the geometry as Li considers, with a reasonable expectation of success, to try to improve performance as mentioned in [0004].

Regarding claim 13, Chow in view of Li teaches the method of claim 12, as discussed above. The references teach the further limitation wherein the non-rectangular geometry comprises an annulus geometry, a spherical geometry, and/or disk geometry, wherein the disk geometry has a hyperbolic distribution (Li's [0004], as discussed above for claim 12). The motivation for combining the references is as discussed above in relation to claim 12.

Regarding claim 25, Chow teaches the method of claim 1, as discussed above. Chow does not teach the further limitation comprising training a classifier connected to the plurality of layers and/or the neural network. The Examiner relies upon Li to teach what Chow otherwise lacks; see, e.g., Li's [0029], discussing a classification method related to its self-organizing map/network. Chow and Li both relate to self-organizing neural network structures aimed at improving performance; hence, the references are similarly directed and therefore analogous. Chow's page 375, section 2, first paragraph, discusses the use of its self-created and self-organized trained neural network "... for data analysis." Li, as mentioned here, is more explicitly directed to classification, which the Examiner reasons is a type of data analysis.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply a self-created and self-organizing map/network, per Chow, to a data analysis task directed to classification, as is widely practiced in the state of the art, with a reasonable expectation of success.

12. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Chow in view of Non-Patent Literature "Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks" ("Lonardoni").

Regarding claim 21, Chow teaches the method of claim 1, as discussed above. Chow does not teach the further limitation comprising generating the spatiotemporal waves based on noisy interactions between nodes of the first layer of the plurality of layers of the neural network. The Examiner relies upon Lonardoni to teach what Chow otherwise lacks; see, e.g., Lonardoni's pages 6-7, sections titled "Simulations of spontaneous and pharmacologically manipulated neuronal culture activities" and "Simulated spontaneous spiking activity in the neuronal network model," discussing the reproduction of observed, pharmacologically manipulated neural network activity in a simulated spiking model, where the observations include spatiotemporal patterns among the neurons that the Examiner reasons are equivalent to the recited "spatiotemporal waves based on noisy interactions" between nodes of a same layer. Chow and Lonardoni both relate to self-organizing neural network structures; hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a rigorous reproduction of observed neural behavior, per Lonardoni, into the optimized, comparable neural network contemplated by Chow, with a reasonable expectation of success.

13. Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Chow in view of Non-Patent Literature "Neuromorphic Computing: The Potential for High-Performance Processing in Space" ("Bersuker").

Regarding claim 35, Chow teaches the method of claim 1, as discussed above. Chow does not teach the further limitation wherein the spatiotemporal waves in a lower first layer of the plurality of layers further comprises at least one malfunctioning node. The Examiner relies upon Bersuker to teach what Chow otherwise lacks; see, e.g., Bersuker's page 8, right column, second-to-last paragraph, discussing the capability of neuromorphic networks (such as Chow's) to be fault tolerant. Chow and Bersuker both relate to self-organizing neural network structures; hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement a neural network in the manner described by Chow so as to produce a fault-tolerant network as engineered by Bersuker, with a reasonable expectation of success.
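The claim 21 limitation turns on waves that emerge from noise rather than from a structured stimulus. A minimal sketch of that behavior (our illustration, not a reproduction of Chow's CSG or Lonardoni's spiking model) uses a Greenberg-Hastings-style excitable grid in which rare spontaneous firings nucleate traveling rings of activity; all parameters are assumptions.

```python
# Illustrative sketch: spatiotemporal waves nucleated by noisy interactions
# on an excitable grid. States: 0 = quiescent, 1 = firing, 2 = refractory.
import numpy as np

rng = np.random.default_rng(0)

def step(grid, p_noise=0.001):
    """One update: quiescent cells fire if a neighbor fired (or by noise);
    firing cells become refractory; refractory cells recover."""
    fired = (grid == 1)
    neighbor_fired = (np.roll(fired, 1, 0) | np.roll(fired, -1, 0) |
                      np.roll(fired, 1, 1) | np.roll(fired, -1, 1))
    noise = rng.random(grid.shape) < p_noise
    new = np.zeros_like(grid)
    new[(grid == 0) & (neighbor_fired | noise)] = 1   # excitation spreads
    new[grid == 1] = 2                                # firing -> refractory
    new[grid == 2] = 0                                # refractory -> quiescent
    return new

grid = np.zeros((64, 64), dtype=int)
for t in range(200):
    grid = step(grid)
print((grid == 1).sum())  # nonzero once noise has nucleated traveling waves
```

Each spontaneous firing expands as a ring of activity with a refractory wake behind it, which is the qualitative picture of a noise-initiated spatiotemporal wave.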
Conclusion

14. The prior art made of record and not relied upon is considered pertinent to Applicants' disclosure:
CA 2642041 A1
CA 2977126 A1
CN 103279958 A
JP 2007518541 A
Non-Patent Literature "Inference of neuronal functional circuitry with spike-triggered non-negative matrix factorization"
Non-Patent Literature "Organizing Sequential Memory in a Neuromorphic Device Using Dynamic Neural Fields"
Non-Patent Literature "Scalable NoC-based Neuromorphic Hardware Learning and Inference"
Non-Patent Literature "Let's code a Neural Network in plain NumPy"
Non-Patent Literature "Simulation of networks of spiking neurons: A review of tools and strategies"
Non-Patent Literature "Artificial Neural Network (ANN) with Practical Implementation"
Non-Patent Literature "A Survey of Neuromorphic Computing and Neural Networks in Hardware"

15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA, whose telephone number is (571) 272-7207. The examiner can normally be reached M-F 8am-5pm CST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SHOURJO DASGUPTA/
Primary Examiner, Art Unit 2144

[1] The Examiner understands this as "wherein."

Prosecution Timeline

Dec 18, 2020
Application Filed
Feb 03, 2022
Response after Non-Final Action
Apr 17, 2024
Non-Final Rejection — §102, §103, §112
Aug 19, 2024
Response Filed
Nov 14, 2024
Non-Final Rejection — §102, §103, §112
May 16, 2025
Response Filed
Jun 05, 2025
Final Rejection — §102, §103, §112
Dec 03, 2025
Request for Continued Examination
Dec 11, 2025
Response after Non-Final Action
Dec 22, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591802
GENERATING ESTIMATES BY COMBINING UNSUPERVISED AND SUPERVISED MACHINE LEARNING
2y 5m to grant; granted Mar 31, 2026
Patent 12586371
SENSOR DATA PROCESSING
2y 5m to grant; granted Mar 24, 2026
Patent 12578979
VISUALIZATION OF APPLICATION CAPABILITIES
2y 5m to grant; granted Mar 17, 2026
Patent 12572782
SCALABLE AND COMPRESSIVE NEURAL NETWORK DATA STORAGE SYSTEM
2y 5m to grant; granted Mar 10, 2026
Patent 12549397
MULTI-USER CAMERA SWITCH ICON DURING VIDEO CALL
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 65% (99% with interview, a +38.1% lift)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
