Prosecution Insights
Last updated: April 19, 2026
Application No. 19/026,276

REAL-TIME NEURAL NETWORK ARCHITECTURE ADAPTATION THROUGH SUPERVISED NEUROGENESIS DURING INFERENCE OPERATIONS

Final Rejection: §103, §112, §DP
Filed: Jan 16, 2025
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: AtomBeam Technologies Inc.
OA Round: 2 (Final)
Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% among resolved cases with interview (strong)
Typical Timeline: 4y 8m average prosecution; 47 applications currently pending
Career History: 115 total applications across all art units
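
A minimal sketch of how the two headline figures above can be derived from case counts. Only the 30 granted / 68 resolved totals come from this page; the with/without-interview split below is a hypothetical placeholder used purely to show the calculation:

```python
# Only the 30/68 totals are from the page; the interview split is a placeholder.
granted, resolved = 30, 68
career_allow_rate = granted / resolved            # 0.441 -> shown as 44%

# Hypothetical split to illustrate how an interview lift would be measured:
with_iv    = {"granted": 14, "resolved": 20}      # placeholder counts
without_iv = {"granted": 16, "resolved": 48}      # placeholder counts

lift = (with_iv["granted"] / with_iv["resolved"]
        - without_iv["granted"] / without_iv["resolved"])

print(f"Career allow rate: {career_allow_rate:.1%}")   # 44.1%
print(f"Interview lift:    {lift:+.1%}")               # illustrative; the page reports +33.4%
```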

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates; based on career data from 68 resolved cases.
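
With the original chart gone, the Tech Center baseline can still be recovered from the offsets listed above. A quick check suggests the dashboard compares every statute against roughly the same ~40% TC 2100 estimate:

```python
# Recover the implied Tech Center baseline from the per-statute figures above.
examiner_rate = {"§101": 16.1, "§103": 56.7, "§102": 12.7, "§112": 12.9}      # percent
delta_vs_tc   = {"§101": -23.9, "§103": 16.7, "§102": -27.3, "§112": -27.1}   # percentage points

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(implied_tc_avg)   # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```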

Office Action

§103 §112 §DP
DETAILED ACTION

1. This office action is in response to the amendment to Application No. 19/026,276 filed on 11/24/2025. Claims 1-24 are presented for examination and are currently pending.

Response to Arguments

2. The double patenting rejection has been withdrawn in light of the terminal disclaimer filed and approved on 11/20/2025. The claim amendment of 11/24/2025 has overcome the 112(f) interpretation of 09/17/2025. As a result, the 112(f) interpretation is withdrawn. The claim amendment of 11/24/2025 has overcome the 112(b) rejections of 09/17/2025. As a result, the 112(b) rejections are withdrawn. However, a new 112(b) rejection has been issued.

The Applicant's argument on page 11 that "The hierarchical supervisory network continuously gathers activation data from multiple layers, maintains spatiotemporal maps using adaptive kernel functions, detects processing bottlenecks through information theory metrics, and implements neurogenesis operations to reconfigure the network while inference continues. These coordinated operations produce improvements in processing capacity, stability, and adaptability of neural computations" is persuasive because it improves the functioning of the computer. As a result, the 101 rejection has been withdrawn.

It is noted that the Applicant's arguments have been considered but are moot in light of the new references applied to the independent claims. On pages 17-18, the Applicant argues that "The references don't teach "fuse codewords of dissimilar data types into unified codeword representations. Fusion of heterogeneous codewords is not disclosed. Eddahech processes single-type "workload values" for subsystem predictions (Fig. 5, p. 51). Guo processes only "acoustic features" (Mel-spectrograms, § II, p. 1812) without multi-type fusion-its multiple codebooks merely divide "the latent feature into G groups" (§ III-A, p. 1813) of the same data type. Meng handles only "3D point coordinates" (§ 2, p. 2362). None teach combining codewords from different data types into unified representations. The references don't teach "provide the unified codeword representations to the core neural network for processing" Since none of the references teach creating unified codeword representations from dissimilar types, they cannot teach providing such representations to a neural network for processing". The argument above is not persuasive because Guo teaches fuse codewords of dissimilar data types (In Fig. 5, z which is an output from speech signals s node is fused with P which is an output from text sequence t node, pg. 1815. The Examiner notes that z is fused with p; z and p are dissimilar data types) into unified codeword representations (In Fig. 5, codebook C from speech signal s node is received by text sequence; This model is first trained to minimize the loss function Lmsmc, and then provides MSMCR Z and codebook group C for synthesis and prediction, pg. 1815, left col., last para.); and provide the unified codeword representations to the core neural network for processing (The output sequence is also processed by another neural network based module X for prediction, pg. 1814, right col., last para.). However, the Eddahech and Meng references no longer apply to the independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 
112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. 3. Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 2 recites “the neurogenesis control system” which appears to lack antecedent basis. it is not clear which neurogenesis control system is referred to. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 4. Claims 1, 4-13, 16-24 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 31, no. 6, pp. 902-918, Dec. 2001, doi: 10.1109/3477.969494) in view of Shrivastava et al. (US20200311548) and further in view of Guo et al. ("MSMC-TTS: Multi-stage multi-codebook VQ-VAE based neural TTS." IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824) Regarding claim 1, Kasabov teaches a computer system (The EFuNN (evolving fuzzy neural networks) methods and the ECOS (evolving connectionist systems) can be implemented in software and/or in hardware with the use of either conventional or new computational techniques … This includes (1) computer systems that learn speech and language, pg. 916, right col., last para.), maintain a core neural network (Fig. 3. Evolving fuzzy neural network EFuNN, pg. 904) comprising a plurality of interconnected neurons arranged in layers (EFuNNs have a five-layer structure, pg. 903, right col., last para.), wherein the core neural network is configured to process codeword representations (396 codebook vectors and 500 training iterations on the whole training set (pg. 916, left col., first para.); … input vector is propagated through the EFuNN, pg. 908, left col., fourth para.); execute a hierarchical supervisory network (A block diagram of the ECOS framework is given in Fig. 2. ECOS are multilevel, multimodular structures where many neural network modules (NNM) are connected with interconnections and intraconnections, pg. 903, left col., third para.) comprising: a plurality of low-level supervisory nodes (2) Representation (Memory) Part Where Information (Patterns) are Stored, pg. 903, left col., third para., Fig. 2), each monitoring a subset of neurons in the core neural network (It is a multimodular, evolving structure of NNMs organized in groups, pg. 903, left col., third para.); at least one mid-level supervisory node ((5) Knowledge-Based Part:, 903, right col., Fig. 
2) monitoring a group of low-level supervisory nodes (This part extracts compressed abstract information from the representation modules and from the decision modules in different forms of rules, abstract associations, etc. This part requires that the NNM should operate in a knowledge-based learning mode and provide knowledge about the problem under consideration, pg. 903, right col., Fig. 2); and at least one high-level supervisory node ((6) Adaptation Part:, 903, right col., Fig. 2) monitoring one or more mid-level supervisory nodes (This part uses statistical, evolutionary (e.g., genetic algorithms (GAs) …) and other techniques to evaluate and optimize the parameters of the ECOS during its operation, 903, right col., Fig. 2); wherein each supervisory node ((2) Representation (Memory) Part, (5) Knowledge-Based Part, and (6) Adaptation Part, Fig. 2) is configured to: collect activation data and information flow patterns from nodes monitored by the supervisory node (The EFuNN system was explained so far with the use of one rule node activation (the winning rule node for the current input data) (pg. 906, right col., last para.); Fig. 5(b) shows how the center … of the rule node adjusts (after learning each new data point) to its new positions … when one pass learning is applied. Fig. 5(c) shows how the rule node position would move to new positions …, … and if another pass of learning was applied, pg. 906, left col., third para.); perform statistical and spatiotemporal analysis on the collected data (EFuNNs can learn spatial-temporal sequences in an adaptive way through one pass learning and automatically adapt their parameter values as they operate (abstract); As a statistical model the EFuNN performs clustering of the input space, pg. 915, left col., second to the last para.); and make decisions regarding a plurality of architectural modifications based on the statistical and spatiotemporal analysis (The ratio spatial-similarity/temporal-correlation can be balanced for different applications through two parameters and Ss and Tc (pg. 906, right col., second para.); EFuNNs allow for meaningful rules to be extracted and inserted at any time of the operation of the system thus providing the knowledge about the problem and reflecting changes in its dynamics. In this respect, the EFuNN is a flexible, online, knowledge engineering and statistical model, pg. 915, left col., second to the last para.); maintain spatiotemporal maps that continuously track activation patterns of neurons in the core neural network (As EFuNN allows for continuous training on new data, further testing and also training of the EFuNN on the test data in an online mode leads to a significant improvement of the accuracy, pg. 916, left col., second para.) using adaptive kernel functions, wherein the maps are updated in real-time (a slightly modified version of the algorithms described below is applied, mainly in terms of measuring Euclidean distance and using Gaussian activation functions (pg. 904, right col., second para.); incremental learning on all data is possible (in many cases of what time series prediction it is) EFuNNs should be continuously evolved in an adaptive, life-long learning mode, always improving their performance, pg. 
911, right col., second para.); determine optimal placement of new neurons using geometric optimization that considers network topology (In terms of online neuron allocation, the EFuNN model is similar to the resource allocating network (RAN) … The RAN model allocates a new neuron for a new input example if the input vector is not close in the input space to any of the already allocated radial basis neurons (centers), pg. 902, right col., last para.), information density distribution (calculating in an online mode the histogram of each variable and placing the centers of its MFs (membership functions) at the middle of the areas that are of highest density, pg. 910, left col., second para.), and connectivity patterns (After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated, pg. 907, left col., third para.); manage real-time integration of new neurons during inference operations while maintaining operational continuity (At any time (phase) of the evolving (learning) process, fuzzy or exact rules can be inserted and extracted. Insertion of fuzzy rules is achieved through setting a new rule node for each new rule, such that the connection weights and of the rule node represent this rule, pg. 908, left col., last para.); implement the plurality of architectural modifications decided by the hierarchical supervisory network (Adaptation can be achieved through the analysis of the behavior of the system or through a feedback connection from higher level modules in the ECOS architecture, pg. 915, left col., first para.); execute neurogenesis operations through controlled connection establishment (New connections and new neurons are created during the operation of the system, abstract); manage gradual activation of new neurons while maintaining network stability (a weighted sum input function and a saturated linear activation function is used for the neurons to calculate the membership degrees to which the output vector associated with the presented input vector belongs to each of the output MFs (membership functions), pg. 904, right col., first para.); allocate codewords to input data, wherein codewords are mapped to entries in a dynamically maintained codebook (The LVQ model has the following parameter values: 396 codebook vectors and 500 training iterations on the whole training set (pg. 916, left col., first para.); … input vector is propagated through the EFuNN, pg. 908, left col., fourth para.); Kasabov does not explicitly teach comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on nontransitory machine-readable storage media that: detect processing bottlenecks in information flow between layers of the core neural network using information theory metrics including entropy rate calculations and channel capacity estimations; fuse codewords of dissimilar data types into unified codeword representations; and provide the unified codeword representations to the core neural network for processing. Shrivastava teaches a computer system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on nontransitory machine-readable storage media that (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. 
The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit) [0090]; Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091]): detect processing bottlenecks in information flow between layers of the core neural network (The described approach includes identifying the features layer as a bottleneck, where the target objective encourages information flow [0034]) using information theory metrics including entropy rate calculations (In some implementations, generating the intermediate representation 135 includes applying an entropy term to the loss function L to obtain an augmented loss function. The intermediate representation 135 can be generated by using the augmented loss function to compute features at the bottleneck layer of the neural network [0049]) and channel capacity estimations (In case of bottlenecks with a spatial configuration, system 100 is configured such that spatial elements within a same channel have the same distribution (e.g., parameters of the density model were shared across space [0058]); It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Shrivastava for the benefit of identifying the features layer as a bottleneck, where the target objective encourages information flow (Shrivastava [0034]) Modified Kasabov does not explicitly teach fuse codewords of dissimilar data types into unified codeword representations; and provide the unified codeword representations to the core neural network for processing. Guo teaches fuse codewords of dissimilar data types (In Fig. 5, z which is an output from speech signals s node is fused with P which is an output from text sequence t node, pg. 1815. The Examiner notes that z is fused with p, z and p are dissimilar datatypes) into unified codeword representations (In Fig. 5, codebook C from speech signal s node is received by text sequence; This model is first trained to minimize the loss function Lmsmc, and then provides MSMCR Z and codebook group C for synthesis and prediction, pg. 1815, left col., last para.); and provide the unified codeword representations to the core neural network for processing (The output sequence is also processed by another neural network based module X for prediction, pg. 1814, right col., last para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Guo for the benefit of using VQ-VAE (Vector-Quantized Variational AutoEncoder) which aims to learn a discrete latent representation from target data with an encoder-decoder model (pg. 1812, right col., section A. 
Vector Quantized Variational AutoEncoder) Regarding claim 4, Modified Kasabov teaches the computer system of claim 1, Shrivastava teaches wherein detecting processing bottlenecks (The described approach includes identifying the features layer as a bottleneck, where the target objective encourages information flow [0034]) comprises calculation of local entropy rates for constraint identification (In some implementations, generating the intermediate representation 135 includes applying an entropy term to the loss function L to obtain an augmented loss function. The intermediate representation 135 can be generated by using the augmented loss function to compute features at the bottleneck layer of the neural network [0049]) alongside channel capacity estimation (In case of bottlenecks with a spatial configuration, system 100 is configured such that spatial elements within a same channel have the same distribution (e.g., parameters of the density model were shared across space [0058]) for regional saturation detection, using dynamic thresholds that adapt based on network state and performance requirements (using an adaptive probabilistic approach [0084]; the prediction performance can be improved by increasing the number of possible states …For example, when the number of possible states in {circumflex over (z)} is reduced (e.g., by scaling down the outputs of f1), the bit rate R can be reduced [0049]. The number of possible states is the threshold). The same motivation to combine independent claim 1 applies here. Regarding claim 5, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the geometric optimization for new neuron placement employs a comprehensive analysis incorporating local network topology (In terms of online neuron allocation, the EFuNN model is similar to the resource allocating network (RAN) … The RAN model allocates a new neuron for a new input example if the input vector is not close in the input space to any of the already allocated radial basis neurons (centers), pg. 902, right col., last para.), information density distribution (calculating in an online mode the histogram of each variable and placing the centers of its MFs (membership functions) at the middle of the areas that are of highest density, pg. 910, left col., second para.), existing connectivity patterns (After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated, pg. 907, left col., third para.), and activity gradient fields in a unified optimization framework (centers will be adapted to minimize the error for the example through a gradient descent algorithm. In terms of adaptive optimization of many individual linear units, EFuNN is close to the receptive field weight regression (RFWR) model). Regarding claim 6, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the software instructions further: implement connection cloning that copies connection patterns from parent neurons with controlled mutations to the copied patterns; establish adaptive random connections with short-time-scale plasticity (Adaptation can be achieved through the analysis of the behavior of the system or through a feedback connection from higher level modules in the ECOS architecture, pg. 
914, left col., first para.): and compute connectivity patterns between neurons based on information flow analysis (The learned temporal associations can be used to support the activation of rule nodes based on temporal pattern similarity. Here, temporal dependencies are learned through establishing structural links, pg. 906, right col., second para.). Regarding claim 7, Modified Kasabov teaches the computer system of claim 1, Guo teaches wherein the core neural network is a latent transformer model (VQ-VAE aims to learn a discrete latent representation from target data with an encoder-decoder model (pg. 1812, right col., last para.); MSMC-VQ-VAE is implemented based on Feed-Forward Transformer in FastSpeech, pg. 1816, right col., third para.). Regarding claim 8, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the subsystem comprises software instructions further: integrate error detection and recovery mechanisms that continuously monitor network stability during neurogenesis while implementing rollback procedures (If the chosen error threshold E was smaller (e.g., 0.05 or 0.02) more rule nodes would have been evolved and better prediction accuracy could have been achieved (pg. 911, left col., third para.); After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated, pg. 907, left col., third para.); and ensure performance improvements through systematic modification evaluation (Both changing the number and the shape of MFs may be needed for a better performance of an EFuNN, pg. 910, left col., second para.). Regarding claim 9, Modified Kasabov teaches the computer system of claim 1, wherein the low-level supervisory nodes (2) Representation (Memory) Part Where Information (Patterns) are Stored, pg. 903, left col., third para., Fig. 2) are configured to initiate fine-grained neurogenesis operations for individual neurons or small clusters (It is a multimodular, evolving structure of NNMs organized in groups. This is the most important part of ECOS. One realization of a NNM is the EFuNN, presented in the next section (pg. 903, left col., fourth para.); multimodular structures where many neural network modules (NNM) are connected with interconnections and intraconnections, pg. 903, left col., third para.). Regarding claim 10, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the mid-level supervisory nodes ((5) Knowledge-Based Part:, 903, right col., Fig. 2) are configured to coordinate neurogenesis operations across local regions of the network (Another knowledge-based technique applied to EFuNNs is rule node aggregation. Through this technique several rule nodes are merged into one as it is shown in Fig. 7(a)–(c) on an example of three rule nodes , and (only the input space is shown there), pg. 908, right col., section D. Rule Node Aggregation and Membership Function Modification). Regarding claim 11, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the high-level supervisory nodes ((6) Adaptation Part:, 903, right col., Fig. 2) are configured to manage global resource allocation for neurogenesis operations (EFuNNs are adaptive rule-based systems. Manipulating rules is essential for their operation. This includes rule insertion, rule extraction, and rule adaptation, pg. 908, left col., last para.). 
Regarding claim 12, Modified Kasabov teaches the computer system of claim 1, Kasabov teaches wherein the supervisory nodes at different levels ((2) Representation (Memory) Part, (5) Knowledge-Based Part, and (6) Adaptation Part, Fig. 2) coordinate neurogenesis decisions through information exchange about resource availability and network capacity (EFuNNs deal with the process of interactive, online adaptive learning of a single system that evolves from incoming data or through interaction with the environment. The system can either have its parameters (genes) predefined or self-adjusted during the learning process starting from some initial values, pg. 916, left col., second to the last para.). Regarding claim 13, claim 13 is similar to claim 1. It is rejected in the same manner and reasoning applying. Further, Kasabov teaches computer-implemented method (The EFuNN (evolving fuzzy neural networks) methods and the ECOS (evolving connectionist systems) can be implemented in software and/or in hardware with the use of either conventional or new computational techniques … This includes (1) computer systems that learn speech and language, pg. 916, right col., last para.) for adapting neural network architecture in real-time time series forecasting (The example here demonstrates that an EFuNN can learn a complex chaotic function online evolving from one-pass data propagation. But the real strength of the EFuNNs is in learning time series that change their dynamics through time, e.g., changing values for the parameter, pg. 911, left col., fourth para.), comprising the steps of: maintaining a core neural network (Fig. 3. Evolving fuzzy neural network EFuNN, pg. 904) comprising a plurality of interconnected neurons arranged in layers (EFuNNs have a five-layer structure, pg. 903, right col., last para.), Regarding claim 16, claim 16 is similar to claim 4. It is rejected in the same manner and reasoning applying. Regarding claim 17, claim 17 is similar to claim 5. It is rejected in the same manner and reasoning applying. Regarding claim 18, Modified Kasabov teaches the method of claim 13, Kasabov teaches wherein implementing the step of executing neurogenesis operations comprises a flexible connection strategy system combining connection cloning with controlled mutation from parent neurons (The supervised learning in EFuNN is based on the above explained principles, so when a new data example is presented, the EFuNN either creates a new rule node to memorize the two input and output fuzzy vectors and or adjusts the winning rule node (or rule nodes, respectively). After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated, pg. 907, left col., second para.), adaptive random connections with short-time-scale plasticity, and computed connectivity based on information flow analysis (EFuNNs deal with the process of interactive, online adaptive learning of a single system that evolves from incoming data or through interaction with the environment. The system can either have its parameters (genes) predefined or self-adjusted during the learning process starting from some initial values, pg. 916, left col., second to the last para.). Regarding claim 19, claim 19 is similar to claim 7. It is rejected in the same manner and reasoning applying. Regarding claim 20, claim 20 is similar to claim 8. It is rejected in the same manner and reasoning applying. Regarding claim 21, claim 21 is similar to claim 9. 
It is rejected in the same manner and reasoning applying. Regarding claim 22, claim 22 is similar to claim 10. It is rejected in the same manner and reasoning applying. Regarding claim 23, claim 23 is similar to claim 11. It is rejected in the same manner and reasoning applying.

Regarding claim 24, Modified Kasabov teaches the method of claim 13; Kasabov teaches wherein the step of implementing a second plurality of architectural modifications determining architectural modifications (Another knowledge-based technique applied to EFuNNs is rule node aggregation … Through this technique several rule nodes are merged into one as it is shown in Fig. 7(a)–(c) on an example of three rule nodes (only the input space is shown there), pg. 908, right col., section D. Rule Node Aggregation and Membership Function Modification) comprises coordinating neurogenesis decisions through information exchange about resource availability and network capacity across different levels of the hierarchical supervisory network (Through node creation and consecutive aggregation, an EFuNN system can adjust over time to changes in the data stream and at the same time preserve its generalization capabilities, pg. 909, right col., second to the last para.).

5. Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 31, no. 6, pp. 902-918, Dec. 2001, doi: 10.1109/3477.969494) in view of Shrivastava et al. (US20200311548) in view of Guo et al. ("MSMC-TTS: Multi-stage multi-codebook VQ-VAE based neural TTS." IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824) and further in view of Meng et al. ("Parameterization of point-cloud freeform surfaces using adaptive sequential learning RBF networks." Pattern Recognition 46.8 (2013): 2361-2375).

Regarding claim 2, Modified Kasabov teaches the computer system of claim 1; Kasabov teaches wherein the neurogenesis control system (New connections and new neurons are created during the operation of the system, abstract) but does not explicitly teach maintains activity maps using topology-aware distance metrics that account for both structural and functional relationships between neurons. Meng teaches maintains activity maps using topology-aware distance metrics that account for both structural and functional relationships between neurons (This criterion aims to ensure that neurons are inserted at a distance of at least DK from each other, so as to guarantee a well spread and balanced neuron distribution in the network space, pg. 2364, right col., first full para. The Examiner notes that the activity map is maintained in the network space using distance). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teaching of Meng for the benefit of a neural network which solves the problem of point-cloud surface parameterization, employs a dynamic structure through adaptive learning, allows Gaussian neurons to be fitted according to the novelty of inputs while also being fully adjustable on their locations, widths and weights, and achieves a high compression ratio and a comparable level of accuracy (Meng, conclusion).

Regarding claim 14, claim 14 is similar to claim 2. 
It is rejected in the same manner and reasoning applying.

6. Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 31, no. 6, pp. 902-918, Dec. 2001, doi: 10.1109/3477.969494) in view of Shrivastava et al. (US20200311548) in view of Guo et al. ("MSMC-TTS: Multi-stage multi-codebook VQ-VAE based neural TTS." IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824) and further in view of Eddahech et al. ("Hierarchical neural networks based prediction and control of dynamic reconfiguration for multilevel embedded systems", Journal of Systems Architecture, Volume 59, issue 1, 2013, pages 48-59).

Regarding claim 3, Modified Kasabov teaches the computer system of claim 1. Modified Kasabov does not explicitly teach wherein the statistical and spatiotemporal analysis comprises simultaneous monitoring of multiple time scales together with gradient field computation for tracking information movement and velocity field analysis that combines structural weights with functional activations. Eddahech teaches wherein the statistical and spatiotemporal analysis comprises simultaneous monitoring of multiple time scales together with gradient field computation for tracking information movement and velocity field analysis that combines structural weights with functional activations (The number of neurons in the hidden layer and activation functions may also be modified to fit the training data more accurately (pg. 58, left col., last para.); The network weights are updated using the following gradient descent rules (pg. 50, right col., second para.); the prediction window and the transfer function are the following: Number of inputs neurons: 5, Number of Hidden neurons: 7, Prediction window: 3, Training algorithm: error back propagation training rule, The network transfer function is a sigmoid function, pg. 51, left col., third para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teaching of Eddahech for the benefit of using reconfigurable architectures more efficiently with dynamic allocation, which shows many advantages such as performance improvement as well as energy saving (Eddahech, pg. 48, right col., second to the last para.).

Regarding claim 15, claim 15 is similar to claim 3. It is rejected in the same manner and reasoning applying.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.G./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148
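
For orientation, the remaining §103 dispute turns on two limitations of the independent claims: detecting processing bottlenecks with information theory metrics (entropy rate, channel capacity) and fusing codewords of dissimilar data types into unified representations. The sketch below is only an illustrative reading of that claim language; every name, shape, and threshold is hypothetical and nothing here is taken from the Applicant's specification or the cited references.

```python
import numpy as np

# Illustrative reading of the two disputed limitations; all names, shapes,
# and thresholds are hypothetical.

def layer_entropy_rate(activations: np.ndarray, bins: int = 32) -> float:
    """Rough per-layer entropy estimate (bits), one way to quantify information flow."""
    hist, _ = np.histogram(activations, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def detect_bottlenecks(layer_activations: list[np.ndarray], capacity_bits: float) -> list[int]:
    """Flag layers whose estimated entropy rate approaches an assumed channel capacity."""
    return [i for i, acts in enumerate(layer_activations)
            if layer_entropy_rate(acts) > 0.9 * capacity_bits]

def fuse_codewords(speech_codeword: np.ndarray, text_codeword: np.ndarray) -> np.ndarray:
    """Fuse codewords of dissimilar data types into one unified representation.
    Plain concatenation here; the claim language does not dictate the fusion operator."""
    return np.concatenate([speech_codeword, text_codeword])

# Toy usage with random activations and dummy codewords.
layers = [np.random.randn(1024) for _ in range(3)]
print(detect_bottlenecks(layers, capacity_bits=4.0))
print(fuse_codewords(np.ones(8), np.zeros(4)).shape)  # (12,)
```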

Prosecution Timeline

Jan 16, 2025: Application Filed
Mar 04, 2025: Response after Non-Final Action
Sep 08, 2025: Non-Final Rejection (§103, §112, §DP)
Nov 24, 2025: Response Filed
Jan 02, 2026: Final Rejection (§103, §112, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586: SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530583: VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK (granted Jan 20, 2026; 2y 5m to grant)
Patent 12511528: NEURAL NETWORK METHOD AND APPARATUS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12367381: CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE (granted Jul 22, 2025; 2y 5m to grant)
Patent 12314847: TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS (granted May 27, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: Moderate
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
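
As a sanity check, the 78% figure appears to follow from the career allow rate plus the interview lift applied additively in percentage points; that additive step is an assumption about the dashboard's method, not something it states explicitly:

```python
# Assumes the +33.4 pt interview lift is added to the 44% base rate.
base_rate = 30 / 68                      # career allow rate, ~44.1%
with_interview = base_rate + 0.334       # ~77.5%, displayed as 78%
print(f"{base_rate:.0%} -> {with_interview:.0%}")   # 44% -> 78%
```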
