Prosecution Insights
Last updated: April 19, 2026
Application No. 18/928,022

NETWORK OF SUPERVISORY NEURONS FOR GLOBALLY ADAPTIVE DEEP LEARNING CORE

Non-Final OA — §103, §112
Filed: Oct 26, 2024
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: AtomBeam Technologies Inc.
OA Round: 3 (Non-Final)

Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (30 granted of 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% among resolved cases with interview (strong)
Avg Prosecution: 4y 8m typical timeline; 47 applications currently pending
Total Applications: 115 across all art units (career history)

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates, based on career data from 68 resolved cases.

Office Action

§103, §112
DETAILED ACTION

This Office action is in response to the submission filed on 11/10/2025 in Application No. 18/928,022. Claims 1-21 are presented for examination and are currently pending. Applicant's arguments have been carefully and respectfully considered.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.

Response to Arguments

4. The claim amendment of 11/10/2025 has overcome the 112(f) interpretation of 08/28/2025; as a result, the 112(f) interpretation is withdrawn. The claim amendment of 11/10/2025 has overcome the 112(b) rejections of 08/28/2025; as a result, the 112(b) rejections are withdrawn. The claim amendment of 11/10/2025 has overcome the 112(a) rejections of 08/28/2025; as a result, the 112(a) rejections are withdrawn. However, a new 112(b) rejection has been issued.

The Applicant's argument on page 9 of the remarks, that "The hierarchical supervisory network continuously gathers activation signals from multiple layers, computes their temporal and spatial spectra, and instructs a modification subsystem to reconfigure connections while inference continues. These coordinated operations produce improvements in latency, stability, and throughput of neural computations. The claims are confined to one particular technical arrangement, a multi-level supervisory architecture performing spectral analysis and dynamic, non-interruptive structural modification," is persuasive because it identifies a technological improvement to computer technology. As a result, the 101 rejection has been withdrawn.

Applicant's further arguments have been considered but are moot in view of the new grounds of rejection. The Examiner is withdrawing the rejections in the previous Office action of 08/28/2025 because the Applicant's amendments necessitated the new grounds of rejection presented in this Office action. Furthermore, Kasabov in view of Guo and further in view of Shrivastava has been applied to the independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

5. Claims 1-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1 and 12 recite "wherein each supervisory node is configured to". It is unclear which supervisory node is being relied upon, since the low-level supervisory nodes, the mid-level supervisory node, and the high-level supervisory node have different functionalities.

Claim 9 recites "the modification subsystem". Claim 9 depends on claim 1. It is unclear which modification subsystem is being referred to.

Claim 10 recites "the codeword allocation subsystem". Claim 10 depends on claim 1. It is unclear which codeword allocation subsystem is being referred to.

Claims 2-11 and 13-21, which are not specifically mentioned, are rejected due to their dependency.
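The new indefiniteness issue turns on claim 1 attributing one behavior to "each supervisory node" while the three tiers are defined with different roles. For orientation, a minimal sketch of that hierarchy as the claims describe it; all class and attribute names here are hypothetical, not taken from the application:

```python
# Hypothetical sketch of the claim 1 hierarchy; names are illustrative only.
# Every tier shares the collect interface that "each supervisory node is
# configured to" recites, while the monitored scope differs per tier
# (the distinction at the heart of the 112(b) rejection).
from dataclasses import dataclass, field
from typing import List

@dataclass
class SupervisoryNode:
    # Recent activation traces from the monitored elements, one list per
    # element (neurons for low-level nodes, child nodes for higher tiers).
    traces: List[List[float]] = field(default_factory=list)

    def collect_activation_data(self) -> List[List[float]]:
        """Shared behavior attributed to 'each supervisory node'."""
        return self.traces

class LowLevelNode(SupervisoryNode):
    """Monitors a subset of core-network neurons; fine-grained edits."""

class MidLevelNode(SupervisoryNode):
    """Monitors a group of low-level nodes; local-topology edits."""

class HighLevelNode(SupervisoryNode):
    """Monitors mid-level nodes; layer- or subsystem-scale changes."""
```

On this reading, the shared collect/analyze interface is what the "each supervisory node" language plausibly refers to, while the scope of what is monitored distinguishes the tiers.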
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-6, 8-10, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 31, no. 6, pp. 902-918, Dec. 2001, doi: 10.1109/3477.969494) in view of Guo et al. ("MSMC-TTS: Multi-stage multi-codebook VQ-VAE based neural TTS," IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824) and further in view of Shrivastava et al. (US 2020/0311548).

Regarding claim 1, Kasabov teaches a computer system (The EFuNN (evolving fuzzy neural networks) methods and the ECOS (evolving connectionist systems) can be implemented in software and/or in hardware with the use of either conventional or new computational techniques ... This includes (1) computer systems that learn speech and language, pg. 916, right col., last para.) configured to maintain a core neural network (Fig. 3, Evolving fuzzy neural network EFuNN, pg. 904) comprising a plurality of interconnected neurons arranged in layers (EFuNNs have a five-layer structure, pg. 903, right col., last para.), wherein the core neural network is configured to process codeword representations (396 codebook vectors and 500 training iterations on the whole training set, pg. 916, left col., first para.; ... input vector is propagated through the EFuNN, pg. 908, left col., fourth para.);

execute a hierarchical supervisory network (A block diagram of the ECOS framework is given in Fig. 2. ECOS are multilevel, multimodular structures where many neural network modules (NNM) are connected with interconnections and intraconnections, pg. 903, left col., third para.) comprising: a plurality of low-level supervisory nodes ((2) Representation (Memory) Part Where Information (Patterns) are Stored, pg. 903, left col., third para., Fig. 2), each monitoring a subset of neurons in the core neural network (It is a multimodular, evolving structure of NNMs organized in groups, pg. 903, left col., third para.); at least one mid-level supervisory node ((5) Knowledge-Based Part, pg. 903, right col., Fig. 2) monitoring a group of low-level supervisory nodes (This part extracts compressed abstract information from the representation modules and from the decision modules in different forms of rules, abstract associations, etc. This part requires that the NNM should operate in a knowledge-based learning mode and provide knowledge about the problem under consideration, pg. 903, right col., Fig. 2); and at least one high-level supervisory node ((6) Adaptation Part, pg. 903, right col., Fig. 2) monitoring one or more mid-level supervisory nodes (This part uses statistical, evolutionary (e.g., genetic algorithms (GAs) ...) and other techniques to evaluate and optimize the parameters of the ECOS during its operation, pg. 903, right col., Fig. 2);

wherein each supervisory node ((2) Representation (Memory) Part, (5) Knowledge-Based Part, and (6) Adaptation Part, Fig. 2) is configured to: collect activation data (The EFuNN system was explained so far with the use of one rule node activation (the winning rule node for the current input data), pg. 906, right col., last para.; Fig. 5(b) shows how the center ... of the rule node adjusts (after learning each new data point) to its new positions ... when one pass learning is applied. Fig. 5(c) shows how the rule node position would move to new positions ... if another pass of learning was applied, pg. 906, left col., third para.) comprising neuron activation levels (The radius of the input hypersphere of a rule node is defined ... where the sensitivity threshold parameter defines the minimum activation of the rule node to a new input vector from a new example in order for the example to be considered for association with this rule node, pg. 904, right col., fourth para. The Examiner notes that the minimum activation indicates activation levels), activation frequencies (The learned temporal associations can be used to support the activation of rule nodes based on temporal pattern similarity, pg. 906, right col., second para. The Examiner notes that activation frequencies are referred to in the instant specification as "By examining patterns in activation data over time" [0095]), and inter-neuron correlation patterns from its monitored elements (The ratio spatial-similarity/temporal-correlation can be balanced for different applications through two parameters ... such that the activation of a rule node r for a new data example dnew, pg. 906, right col., second para.);

perform statistical analysis on the collected data (As a statistical model the EFuNN performs clustering of the input space, pg. 915, left col., second to the last para.), wherein the statistical analysis comprises computing temporal and spatial spectra of neuron outputs to identify frequency components and patterns (EFuNNs can learn spatial-temporal sequences in an adaptive way through one pass learning and automatically adapt their parameter values as they operate, abstract; The rule nodes represent prototypes (exemplars, clusters) of input-output (I/O) data associations that can be graphically represented as associations of hyperspheres from the fuzzy input and the fuzzy output spaces, pg. 904, left col., fourth para.);

determine architectural modifications to the core neural network based on the statistical analysis (EFuNNs allow for meaningful rules to be extracted and inserted at any time of the operation of the system, thus providing the knowledge about the problem and reflecting changes in its dynamics. In this respect, the EFuNN is a flexible, online, knowledge engineering and statistical model, pg. 915, left col., second to the last para.) and implement the determined architectural modifications during operation of the core neural network without interrupting processing of input data (In terms of online neuron allocation, the EFuNN model is similar to the resource allocating network (RAN) ... The RAN model allocates a new neuron for a new input example if the input vector is not close in the input space to any of the already allocated radial basis neurons (centers), pg. 902, right col., last para.); and allocate codewords to input data, wherein codewords are mapped to entries in a dynamically maintained codebook (396 codebook vectors and 500 training iterations on the whole training set, pg. 916, left col., first para.; ... input vector is propagated through the EFuNN, pg. 908, left col., fourth para.).

Kasabov does not explicitly teach a system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on non-transitory machine-readable storage media that fuse codewords of dissimilar data types into unified codeword representations for processing by the core neural network.

Guo teaches fusing codewords of dissimilar data types (In Fig. 5, z, which is an output from the speech signal s node, is fused with p, which is an output from the text sequence t node, pg. 1815. The Examiner notes that z and p are dissimilar data types) into unified codeword representations (In Fig. 5, codebook C from the speech signal s node is received by the text sequence; This model is first trained to minimize the loss function Lmsmc, and then provides MSMCR Z and codebook group C for synthesis and prediction, pg. 1815, left col., last para.) for processing by the core neural network (The output sequence is also processed by another neural network based module X for prediction, pg. 1814, right col., last para.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Kasabov to incorporate the teachings of Guo for the benefit of using a VQ-VAE (Vector-Quantized Variational AutoEncoder), which aims to learn a discrete latent representation from target data with an encoder-decoder model (pg. 1812, right col., section A, Vector Quantized Variational AutoEncoder).

Modified Kasabov does not explicitly teach a system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on non-transitory machine-readable storage media.

Shrivastava teaches a computer system comprising a hardware memory, wherein the computer system is configured to execute software instructions stored on non-transitory machine-readable storage media (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit) [0090]; Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Shrivastava for the benefit of performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091] to enable efficient real-time processing (Shrivastava [0084]).
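Claim 1's statistical-analysis limitation, computing temporal and spatial spectra of neuron outputs, is the feature the §101 withdrawal credited. A minimal sketch of what such an analysis could look like, assuming activations are logged as a timesteps-by-neurons array (the layout and all variable names are assumptions, not taken from the application):

```python
# Minimal sketch of the claimed spectral analysis; the (timesteps x neurons)
# logging layout and all variable names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((256, 32))  # 256 timesteps, 32 neurons

# Temporal spectrum: per-neuron FFT over time exposes oscillation frequencies.
temporal_spectra = np.abs(np.fft.rfft(activations, axis=0))

# Spatial spectrum: per-timestep FFT across the neuron axis exposes
# periodicity in which neurons co-activate.
spatial_spectra = np.abs(np.fft.rfft(activations, axis=1))

# Inter-neuron correlation patterns, also recited in claim 1.
correlations = np.corrcoef(activations.T)

# Dominant temporal frequency bin per neuron (skipping the DC component).
dominant_bin = temporal_spectra[1:].argmax(axis=0) + 1
print(dominant_bin.shape, spatial_spectra.shape, correlations.shape)
```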
Regarding claim 2, Modified Kasabov teaches the system of claim 1. Guo teaches wherein the core neural network is a Transformer model (MSMC-VQ-VAE is implemented based on the Feed-Forward Transformer in FastSpeech, pg. 1816, right col., section B, Implementation Details). The same motivation to combine given for independent claim 1 applies here.

Regarding claim 3, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the architectural modifications comprise at least one of neuron splitting, neuron pruning, and connection bundling (After a certain time (when a certain number of examples have been presented) some neurons and connections may be pruned or aggregated, pg. 907, left col., second para.).

Regarding claim 4, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the low-level supervisory nodes ((2) Representation (Memory) Part Where Information (Patterns) are Stored, pg. 903, left col., third para., Fig. 2) are configured to initiate fine-grained modifications to individual neurons or small clusters of neurons (the EFuNN either creates a new rule node to memorize the two input and output fuzzy vectors W1 ... and W2 ... or adjusts the winning rule node (or m rule nodes, respectively), pg. 907, left col., second para.).

Regarding claim 5, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the mid-level supervisory nodes ((5) Knowledge-Based Part, pg. 903, right col., Fig. 2) are configured to initiate modifications to local topology and connectivity patterns within the core neural network (EFuNNs are adaptive rule-based systems. Manipulating rules is essential for their operation. This includes rule insertion, rule extraction, and rule adaptation ... For example, the fuzzy rule (IF is Small and is Small THEN is Small) can be inserted into an EFuNN structure by setting the input connections of a new rule node from the fuzzy input nodes -small and -small to a value of one, and setting the output connection of this rule node to the fuzzy output node -small to a value of one. The rest of the connections are set to a value of zero. Similarly, an exact rule can be inserted into an EFuNN structure, pg. 908, left col., last para.).

Regarding claim 6, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the high-level supervisory nodes ((6) Adaptation Part, pg. 903, right col., Fig. 2) are configured to initiate large-scale architectural changes affecting entire layers or subsystems of the core neural network (EFuNNs have a five-layer structure, similar to the structure of FuNNs [Fig. 3(a)], but here nodes and connections are created/connected as data examples are presented, pg. 903, right col., last para. to pg. 904, first para.; Changing (evolving) MF is another knowledge-based operation that may be needed for a refined performance after a certain time moment of the EFuNNs operation. Changing the shape of the MF in a fuzzy neural structure, pg. 910, left col., second para.).
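Claims 3-6 grade the architectural modifications by scope, from per-neuron splitting and pruning up to layer-scale changes. A toy sketch of the two finest-grained operations on a linear layer; the duplicate-and-halve splitting rule and the magnitude-based pruning threshold are assumptions, and connection bundling is omitted for brevity:

```python
# Toy sketch of claim 3's neuron splitting and pruning on a linear layer.
# The duplicate-and-halve rule and the pruning threshold are assumptions.
import numpy as np

def split_neuron(W_in, W_out, j):
    """Neuron splitting: clone hidden unit j and halve its outgoing weights
    across the two copies, so the layer's function is preserved exactly."""
    W_in = np.concatenate([W_in, W_in[:, j:j + 1]], axis=1)
    W_out = np.concatenate([W_out, W_out[j:j + 1] / 2.0], axis=0)
    W_out[j] /= 2.0
    return W_in, W_out

def prune_neurons(W_in, W_out, threshold=1e-2):
    """Neuron pruning: drop hidden units whose outgoing weights all fall
    below the (assumed) magnitude threshold."""
    keep = np.abs(W_out).max(axis=1) > threshold
    return W_in[:, keep], W_out[keep]

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((4, 6)), rng.standard_normal((6, 3))
W1, W2 = split_neuron(W1, W2, j=2)   # 6 -> 7 hidden units
W1, W2 = prune_neurons(W1, W2)       # drops any near-dead units
print(W1.shape, W2.shape)
```

Because the clone receives identical incoming weights, its activation matches the original unit's, so halving both outgoing rows reproduces the pre-split output exactly, which is what makes the modification non-interruptive in spirit.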
Regarding claim 8, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the supervisory nodes at different levels are configured to communicate with each other to coordinate decision-making across multiple scales ((3) Higher-Level Decision Part: It consists of modules that receive inputs from the representation part and also feedback from the environment).

Regarding claim 9, Modified Kasabov teaches the system of claim 1. Kasabov teaches wherein the modification subsystem is configured to implement architectural modifications during the operation of the core neural network without interrupting its functioning (Fig. 9, Online membership function modification: (a) New MFs are inserted without modifying the existing ones, pg. 910, right col., Fig. 9(a)).

Regarding claim 10, Modified Kasabov teaches the system of claim 1. Guo teaches wherein the codeword allocation subsystem is configured to adaptively update codewords and their corresponding codebooks (Meanwhile, codewords in the codebook are updated using the exponential moving average-based method, pg. 1813, left col., last para.) to reflect incoming data inputs (In this experiment, we build two more low-resource TTS datasets based on Nancy, which are described as follows: D1: 1,000 pairs of text and audio. D2: 1,000 pairs of text and audio + 10,000 audios without transcripts, pg. 1819, right col., second para.). The same motivation to combine given for independent claim 1 applies here.

Regarding claim 12, claim 12 is similar to claim 1 and is rejected in the same manner and with the same reasoning. Further, Kasabov teaches a method for adapting neural network architecture in real-time time series forecasting (EFuNNs can learn spatial-temporal sequences in an adaptive way through one pass learning and automatically adapt their parameter values as they operate, abstract; Here the operation of EFuNNs is illustrated on the ... time series data, pg. 910, left col., last para.).

Regarding claim 13, Modified Kasabov teaches the method of claim 12, wherein analyzing the activation patterns comprises performing statistical analysis on collected activation data at each level of the hierarchical supervisory network (lj,1 and lj,2 are the current learning rates of rule node rj for its input layer and its output layer of connections, respectively; further in the paper we will assume that the two learning rates have the same value, calculated as ..., where ... is the number of examples currently associated with rule node rj. The statistical rationale behind this is that the more examples are associated with a rule node, the less it will "move" in the input space when a new example has to be accommodated by this rule node, pg. 906, first para.).

Regarding claim 14, Modified Kasabov teaches the method of claim 12. Kasabov teaches wherein determining architectural modifications comprises coordinating decisions between different levels of the hierarchical supervisory network (Adaptation can be achieved through the analysis of the behavior of the system or through a feedback connection from higher level modules in the ECOS architecture, pg. 914, left col., first para.).

Regarding claim 15, claim 15 is similar to claim 3 and is rejected in the same manner and with the same reasoning.
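Claim 10 above maps the adaptive codebook update to Guo's exponential-moving-average method. A minimal sketch of that standard VQ-VAE-style EMA scheme; the shapes, decay, and smoothing constant are assumptions:

```python
# Minimal sketch of the EMA codebook update Guo is cited for (the standard
# VQ-VAE scheme); shapes, decay, and eps are assumptions.
import numpy as np

def ema_codebook_update(codebook, counts, sums, batch, decay=0.99, eps=1e-5):
    """Assign each vector to its nearest codeword, then move codewords
    toward the running (exponentially weighted) mean of their assignments."""
    # Nearest-codeword assignment: codeword allocation in the claims' terms.
    dists = ((batch[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    onehot = np.eye(len(codebook))[assign]            # (batch, K)

    # Exponential moving averages of usage counts and assigned-vector sums.
    counts[:] = decay * counts + (1 - decay) * onehot.sum(axis=0)
    sums[:] = decay * sums + (1 - decay) * (onehot.T @ batch)

    # Laplace-smoothed means become the updated codewords.
    n = counts.sum()
    smoothed = (counts + eps) / (n + len(codebook) * eps) * n
    codebook[:] = sums / smoothed[:, None]
    return assign

K, D = 8, 4
rng = np.random.default_rng(2)
codebook = rng.standard_normal((K, D))
counts, sums = np.zeros(K), np.zeros((K, D))
ids = ema_codebook_update(codebook, counts, sums, rng.standard_normal((64, D)))
print(ids.shape, codebook.shape)                      # (64,) (8, 4)
```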
Regarding claim 16, Modified Kasabov teaches the method of claim 12. Kasabov teaches dynamically allocating computational resources within the core neural network based on the analysis of activation patterns (In EFuNNs there are several possibilities to implement such dynamical changes of MF, as graphically illustrated in Fig. 9(a) and (b): 1) New MF are created (fuzzy nodes are inserted) in the most dense areas of the input space without a need for the old MF to be changed [Fig. 9(a)]. The degree to which each cluster center (each rule node) belongs to the new MF can be calculated through the following, pg. 910, left col., second para.).

Regarding claim 17, claim 17 is similar to claim 2 and is rejected in the same manner and with the same reasoning.

Regarding claim 18, Modified Kasabov teaches the method of claim 12. Guo teaches wherein the core neural network uses a latent transformer-based architecture (VQ-VAE aims to learn a discrete latent representation from target data with an encoder-decoder model, pg. 1812, right col., last para.; MSMC-VQ-VAE is implemented based on the Feed-Forward Transformer in FastSpeech, pg. 1816, right col., third para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Guo for the benefit of using a VQ-VAE (Vector-Quantized Variational AutoEncoder), which aims to learn a discrete latent representation from target data with an encoder-decoder model (pg. 1812, right col., section A, Vector Quantized Variational AutoEncoder).

Regarding claim 19, Modified Kasabov teaches the method of claim 12. Kasabov teaches wherein the variety of input data inputs includes real-time time series data (the real strength of the EFuNNs is in learning time series that change their dynamics through time, pg. 911, left col., third para.; The EFuNN is evolved on the first 500 data examples from the same Mackey-Glass time series as in example 1. Fig. 11(a) shows the desired versus the predicted online values on the first 500 examples of the time series, pg. 911, left col., last para.).

Regarding claim 20, Modified Kasabov teaches the method of claim 19. Guo teaches processing fused codeword representations of the real-time time series data into short-term forecasts for the time series data (Multi-stage modeling and prediction force the model to pay sufficient attention to short- and long-time contextual information at different time resolutions, pg. 1818, right col., last para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Guo for the benefit of using a VQ-VAE (Vector-Quantized Variational AutoEncoder), which aims to learn a discrete latent representation from target data with an encoder-decoder model (pg. 1812, right col., section A, Vector Quantized Variational AutoEncoder).
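Claim 20 joins two threads from claims 1 and 19: codewords drawn from dissimilar input types are fused into a unified representation, which then drives a short-term forecast of the time series. A toy sketch under assumed shapes, with concatenation standing in for whatever fusion rule the application actually uses:

```python
# Sketch of the fused-codeword forecasting path of claims 1/20. All shapes,
# names, and the concatenation fusion rule are assumptions.
import numpy as np

rng = np.random.default_rng(3)
codebook_a = rng.standard_normal((16, 8))   # e.g. sensor time-series codewords
codebook_b = rng.standard_normal((16, 8))   # e.g. text/metadata codewords

def nearest(codebook, x):
    """Allocate the nearest codeword to an input vector."""
    return codebook[((codebook - x) ** 2).sum(axis=1).argmin()]

def fuse(xa, xb):
    """Unified codeword representation: concatenate the nearest codewords."""
    return np.concatenate([nearest(codebook_a, xa), nearest(codebook_b, xb)])

W = rng.standard_normal((8, 16)) * 0.1      # stand-in linear forecaster
fused = fuse(rng.standard_normal(8), rng.standard_normal(8))
forecast = W @ fused                         # short-term (next-step) estimate
print(fused.shape, forecast.shape)           # (16,) (8,)
```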
7. Claims 7, 11, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Kasabov ("Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 31, no. 6, pp. 902-918, Dec. 2001, doi: 10.1109/3477.969494) in view of Guo et al. ("MSMC-TTS: Multi-stage multi-codebook VQ-VAE based neural TTS," IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 1811-1824), in view of Shrivastava et al. (US 2020/0311548), and further in view of Eddahech et al. ("Hierarchical neural networks based prediction and control of dynamic reconfiguration for multilevel embedded systems," Journal of Systems Architecture, vol. 59, issue 1, 2013, pp. 48-59).

Regarding claim 7, Modified Kasabov teaches the system of claim 1, but does not explicitly teach a top-level supervisory node configured to manage global objectives and constraints for the entire core neural network. Eddahech teaches a top-level supervisory node configured to manage global objectives and constraints for the entire core neural network (Thus, we developed a multilevel predictor that used information coming from correlation between subsystems constituting the global system, which potentially ameliorates the prediction, pg. 50, left col., third para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Eddahech for the benefit of implementing reconfigurable systems with neural networks (pg. 48, right col., last para.) for performance improvement as well as energy saving (pg. 48, right col., second to the last para.).

Regarding claim 11, Modified Kasabov teaches the system of claim 1. Shrivastava teaches wherein the computer system is further configured to execute software instructions stored on non-transitory machine-readable media (The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit) [0090]; Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data [0091]). The same motivation to combine given for independent claim 1 applies here.

Modified Kasabov does not explicitly teach maintaining a historical record of activation patterns at multiple levels of the hierarchical supervisory network; comparing current activation patterns to the recorded historical data to identify trends or anomalies in the activation patterns over time; determining structural modifications based on the identified trends or anomalies; evaluating the impact of implemented structural modifications on the performance of the core neural network; and adaptively maintaining modifications that improve performance and reverting modifications that do not.

Eddahech teaches maintaining a historical record of activation patterns at multiple levels of the hierarchical supervisory network (Based on the prediction given by the hierarchical multi-level predictor developed previously and on a history of the manual reconfiguration, we generated the neural controller which would allow system reconfiguration, pg. 52, right col., third para.); comparing current activation patterns to the recorded historical data (The whole system is composed of twenty-three subsystems. Fig. 5 shows a comparison between the desired (real) and the predicted output of the subsystem number 6 (M6) in the fourth level, pg. 51, left col., last para.) to identify trends or anomalies in the activation patterns over time (We notice that the predicted time series behavior is similar to the desired one, pg. 51, right col., first para.); determining structural modifications based on the identified trends or anomalies (In order to improve the predictor's training capacity, many simulation tests were conducted to identify the structure that gave the best prediction results, pg. 51, left col., third para.); evaluating the impact of implemented structural modifications on the performance of the core neural network (Prediction performance is evaluated using the global prediction error, which is given by the next expression, pg. 56, left col., last para.); and adaptively maintaining modifications that improve performance and reverting modifications that do not (That is why the reconfigurable architecture/controller participates in the performance improvement and the energy saving, pg. 58, left col., second para.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Kasabov to incorporate the teachings of Eddahech for the benefit of implementing reconfigurable systems with neural networks (pg. 48, right col., last para.) for performance improvement as well as energy saving (pg. 48, right col., second to the last para.).

Regarding claim 21, claim 21 is similar to claim 11 and is rejected in the same manner and with the same reasoning.
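Claim 11's evaluate-and-revert loop, the limitation Eddahech is cited for, reduces to: log activation patterns, flag anomalies against the history, try a structural modification, and keep it only if a performance metric improves. A minimal sketch; the z-score test and every name here are assumed stand-ins for the application's actual scheme:

```python
# Minimal sketch of claim 11's evaluate-and-revert loop; the z-score anomaly
# test and all names are assumptions, not taken from the application.
import numpy as np
from collections import deque

history = deque(maxlen=1000)   # historical record of activation patterns

def is_anomalous(pattern, z_thresh=3.0):
    """Compare the current pattern against the recorded history."""
    if len(history) < 30:      # need a baseline before flagging anything
        return False
    h = np.stack(history)
    z = np.abs((pattern - h.mean(axis=0)) / (h.std(axis=0) + 1e-8))
    return bool((z > z_thresh).any())

def supervise_step(pattern, model, modify, evaluate):
    """One supervisory pass: log, detect, modify, evaluate, keep or revert.
    `modify` applies a structural change and returns an undo callable;
    `evaluate` returns a scalar performance score (higher is better)."""
    history.append(pattern)
    if not is_anomalous(pattern):
        return
    baseline = evaluate(model)
    undo = modify(model)       # implement a structural modification
    if evaluate(model) < baseline:
        undo()                 # revert modifications that do not improve
```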
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/M.G./ Examiner, Art Unit 2148
/MICHELLE T. BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Oct 26, 2024 — Application Filed
Dec 27, 2024 — Response after Non-Final Action
Apr 22, 2025 — Non-Final Rejection (§103, §112)
Jul 24, 2025 — Response Filed
Aug 23, 2025 — Final Rejection (§103, §112)
Nov 10, 2025 — Request for Continued Examination
Nov 16, 2025 — Response after Non-Final Action
Jan 23, 2026 — Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586 — SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12530583 — VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12511528 — NEURAL NETWORK METHOD AND APPARATUS
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12367381 — CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
Granted Jul 22, 2025 (2y 5m to grant)
Patent 12314847 — TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
Granted May 27, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: High

Based on 68 resolved cases by this examiner. Grant probability is derived from the career allow rate.
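The headline numbers are mutually consistent: 30 granted of 68 resolved gives the 44% allow rate, and adding the +33.4-point interview lift yields roughly 78%. A minimal sketch of that derivation, assuming the lift is a simple additive adjustment (the page does not state how the 78% is computed):

```python
# Sketch reproducing the dashboard's headline projections from the raw
# counts above; treating the interview lift as additive is an assumption,
# since the page does not document its formula.
granted, resolved = 30, 68
interview_lift = 0.334                  # +33.4 points

allow_rate = granted / resolved         # 0.441... -> shown as 44%
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")      # 44%
print(f"With interview:    {with_interview:.0%}")  # 78%
```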
