Prosecution Insights
Last updated: April 19, 2026
Application No. 16/926,407

EFFICIENT SEARCH OF ROBUST ACCURATE NEURAL NETWORKS

Non-Final OA — §103, §112
Filed: Jul 10, 2020
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rensselaer Polytechnic Institute
OA Round: 6 (Non-Final)

Grant Probability: 44% (Moderate)
Expected OA Rounds: 6-7
Time to Grant: 4y 8m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% (strong), comparing resolved cases with vs without an interview
Avg Prosecution (typical timeline): 4y 8m
Currently Pending: 47
Total Applications (career history): 115, across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 68 resolved cases

Office Action

§103 §112
DETAILED ACTION

1. This Office action is in response to the amendment to Application No. 16/926,407 filed on 02/19/2025. Claims 2-4, 11 and 13-15 have been cancelled. Claims 1, 5-10, 12 and 16-21 are presented for examination and are currently pending. Applicant's arguments have been carefully and respectfully considered.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Reopening of Prosecution After Appeal Brief

3. In view of the appeal brief filed on 07/28/2025, PROSECUTION IS HEREBY REOPENED. A new ground of rejection is set forth below. To avoid abandonment of the application, appellant must exercise one of the following two options: (1) file a reply under 37 CFR 1.111 (if this Office action is non-final) or a reply under 37 CFR 1.113 (if this Office action is final); or (2) initiate a new appeal by filing a notice of appeal under 37 CFR 41.31 followed by an appeal brief under 37 CFR 41.37. The previously paid notice of appeal fee and appeal brief fee can be applied to the new appeal. If, however, the appeal fees set forth in 37 CFR 41.20 have been increased since they were previously paid, then appellant must pay the difference between the increased fees and the amount previously paid. A Supervisory Patent Examiner (SPE) has approved of reopening prosecution by signing below.

Allowable Subject Matter

4. Claims 6-8 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 USC 112(b) rejection is overcome.

Response to Arguments

5. Upon further review, the final rejection has been withdrawn. As a result, a new non-final rejection has been issued, because the Examiner determined that indefiniteness rejections were not included in the now-withdrawn final Office action. In addition, the Examiner has applied a new secondary reference because the Applicant's arguments on pages 10 and 11 of the appeal brief are persuasive, namely that "Hestness does not disclose or suggest 'searching between said two aligned models along said minimal loss curve for a new model that performs better than said two aligned models'; rather, Hestness uses a 'grid' search. …, since Hestness selects a model with a grid search, Hestness does not disclose or suggest 'selecting said new model along said minimal loss curve'" and that "Hestness does not disclose or suggest 'searching between said two aligned models along said minimal loss curve for a new model.' Producing a curve does not read on searching along an existing curve."

The Applicant argued on page 13 that "Li does not, however, disclose or suggest 'permuting one or more model weights of a second model of said two trained neural network models.' Moreover, the statement that '[W]e find matching units between a pair of networks -- here Net1 and Net2 -- in two ways. In the first approach, for each unit in Net1, we find the unit in Net2 with maximum correlation to it, which is the max along each row of Figure 1c' does not infer that a 'correlation between corresponding hidden states' is maximized. In particular, Li does not disclose or suggest 'permuting one or more model weights of a second model of said two trained neural network models to maximize correlation between corresponding hidden states'."
The Examiner respectfully disagrees with the argument above, because Li, as the secondary reference, clearly teaches permuting weights and maximizing correlation between hidden states. The broadest reasonable interpretation of Li reads on the claimed limitations. Li clearly teaches permuting one or more model weights of a second model of said two trained neural network models (Figure 6 shows a visualization of the learned weight matrix for conv1, along with a permuted weight matrix that aligns units from Net2 with the Net1 units that most predict them, pg. 8, third para.; "… and permuting the outgoing weights accordingly", pg. 4, second para.; "(d) Between-net correlation for Net1 vs. a version of Net2 that has been permuted to approximate Net1's feature order", pg. 3, Fig. 1) to maximize correlation between corresponding hidden states ("(d) Between-net correlation for Net1 vs. a version of Net2 that has been permuted to approximate Net1's feature order. The partially white diagonal of this final matrix shows the extent to which the alignment is successful", pg. 3, Fig. 1; "We find matching units between a pair of networks — here Net1 and Net2 — in two ways. In the first approach, for each unit in Net1, we find the unit in Net2 with maximum correlation to it, which is the max along each row of Figure 1c", pg. 4, section 3.1, first para.). Furthermore, the Applicant has not provided any explanation comparing how the above teachings of Li differ from the claimed limitations of "permuting" and "maximize correlation between corresponding hidden states".

On page 14 of the remarks, the Applicant argued that "Li discusses, for example, multiple training runs of the same network architecture. Li does not, however, disclose or suggest iteratively repeating the following three steps: the neuron alignment step, the training step, and the selecting step to obtain a further refined new model. Thus, Uriot, Hestness, Ryan and Li, alone or in combination, do not disclose or suggest iteratively repeating 'said neuron alignment, training, and selecting steps to obtain a further refined new model,' as recited by Claims 6 and 17." The prior art arguments directed to claims 6 and 17 are persuasive. However, the claims are still indefinite as detailed in this Office action, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 USC 112(b) rejection is overcome.

On page 14 of the remarks, the Applicant argued that "Moreover, with regard to Claims 5-9 and 16-20, which depend directly or indirectly from Claims 1, 10, and 12, applicant asserts that these claims are also patentable at least by virtue of their dependency from Claims 1, 10, and 12, which are believed to be patentable for at least the reasons stated above". The Examiner notes that since a new non-final rejection has been issued, i.e., new grounds of rejection have been presented, Claims 5-9 and 16-20 are not patentable in light of the new grounds of rejection.

On page 14 of the remarks, the Applicant argued that "With regard to Claim 21, which depends directly from Claim 1, applicant asserts that Claim 21 is also patentable at least by virtue of its dependency from Claim 1, which is believed to be patentable for at least the reasons stated above". The Examiner notes that since a new non-final rejection has been issued, i.e., new grounds of rejection have been presented, Claim 21 is not patentable in light of the new grounds of rejection.
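To make the disputed limitation concrete: the alignment technique at issue (in Li and in claim 5) pairs the hidden units of two trained networks by correlation and then permutes the second network's weights accordingly. Below is a minimal Python sketch of that idea; it is an illustrative reconstruction, not code from Li or from the application, and the function name, argument layout, and use of SciPy's Hungarian solver are assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def align_by_correlation(h1, h2, w2_in, w2_out):
        """Permute net 2's hidden units so each is maximally correlated with
        its matched unit in net 1 (illustrative reconstruction only).

        h1, h2 : (n_samples, n_units) hidden-state activations of the two
                 trained networks on the same inputs
        w2_in  : (n_units, fan_in) incoming weights of net 2's layer
        w2_out : (fan_out, n_units) outgoing weights of net 2's layer
        """
        # Cross-correlation matrix between the two networks' units.
        z1 = (h1 - h1.mean(0)) / (h1.std(0) + 1e-8)
        z2 = (h2 - h2.mean(0)) / (h2.std(0) + 1e-8)
        corr = z1.T @ z2 / len(h1)                 # (n_units, n_units)

        # One-to-one pairing maximizing total correlation (Hungarian method).
        rows, cols = linear_sum_assignment(-corr)

        # Apply the same permutation to incoming rows and outgoing columns,
        # which reorders net 2's units without changing its function.
        return w2_in[cols], w2_out[:, cols], corr[rows, cols].mean()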
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

6. Claims 1, 5-10, 12 and 16-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10 and 12 recite "searching between said two aligned models". It is not clear what the Applicant means by searching, because the process of performing the searching is unclear. It is unclear whether the searching is done using different data points from within the loss curve graph, or how the searching is performed between said two aligned models along said minimal loss curve for a new model. In light of the issue above, it is further unclear how, in claim 1, which recites "with said at least one hardware processor, implementing said new model on a computer in an artificial intelligence application", a "new model" based on data points obtained from an x-y Cartesian curve or graph can be used to implement a new network model algorithm that has input, hidden and output layers and that is implemented by a processor. According to the instant specification [0067]: "After alignment, connect the two models by finding a path in step 805. Then, search for the best model on the path. This can be done iteratively; i.e., use the best model as an endpoint and look for an even better model". Using the broadest reasonable interpretation and for the purposes of examination, the Examiner has interpreted searching between two models to be finding the neural network model, out of different neural network models, that gives the best loss in the loss curve graph.

Further, it is unclear how a new model that performs better is obtained in claim 1. It is not clear what metric is used to measure the performance of the model that results in a model that performs better. Using the broadest reasonable interpretation and for the purposes of examination, the Examiner has interpreted "new model performs better" to mean the new best loss.

Claims 5-9 and 16-21 that are not specifically mentioned are rejected due to dependency.
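As an illustration of the interpretation the Examiner adopts (finding, among the models on the curve, the one with the best loss), the following minimal sketch samples a parameterized path between the two aligned parameter vectors and keeps the point with the lowest loss. The helper names (search_curve, curve, loss_fn) and the sampling grid are assumptions, not the applicant's disclosed method.

    import numpy as np

    def search_curve(theta_a, theta_b, curve, loss_fn, n_points=25):
        """Search between two aligned models along a curve for the model
        with the lowest loss (sketch of the examiner's interpretation).

        curve(theta_a, theta_b, t): parameters at position t in [0, 1];
        a straight-line path would be (1 - t) * theta_a + t * theta_b.
        loss_fn(theta): scalar validation loss (placeholder).
        """
        best_theta, best_loss = None, np.inf
        for t in np.linspace(0.0, 1.0, n_points):
            theta = curve(theta_a, theta_b, t)
            loss = loss_fn(theta)
            if loss < best_loss:
                best_theta, best_loss = theta, loss
        return best_theta, best_loss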
("Safe Crossover of Neural Networks Through Neuron Alignment’, arXiv:2003.10306v1 [cs.NE] 23 Mar 2020) in view of Nair et al. (US20200372342 filed 12/9/2019) and further in view of Ryan et al (US20200387797 filed 08/14/2019) Regarding claim 1, Uriot teaches a method comprising: obtaining, data specifying: two trained neural network models; and alignment data (In this work, we will consider several pairs of hidden layers La ∈ Rn×p and Lb ∈ Rn×q coming from two neural networks θa and θb , trained on the same dataset but with different random initializations, pg. 3, left col. to right col., section 3.1), wherein said alignment data includes training data (In order to find a mapping between the hidden layers of two feedforward neural networks (i.e. a correspondence between the neurons of the two layers) trained on the same dataset, we first have to define how to represent a layer, pg. 3, left col., section 3.1. The Examiner notes the training data (i.e dataset) is included in the alignment data); with said at least one hardware processor, carrying out neuron alignment on said two trained neural network models using said alignment data (In order to find a mapping between the hidden layers of two feedforward neural networks (i.e. a correspondence between the neurons of the two layers) trained on the same dataset, we first have to define how to represent a layer, pg. 3, left col., section 3.1. The Examiner notes the dataset is the alignment data) to obtain two aligned models (Following Algorithm 2, we can then functionally align θa and θb by permuting the neurons of the layers {Lda, Ldb }Dd=1 (and thus the weights) according to the pairings {lda , ldb }Dd=1. Finally, once the weights of the two neural networks are permuted, we can safely crossover the two networks by directly matching the weights at the same location in both networks … where the networks are now functionally aligned according to a uniquely defined mapping obtained by applying Algorithm 2, pg. 5, right col., section 4.3); Uriot does not explicitly teach with said at least one hardware processor, training a minimal loss curve between said two aligned models; with said at least one hardware processor, searching between said two aligned models along said minimal loss curve for a new model that performs better than said two aligned models; with said at least one hardware processor, selecting said new model along said minimal loss curve that maximizes accuracy on adversarially perturbed data; with said at least one hardware processor, implementing said new model on a computer in an artificial intelligence application; and with said at least one hardware processor, controlling at least one of a vehicle and a tool with said new model based at least in part on adversarial input, wherein said artificial intelligence application comprises computer vision. Nair teaches with said at least one hardware processor (Computing device 100 may include a controller or processor 105 that may be or include, for example, one or more central processing unit processor(s) (CPU), one or more Graphics Processing Unit(s) (GPU or GPGPU), a chip or any suitable computing or computational device [0056]), training a minimal loss curve (For a given hyperparameter configuration, the models (e.g. 
Nair teaches with said at least one hardware processor ("Computing device 100 may include a controller or processor 105 that may be or include, for example, one or more central processing unit processor(s) (CPU), one or more Graphics Processing Unit(s) (GPU or GPGPU), a chip or any suitable computing or computational device" [0056]), training a minimal loss curve ("For a given hyperparameter configuration, the models (e.g. models m1, m2 and m3 in the example above) may be responsible for predicting the minimum value of the loss curve" [0046]; "In some embodiments, since the models used have been trained or developed using training losses of multiple NNs" [0039]) between said two aligned models ("The predicted loss may be a mean of loss for all NNs within a certain category of instances, possibly normalized when displayed in FIG. 5 to the range of loss values of the NN whose actual loss over epochs is shown in orange lines 602" [0093]); with said at least one hardware processor, searching between said two aligned models along said minimal loss curve ("The example operations of FIG. 4B may be used for early stopping when searching for a set of hyperparameters for a NN" [0082]; "In global mode, the model may predict the probability that the current loss curve will be able to improve beyond the best performing model (e.g. NN with certain hyperparameters) seen so far" [0015]; "In operation 530, training may begin or continue (e.g. for a next training interval or epoch) on a NN having hyperparameters chosen in operation 525. The use of an early stopping prediction may be delayed near the beginning of training. The training of a specific NN having a specific or unique set of hyperparameters chosen in operation 525 may be called an 'experiment', and a number of experiments may take place. The resulting best loss for each NN in an experiment may be recorded. In another embodiment one best loss value over all different NN-hyperparameter combinations seen thus far may be recorded (e.g. as best_metric)" [0086]) for a new model that performs better than said two aligned models ("In another embodiment one best loss value over all different NN-hyperparameter combinations seen thus far may be recorded (e.g. as best_metric); when a new best loss occurs, it may replace the best loss value" [0086]. The Examiner notes that the new best loss is a new model); with said at least one hardware processor, selecting said new model along said minimal loss curve that maximizes accuracy ("the embodiment shown in FIG. 4B may allow choosing among the best NN structure" [0101]; "Embodiments of the present invention may improve and make more efficient prior NN hyperparameter selection by stopping training of a NN with a set of hyperparameters at a point where it is unlikely that the current NN can be trained to have a loss better than the best seen so far for other NNs" [0120]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Uriot to incorporate the teachings of Nair, for the benefit of reducing model training time by an average of 20% (Nair [0018]).

Modified Uriot does not explicitly teach: maximizes accuracy on adversarially perturbed data; with said at least one hardware processor, implementing said new model on a computer in an artificial intelligence application; and with said at least one hardware processor, controlling at least one of a vehicle and a tool with said new model based at least in part on adversarial input, wherein said artificial intelligence application comprises computer vision.
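For context on the "training a minimal loss curve" limitation: in the mode-connectivity literature this is commonly done by fixing the two aligned models as the endpoints of a quadratic Bezier curve and training the free midpoint with SGD so that loss stays low along the whole path (which also illustrates the stochastic gradient descent limitation of claims 9 and 20). The PyTorch sketch below is a generic illustration under that assumption, not the applicant's or Nair's code.

    import torch

    def train_curve(theta_a, theta_b, loss_fn, steps=1000, lr=1e-2):
        """Train a low-loss curve between two aligned models: fix the
        endpoints, make the Bezier midpoint trainable, and minimize the
        loss at random points on the curve with SGD (generic sketch).

        theta_a, theta_b : flattened parameter tensors of the aligned models
        loss_fn(theta)   : differentiable training loss at parameters theta
        """
        mid = ((theta_a + theta_b) / 2).clone().requires_grad_(True)
        opt = torch.optim.SGD([mid], lr=lr)
        for _ in range(steps):
            t = torch.rand(())  # sample a point on the curve
            # Quadratic Bezier: B(t) = (1-t)^2 a + 2 t (1-t) mid + t^2 b
            theta_t = (1 - t) ** 2 * theta_a + 2 * t * (1 - t) * mid + t ** 2 * theta_b
            loss = loss_fn(theta_t)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return mid  # the trained curve is defined by (theta_a, mid, theta_b)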
Ryan teaches at least one hardware processor ("The instructions further cause the one or more processors to detect outliers of the obtained data with respect to the window using an unsupervised learning process including one or more of a Generalized Adversarial Network (GAN) learning technique" [0017]; "When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments" [0180]) maximizes accuracy on adversarially perturbed data ("Noise may be introduced into the inputs to the black boxes" [0139]; "an intrusion activity or cyber-attack in network traffic data" [0129]. The Examiner notes that noise and cyber-attacks in network traffic data are adversarial attacks on data); with said at least one hardware processor, implementing said new model on a computer in an artificial intelligence application (Meta Learning 224 selects the best performing models, which are sent as input, per the arrow, into machine learning 222 (which is considered an artificial intelligence application), Fig. 15, diagram on the right of the last shaded section; "meta learning 224 processes for providing models and selecting the best performing models" [0139]); and with said at least one hardware processor, controlling at least one of a vehicle and a tool ("… control operations of the server 500 pursuant to the software instructions" [0176]; "forecasting traffic congestion on streets by detecting patterns in a time-series from video cameras on streets, cars" [0061]) with said new model based at least in part on adversarial input ("One way that this can be done is by creating images from time-series data, as described above, and then passing the image data to a Generalized Adversarial Network (GAN), which is a Deep Neural Network that enables learning of a distribution of the data from the time-series" [0182]), wherein said artificial intelligence application comprises computer vision ("forecasting traffic congestion on streets by detecting patterns in a time-series from video cameras on streets, cars" [0061]. The Examiner notes that video cameras serve as tools for perceiving scenes in computer vision).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Ryan, for the benefit of providing models with high accuracy in detecting anomalies (Ryan [0138]), where anomalies are deviations from regular patterns of data profiles (Ryan [0129]).

Regarding claim 10, claim 10 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Regarding claim 12, claim 12 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying.
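The "maximizes accuracy on adversarially perturbed data" limitation can be illustrated with a standard robustness check: perturb the evaluation inputs with a one-step gradient-sign attack and select the candidate model with the highest accuracy on the perturbed set. FGSM is used below purely as a stand-in; neither the claims nor the cited art names a specific attack, and the helper names are assumptions.

    import torch

    def robust_accuracy(model, x, y, eps=0.03):
        """Accuracy on adversarially perturbed inputs, using a one-step
        gradient-sign (FGSM-style) perturbation as a stand-in attack."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        model.zero_grad()
        loss.backward()
        x_adv = (x + eps * x.grad.sign()).detach()  # perturbed evaluation set
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        return (preds == y).float().mean().item()

    def select_robust(candidate_models, x, y):
        """Select the candidate that maximizes accuracy on perturbed data."""
        return max(candidate_models, key=lambda m: robust_accuracy(m, x, y))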
8. Claims 5, 9, 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Uriot et al. ("Safe Crossover of Neural Networks Through Neuron Alignment", arXiv:2003.10306v1 [cs.NE] 23 Mar 2020) in view of Nair et al. (US 2020/0372342, filed 12/9/2019), in view of Ryan et al. (US 2020/0387797, filed 08/14/2019), and further in view of Li et al. ("Convergent Learning: Do different neural networks learn the same representations?", arXiv:1511.07543v3 [cs.LG] 28 Feb 2016).

Regarding claim 5, Modified Uriot teaches the method of claim 1. Nair teaches with said at least one hardware processor ("Computing device 100 may include a controller or processor 105 that may be or include, for example, one or more central processing unit processor(s) (CPU), one or more Graphics Processing Unit(s) (GPU or GPGPU), a chip or any suitable computing or computational device" [0056]). Modified Uriot does not explicitly teach wherein said carrying out of said neuron alignment comprises: computing correlations between hidden states of said two trained neural network models; and with said at least one hardware processor, permuting one or more model weights of a second model of said two trained neural network models to maximize correlation between corresponding hidden states.

Li teaches wherein said carrying out of said neuron alignment ("we also performed one-to-one alignments of neurons by measuring the mutual information between them", pg. 6, section 3.2, first para.; "We begin research into this question by introducing three techniques to approximately align different neural networks on a feature or subspace level", abstract) comprises: computing correlations between hidden states of said two trained neural network models ("Figure 1: Correlation matrices for the conv1 layer, displayed as images with minimum value at black and maximum at white. (a, b) Within-net correlation matrices for Net1 and Net2, respectively", pg. 3, Fig. 1; "The within-net correlation values for each layer can be considered as a symmetric square matrix with side length equal to the number of units in that layer (e.g. a 96 × 96 matrix for conv1 as in Figure 1a,b)", pg. 3, last para.); and with said at least one hardware processor, permuting one or more model weights of a second model of said two trained neural network models (Figure 6 shows a visualization of the learned weight matrix for conv1, along with a permuted weight matrix that aligns units from Net2 with the Net1 units that most predict them, pg. 8, third para.; "… and permuting the outgoing weights accordingly", pg. 4, second para.; "(d) Between-net correlation for Net1 vs. a version of Net2 that has been permuted to approximate Net1's feature order", pg. 3, Fig. 1) to maximize correlation between corresponding hidden states ("(d) Between-net correlation for Net1 vs. a version of Net2 that has been permuted to approximate Net1's feature order. The partially white diagonal of this final matrix shows the extent to which the alignment is successful", pg. 3, Fig. 1; "We find matching units between a pair of networks — here Net1 and Net2 — in two ways. In the first approach, for each unit in Net1, we find the unit in Net2 with maximum correlation to it, which is the max along each row of Figure 1c", pg. 4, section 3.1, first para.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Li, for the benefit of improvements that are possible via training multiple models and then using model compilation techniques to realize the resulting ensemble in a single model (Li, pg. 2, first para.).
Regarding claim 9, Modified Uriot teaches the method of claim 1. Li teaches wherein training said minimal loss curve comprises applying stochastic gradient descent ("This layer is then trained to minimize the sum of squared prediction errors plus an L1 penalty, the strength of which is varied", pg. 7, second para.; "Second, … making the initial cost about the same on all layers and allowing the same learning rate and SGD momentum hyperparameters to be used for all layers", pg. 7, footnote 9. The Examiner notes that SGD, also known as stochastic gradient descent, is an optimization technique that minimizes the loss function). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Li, for the benefit of improvements that are possible via training multiple models and then using model compilation techniques to realize the resulting ensemble in a single model (Li, pg. 2, first para.).

Regarding claim 16, claim 16 is similar to claim 5 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 18, Modified Uriot teaches the apparatus of claim 16. Ryan teaches wherein said at least one processor is further operative to implement said further refined new model in the artificial intelligence application ("… and selecting the best algorithm among the optimized algorithms, given the current network context" [0095]; Meta Learning 224 selects the best performing models, which are sent as input, per the arrow, into machine learning 222 (which is considered an artificial intelligence application), Fig. 15, diagram on the right of the last shaded section). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Ryan, for the benefit of providing models with high accuracy in detecting anomalies (Ryan [0138]), where anomalies are deviations from regular patterns of data profiles (Ryan [0129]).

Regarding claim 19, Modified Uriot teaches the apparatus of claim 18. Ryan teaches wherein said at least one processor is further operative to control ("… control operations of the server 500 pursuant to the software instructions" [0176]) at least one of the vehicle and the tool ("forecasting traffic congestion on streets by detecting patterns in a time-series from video cameras on streets, cars" [0061]. The Examiner notes that video cameras serve as tools for perceiving scenes in computer vision) with said further refined new model based at least in part on the adversarial input ("… and selecting the best algorithm among the optimized algorithms, given the current network context" [0095]; "Noise may be introduced into the inputs to the black boxes, … The black boxes may be described as machine learning 222 and meta learning 224 processes for providing models and selecting the best performing models" [0139]; "Anomalies are deviations from regular patterns of data profiles. Unexpected bursts in time-series data might indicate, … an intrusion activity or cyber-attack in network traffic data" [0129]; "One way that this can be done is by creating images from time-series data, as described above, and then passing the image data to a Generalized Adversarial Network (GAN), which is a Deep Neural Network that enables learning of a distribution of the data from the time-series" [0182]. The Examiner notes that noise and cyber-attacks in network traffic data are adversarial attacks on data).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Ryan, for the benefit of providing models with high accuracy in detecting anomalies (Ryan [0138]), where anomalies are deviations from regular patterns of data profiles (Ryan [0129]).

Regarding claim 20, claim 20 is similar to claim 9 and is rejected in the same manner, with the same reasoning applying.

9. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Uriot et al. ("Safe Crossover of Neural Networks Through Neuron Alignment", arXiv:2003.10306v1 [cs.NE] 23 Mar 2020) in view of Nair et al. (US 2020/0372342, filed 12/9/2019), in view of Ryan et al. (US 2020/0387797, filed 08/14/2019), and further in view of Setlur et al. ("Waveform design for radar STAP in signal dependent interference", IEEE Transactions on Signal Processing 64.1 (2015): 19-34).

Regarding claim 21, Modified Uriot teaches the method of claim 1. Modified Uriot does not explicitly teach further comprising applying a proximal alternating minimization (PAM) scheme to iteratively optimize a permutation of second model weights and optimize curve parameters of the minimal loss curve. Setlur teaches applying a proximal alternating minimization (PAM) scheme to iteratively optimize a permutation of second model weights and optimize curve parameters of the minimal loss curve ("The proximal version of the constrained alternating minimization is iterative, and for the filter design step, optimizes at the k-th iteration … where … can be seen as a weight attached to the regularizer/penalizer. This parameter can be interpreted as follows: if it is small, it encourages the optimizer to look for viable solutions in the vicinity of …. However, if large, it penalizes the optimizer heavily for focusing even slightly in the immediate vicinity of w_(k-1)", pg. 26, right col., Section C). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Uriot to incorporate the teachings of Setlur, for the benefit of using the constant modulus alternating minimization, which, at each step, iteratively optimizes the space-time adaptive processing (STAP) filter (Setlur, abstract).
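For readers unfamiliar with PAM: a proximal alternating minimization scheme updates one block of variables at a time, each update penalized by a proximal term that keeps it close to the previous iterate. In claim 21 the two blocks would be the permutation of the second model's weights and the curve parameters; the generic two-block sketch below (continuous variables, inexact gradient-based proximal steps) is illustrative only, and all names in it are assumptions.

    import numpy as np

    def pam(grad_x, grad_y, x, y, rho=1.0, lr=0.1, outer=200, inner=5):
        """Generic proximal alternating minimization over two blocks.

        Each half-step approximately minimizes the objective plus a proximal
        term (rho / 2) * ||block - block_prev||^2 that keeps the update near
        the previous iterate; here the proximal subproblems are solved
        inexactly with a few gradient steps (illustrative only).
        """
        for _ in range(outer):
            x_prev = x.copy()
            for _ in range(inner):     # proximal step on block x
                x = x - lr * (grad_x(x, y) + rho * (x - x_prev))
            y_prev = y.copy()
            for _ in range(inner):     # proximal step on block y
                y = y - lr * (grad_y(x, y) + rho * (y - y_prev))
        return x, y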
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jul 10, 2020
Application Filed
Oct 13, 2022
Non-Final Rejection — §103, §112
Feb 21, 2023
Response Filed
Mar 14, 2023
Applicant Interview (Telephonic)
Mar 16, 2023
Examiner Interview Summary
Jun 02, 2023
Final Rejection — §103, §112
Sep 14, 2023
Request for Continued Examination
Sep 16, 2023
Response after Non-Final Action
Sep 21, 2023
Non-Final Rejection — §103, §112
Dec 28, 2023
Response Filed
Jan 17, 2024
Applicant Interview (Telephonic)
Jan 17, 2024
Examiner Interview Summary
Apr 23, 2024
Non-Final Rejection — §103, §112
Aug 07, 2024
Response Filed
Aug 26, 2024
Applicant Interview (Telephonic)
Aug 26, 2024
Examiner Interview Summary
Nov 12, 2024
Final Rejection — §103, §112
Feb 19, 2025
Response after Final Action
Apr 21, 2025
Notice of Allowance
Apr 21, 2025
Response after Non-Final Action
May 23, 2025
Response after Non-Final Action
Jul 28, 2025
Appeal Brief Filed
Aug 06, 2025
Response after Non-Final Action
Jan 26, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586
SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
2y 5m to grant · Granted Apr 14, 2026
Patent 12530583
VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
2y 5m to grant · Granted Jan 20, 2026
Patent 12511528
NEURAL NETWORK METHOD AND APPARATUS
2y 5m to grant · Granted Dec 30, 2025
Patent 12367381
CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
2y 5m to grant · Granted Jul 22, 2025
Patent 12314847
TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
2y 5m to grant · Granted May 27, 2025
Based on this examiner's 5 most recent grants in similar technology.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 44%
Grant Probability With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
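The headline figures can be reproduced from the career counts shown above, assuming the interview lift is additive in percentage points; the dashboard does not state its exact formula, so the snippet below is a sketch of one plausible derivation.

    # Deriving the headline projections from the career counts shown above,
    # assuming the interview lift is additive in percentage points (the
    # dashboard does not disclose its exact formula).
    granted, resolved = 30, 68
    base_rate = granted / resolved                # 0.441... -> "44%"
    interview_lift = 0.334                        # "+33.4%" lift
    with_interview = base_rate + interview_lift   # 0.775... -> "78%"
    print(f"Grant probability: {base_rate:.0%}")
    print(f"With interview:    {with_interview:.0%}")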
