Prosecution Insights
Last updated: April 19, 2026
Application No. 18/028,434

SYSTEM AND METHOD FOR AN ADJUSTABLE NEURAL NETWORK

Non-Final OA §103
Filed: Mar 24, 2023
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ailectric LLC
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% across resolved cases with interview (strong)
Typical Timeline: 3y 4m average prosecution; 54 cases currently pending
Career History: 457 total applications across all art units
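The headline figures in this card follow from simple arithmetic on the reported counts. Below is a minimal sketch, assuming the dashboard derives grant probability directly from the career allow rate and caps the with-interview figure at 99%; both are assumptions, since the exact formula is not published:

```python
# Reproduce the Examiner Intelligence figures from the reported counts.
granted, resolved = 308, 403
allow_rate = granted / resolved                 # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")   # -> 76.4%, shown as 76%

tc_delta = 0.214                                # reported +21.4% vs TC avg
tc_avg = allow_rate - tc_delta
print(f"Implied TC average: {tc_avg:.1%}")      # -> 55.0%

interview_lift = 0.251                          # reported +25.1% interview lift
# Assumed: the dashboard caps the adjusted probability at 99%.
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")  # -> 99%
```

Note that the uncapped sum (76.4% + 25.1%) exceeds 100%, which is presumably why the page shows 99% rather than the raw total.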

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 403 resolved cases.
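One sanity check on these figures: each statute rate minus its reported delta should recover the Tech Center average estimate. A quick sketch shows the numbers above all imply the same ~40% average (computed from the card's own figures, not independently reported):

```python
# Implied Tech Center average behind each "vs TC avg" delta.
# (rate, delta) pairs are taken from the Statute-Specific Performance card.
stats = {
    "§101": (0.139, -0.261),
    "§103": (0.463, +0.063),
    "§102": (0.173, -0.227),
    "§112": (0.192, -0.208),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # examiner rate minus delta recovers the TC average
    print(f"{statute}: implied TC avg {tc_avg:.1%}")  # -> 40.0% for every statute
```

That the four deltas all resolve to one common baseline suggests the page uses a single TC-wide overcome-rate estimate per statute group, though that is an inference from the arithmetic.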

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 3, 4, 6, 7, 10, 11, 13 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

With respect to claims 3, 10 and 17, the prior art teaches identifying, via the processor, at least one layer of the CNN corresponding to the at least one drop-off point (Wang p. 5, experiment 2: “where x is the original image and g(·) is the mapping to a representation layer.”). However, Wang does not identify at least one replacement layer which resonates with the pre-defined features, nor replacing, via the processor, the at least one layer of the CNN with the at least one replacement layer, resulting in the modified CNN. Network Morphism by Wei teaches “Formally, we need to replace the layer Bl+1 = ϕ(G ~ Bl+1) with two layers…” (Wei sec. 3.3). However, the replacement layer does not necessarily resonate with pre-defined features. Further, Wei teaches that this replacement is “not trivial…” (sec. 3.3). Therefore, this idea is not taught or made obvious by the prior art of record. Claims 4, 11 and 18 depend on this subject matter and are likewise allowable.

With respect to claims 6, 13 and 20, the prior art teaches measuring, via the processor, applicability of a new image with the modified CNN (Wang fig. 4: “(First Row) Left: the input image. Middle: an example of computing the attribution score for a specific frequency component by ablating the frequency of interest with zeros.” The new image is the input image; the computed attribution score is the applicability) and determining, via the processor based on the applicability meeting a predefined threshold, that the new image represents a new category, resulting in a determination (Wang fig. 4: “Higher scores denotes higher contribution to the prediction. (Second and Third Row) The input image with the k−th lowest frequency components are ablated.” The k-th lowest frequency components is the threshold; the determination is to ablate the lowest frequencies). However, Wang does not generate a new branch of features associated with the new image, nor add the new branch of features to the modified CNN, resulting in an updated, modified CNN. Towards Open World Recognition by Bendale et al. teaches generating a new branch of features associated with the new image in its figure 4. [Image omitted.] Bendale does not incorporate these new features into a modified CNN. Claims 7, 14 and 18 depend on this subject matter and are likewise allowable.

Claim Objections

A series of singular dependent claims is permissible in which a dependent claim refers to a preceding claim which, in turn, refers to another preceding claim. A claim which depends from a dependent claim should not be separated by any claim which does not also depend from said dependent claim. It should be kept in mind that a dependent claim may refer to any preceding independent claim. In general, applicant's sequence will not be changed. See MPEP § 608.01(n). Claims 9-14 are out of order.

Applicant is advised that should claims 2, 3, 5 or 6 be found allowable, claims 9, 10, 12 and 13, respectively, will be objected to under 37 CFR 1.75 as substantial duplicates thereof. When two claims in an application are duplicates, or are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 8, 9, 12, 15, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Towards Frequency-Based Explanation for Robust CNN by Wang et al. and Diagnosing Convolutional Neural Networks using their Spectral Response by Stamatescu et al. (Stat).

Wang teaches claims 1, 8 and 15: A method comprising: measuring, via a processor, feature applicability for an octave of a Convolutional Neural Network (CNN) at a standard scale, resulting in (1) at least one drop-off point where the octave no longer resonates with pre-defined features; and (2) a common drop-off between the CNN and at least one other CNN trained on at least one other separate domain (Wang sec. 1, p. 2: “Occluded Frequency, our measurement of the contribution of each frequency component towards the prediction and we further show that robust models are usually less relied on the high-frequency components in the input…” Wang fig. 1 shows the drop-off in attribution score among frequency classes (octaves); attribution score is the feature applicability for an octave/class. [Image omitted.]); measuring, via the processor, octave resonance for a plurality of CNNs trained on large data sets with a distribution of octaves for features (Wang fig. 1: “Average Attribution scores for each frequency components on each subset of CIFAR-10. We compute the attribution scores on three ResNet models…” The average attribution scores are a measure of octave resonance). Wang does not look at the pattern of octaves. However, Stat teaches measuring a pattern of octaves learned in the CNN, resulting in a measurement pattern (Stat sec. I: “measuring CNN spectral response to a specific test image.” Spectral response is a pattern of frequencies/octaves learned in the CNN); comparing that measurement pattern to the pre-defined features, resulting in a level of adaptability of the CNN (Stat sec. I: “summarize the CNN spectral response using a single metric, which can then be used in a similar way to the training and validation losses to identify problems during training.” Training and validation losses are the level of adaptability); and modifying the CNN based on the level of adaptability of the CNN, resulting in a modified CNN (the losses are used to train the CNN; see Stat sec. IV(A): “In the case of the largest learning rate, training and validation losses begin to rise beyond epoch 7, which indicates that the CNN is not learning…”). Stat, Wang and the claims all use frequency analysis to train CNNs. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to measure a pattern of octaves to use “as a diagnostic tool and potential replacement for the validation loss when hold-out validation data are not available.” (Stat abs.)

Wang teaches claims 2, 9 and 16: The method of claim 1, wherein the octave resonance results in only partial overage of the at least one other CNN by the CNN (Wang fig. 1: “Average Attribution scores for each frequency components on each subset of CIFAR-10. We compute the attribution scores on three ResNet models…” Some of the models have, at times, a higher frequency attribution score than other models; this is the claimed overage. [Image omitted.]).

Wang teaches claims 5, 12 and 19: The method of claim 1, wherein the measuring of feature applicability uses three types of input sets: an objective known set, an objective unknown set, and a nonobjective set (Wang abs.: “show that the vulnerability of the model against tiny distortions is a result of the model is relying on the high-frequency features, the target features of the adversarial (black and white-box) attackers, to make the prediction. We further show that if the model develops stronger association between the low-frequency component with true labels, the model is more robust, which is the explanation of why adversarially trained models are more robust against tiny distortions.” True labels are the objective known set; adversarial images are the objective unknown set; the non-target features of the adversarial images are the nonobjective set).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday-Thursday, 8-4 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/ Primary Examiner, Art Unit 2124
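The frequency-ablation measurement the Office Action quotes from Wang ("computing the attribution score for a specific frequency component by ablating the frequency of interest with zeros") can be illustrated with a short sketch. This is a hypothetical reconstruction, not code from Wang or the application; the band-mask radii and the model call in the closing comment are assumptions:

```python
import numpy as np

def ablate_frequency_band(image, r_lo, r_hi):
    """Zero out frequency components whose radius falls in [r_lo, r_hi)."""
    # Shift the 2-D spectrum so the zero frequency sits at the center.
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    # Ablate the chosen band with zeros, then invert the transform.
    f[(radius >= r_lo) & (radius < r_hi)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# An attribution score for the band is then the drop in model confidence:
#   score = model(image) - model(ablate_frequency_band(image, r_lo, r_hi))
# where `model` is a hypothetical classifier returning a class probability.
```

Higher scores mean the classifier leaned more heavily on that frequency band, which is the "applicability" reading the examiner gives Wang's figure 4.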

Prosecution Timeline

Mar 24, 2023: Application Filed
Dec 12, 2025: Non-Final Rejection (§103)
Mar 27, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767: NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12554795: REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (2y 5m to grant; granted Feb 17, 2026)
Patent 12530630: Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (2y 5m to grant; granted Jan 20, 2026)
Patent 12524694: OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (2y 5m to grant; granted Jan 13, 2026)
Patent 12524646: VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (2y 5m to grant; granted Jan 13, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability is derived from the career allow rate.
