Prosecution Insights
Last updated: April 19, 2026
Application No. 17/798,046

SEARCHING FOR NORMALIZATION-ACTIVATION LAYER ARCHITECTURES

Final Rejection — §103
Filed: Aug 05, 2022
Examiner: ZECHER, CORDELIA P K
Art Unit: 2100
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)

Predictions:
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
Grant Probability with Interview: 76%

Examiner Intelligence

Career Allow Rate: 50% (253 granted / 509 resolved; -5.3% vs TC avg) — grants 50% of resolved cases
Interview Lift: +25.8% (strong) — allowance rate among resolved cases with vs. without an interview
Typical Timeline: 3y 8m average prosecution; 287 applications currently pending
Career History: 796 total applications across all art units

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Tech Center averages are estimates, based on career data from 509 resolved cases.
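As a sanity check on the figures above, every statute-specific delta implies the same estimated Tech Center average; a short sketch restating only the values shown on this page (nothing new assumed):

```python
# Statute-specific allowance rates and their deltas vs the Tech Center
# average, as displayed above (all values are percentages).
rates  = {"101": 19.0, "103": 46.8, "102": 13.1, "112": 16.0}
deltas = {"101": -21.0, "103": 6.8, "102": -26.9, "112": -24.0}

# Each statute implies the same estimated TC average: rate - delta = 40.0
implied_tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}

# Career allow rate from the raw counts shown: 253 granted of 509 resolved
career_allow_rate = round(253 / 509 * 100, 1)  # 49.7, displayed as 50%
```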

Office Action — §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements submitted on 01/05/2023 and 11/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Acknowledgment is made of applicant’s claim for priority to U.S. provisional application 62/971,887, filed February 7, 2020.

Status of Claims

The present application is being examined under the amended claims filed on 11/07/2025. Claims 1-13 and 15-21 are pending; claims 1-13 and 15-21 are rejected.

Abstract

The abstract of the disclosure filed on 08/05/2025 is objected to because it has not been filed on a separate sheet. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Prior Art References

- Wistuba: Wistuba, M., Rawat, A. and Pedapati, T., 2019. A survey on neural architecture search. arXiv preprint arXiv:1905.01392.
- Singh: Singh, V. and [surname garbled in record], A.C., 2015, March. Image segmentation using simulated genetic algorithm. In Proceedings of the 2015 International Conference on Advances in Computer Engineering and Applications, Ghaziabad, India (pp. 19-20).
- Jin: Jin, H., Song, Q. and Hu, X., 2019, July. Auto-Keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1946-1956).
- Stamoulis: Stamoulis, D., Ding, R., Wang, D., Lymberopoulos, D., Priyantha, B., Liu, J. and Marculescu, D., 2020. Single-path mobile AutoML: Efficient ConvNet design and NAS hyperparameter optimization. IEEE Journal of Selected Topics in Signal Processing, 14(4), pp. 609-622.
- Liu: Liu, H., Simonyan, K., Vinyals, O., Fernando, C. and Kavukcuoglu, K., 2017.
Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436.

Response to Arguments - 101

Applicant remarks: “Without conceding the merits of the rejections, Applicant has amended the independent claims. Moreover, the Applicant respectfully submits that the Specification describes how the claimed invention provides an improvement to a technical field, and that the claims as amended reflect that improvement. […] Thus, the Specification describes how the claimed invention provides a technical improvement to the technical field of machine learning by determining an architecture of the normalization-activation layer that "outperform[s] conventional human-designed architectures" and "generalize[s] to many different architectures" of neural networks and allows neural networks "to be able to be trained to converge quickly by improving the stability of the training process." Id. The independent claims have been amended to recite how the claimed invention determines an architecture of the normalization-activation layer by "for each of the plurality of candidate architectures" for the normalization-activation layer, "for each of two or more of the plurality of different neural network architectures: training, on the respective training data for the neural network architecture, a neural network having the neural network architecture but with the at least one respective normalization-activation layer within the neural network architecture each being replaced with a new normalization-activation layer having the candidate architecture; and determining a fitness from a measure of performance of the trained neural network on the validation data for the neural network architecture." By doing so, the claimed invention determines an architecture of the normalization-activation layer based on each of the candidate architectures of the normalization-activation layer's overall fitness, determined from the fitnesses of the different trained neural network architectures "that each have each of the at least one respective normalization-activation layer within the neural network architecture replaced with respective new normalization-activation layers each having the same candidate architecture for the normalization-activation layer."” (pg. 10-12)

Examiner response: Applicant’s arguments have been fully considered and they are persuasive. The examiner therefore withdraws the previous 35 U.S.C. 101 rejections.

Response to Arguments - 103

Applicant remarks: “The Action cites a portion of Xie as teaching the feature "for each of the plurality of candidate architectures: for each of two or more of the plurality of different neural network architectures: training, on the respective training data for the neural network architecture, a neural network having the neural network architecture but with the normalization-activation layers replaced with new normalization-activation layers having the candidate architecture" as previously recited by the independent claims. In particular, the cited portion of Xie reads "Finally, the obtained offspring is trained from scratch and its fitness is evaluated." (Xie, page 19). However, in Xie's genetic algorithm, "mutations are defined as random flip operations on the adjacency matrices that define the segments." Id. Thus, instead of replacing a particular component in two different architectures with the same architecture component, in Xie, two different architectures each have respective "random flip operations" applied on the adjacency matrices "that define the segments," each segment "consisting of multiple convolution layers" in the network. Id. The obtained offspring in Xie is therefore not obtained by, for each of a plurality of candidate architectures, replacing the normalization-activation layers within two or more different network architectures with "new normalization-activation layers having the candidate architecture." Thus, Xie does not teach training each of two or more of the plurality of different neural network architectures with "the normalization-activation layers replaced with new normalization-activation layers having the candidate architecture" as described by the amended independent claims. The Action does not assert that Wistuba, Dong, Real, or Singh teaches the feature "for each of the plurality of candidate architectures: for each of two or more of the plurality of different neural network architectures: training, on the respective training data for the neural network architecture, a neural network having the neural network architecture but with the normalization-activation layers replaced with new normalization-activation layers having the candidate architecture" as previously recited by the independent claims.” (pg. 13-14)

Examiner response: Applicant’s arguments have been fully considered, but they are not persuasive. The examiner notes that the “candidate architecture(s)” are taught by the architecture resulting from the “random flip operations,” and thus Xie does teach “replacing the normalization-activation layers within two or more different network architectures with 'new normalization-activation layers having the candidate architecture.'”

Applicant remarks: “The Action cites a portion of Singh as teaching the feature "determining an overall fitness for the candidate architecture from the fitnesses for the two or more different neural network architectures" as previously recited by the independent claims. In particular, the cited portion of Singh reads "Moreover, the average fitness is also calculated by adding the fitness of all chromosomes from a generation and then dividing it by the total population size." (Singh, page 2). However, the average fitness described in Singh is determined based on the fitnesses of all chromosomes from a generation where each chromosome is generated by having "two random chromosomes" selected and performing "Ordered and One Point Crossover and swap mutation." Id. Rather than determining an overall fitness of a candidate architecture from the fitnesses of different neural network architectures having the same candidate architecture for the normalization-activation layer, Singh describes determining an average fitness from the fitnesses of all chromosomes from a generation in which each chromosome has a different mutation that results from applying "Ordered and One Point Crossover and swap mutation" to two random chromosomes selected for the generation of that chromosome. Singh's overall fitness is thus not based on "the fitnesses for the two or more different neural network architectures, that each have each of the at least one respective normalization-activation layer within the neural network architecture replaced with respective new normalization-activation layers each having the same candidate architecture for the normalization-activation layer" as recited by the amended independent claims. The Action does not assert that Wistuba, Dong, Real, or Xie teaches the feature "determining an overall fitness for the candidate architecture from the fitnesses for the two or more different neural network architectures" as previously recited by the independent claims.” (pg. 14-15)

Examiner response: Applicant’s arguments have been fully considered, but they are not persuasive. Singh teaches an overall fitness through the computation of an average fitness of all individuals in a population (Singh 2, “Moreover, the average fitness is also calculated by adding the fitness of all chromosomes from a generation and then dividing it by the total population size.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-13 and 15-21 are rejected under 35 U.S.C. 103.

- Claims 1, 2, 4, 6, 7, 11, 15, 16, 17, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over a first embodiment of Wistuba (hereafter Wistuba) in view of a second embodiment of Wistuba (hereafter Dong), in further view of a third embodiment of Wistuba (hereafter Real), in further view of a fourth embodiment of Wistuba (hereafter Xie), in further view of Singh.
- Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of Jin.
- Claims 5 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of a fifth embodiment of Wistuba (hereafter Wistuba-Liu).
- Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of a sixth embodiment of Wistuba (hereafter Zoph).
- Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of Zoph, in further view of Wistuba-Liu.
- Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of Zoph, in further view of Wistuba-Liu, in further view of Liu.
- Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of Stamoulis.
- Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Wistuba in view of Dong, in further view of Real, in further view of Xie, in further view of Singh, in further view of Liu.

In reference to claim 1.

- “1.
A method comprising:” (preamble)

Wistuba teaches:
- “receiving data specifying a plurality of different neural network architectures,” (Wistuba 17, “In the context of neural architecture search, the population consists of a pool of network architectures.”)
- “and selecting a final architecture for the normalization-activation layer based on the overall fitnesses for the candidate architectures.” (Wistuba 17, “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”)

Dong teaches:
- “each neural network architecture including at least one respective normalization-activation layer, each respective normalization-activation layer configured to: receive a layer input comprising a plurality of values; apply one or more normalization operations to the values in the layer input to generate a normalized layer input;” (Dong 10, “Architectures in this search space comprise of cells without branches and a fixed internal structure that alternates two normalization (e.g. batch normalization) and convolution layers.”)

Motivation to combine Wistuba with Dong: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba and Dong. Wistuba discloses a survey of the state of the art of neural architecture search approaches. Dong discloses an approach to neural architecture search that prioritizes simpler models with faster inference times that include normalization layers. One would be motivated to combine these references because the disclosure of Dong offers an obvious improvement to the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

Real teaches:
- “and apply an element-wise activation function to the normalized layer input to generate a layer output;” (Real Figure 12, “batch norm” followed by “ReLU”)
- “generating a plurality of candidate architectures for the normalization-activation layer;” (Real 18, “The set of mutations consists of simple operations such as adding convolutions (possibly with batch normalization or ReLU activation)”)

Motivation to combine Wistuba and Dong with Real: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, and Real. Wistuba and Dong disclose a survey of the state of the art of neural architecture search approaches. Real discloses specific normalization and activation architectures. One would be motivated to combine these references because the disclosure of Real offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

Xie teaches:
- “receiving, for each neural network architecture, respective training data for training a neural network having the neural network architecture to perform a corresponding neural network task and validation data for evaluating how well the neural network performs on the corresponding neural network task;” (Xie 19, “Finally, the obtained offspring is trained from scratch and its fitness is evaluated. The fitness of an individual is defined as the difference between its validation accuracy and the minimum accuracy among all individuals of the population.”)
- “for each of the plurality of candidate architectures: for each of two or more of the plurality of different neural network architectures: training, on the respective training data for the neural network architecture, a neural network having the neural network architecture but with the at least one respective normalization-activation layer within the neural network architecture each being replaced with a new normalization-activation layer having the candidate architecture;” (Xie 19, “Finally, the obtained offspring is trained from scratch and its fitness is evaluated.”)
- “and determining a fitness from a measure of performance of the trained neural network on the validation data for the neural network architecture;” (Xie 19, “The fitness of an individual is defined as the difference between its validation accuracy and the minimum accuracy among all individuals of the population.”)

Motivation to combine Wistuba, Dong, and Real with Xie: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, and Xie. Wistuba, Dong, and Real disclose a survey of the state of the art of neural architecture search approaches. Xie discloses the utilization of fitness with respect to candidate neural network architectures. One would be motivated to combine these references because the disclosure of Xie offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.
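The search loop recited in claim 1 (swap each candidate normalization-activation architecture into several host networks, train each resulting network, score it on validation data, then aggregate per candidate) can be sketched in Python. This is an illustrative toy only, not code from the application or the cited art; `train_and_score` stands in for real training, and all names are hypothetical:

```python
import random

def train_and_score(network_arch, candidate, train_data, val_data):
    """Stand-in for: train `network_arch` with its normalization-activation
    layers replaced by `candidate`, then measure validation performance."""
    rng = random.Random(f"{network_arch}/{candidate}")  # deterministic toy score
    return rng.random()

def search(candidates, network_archs, data):
    overall = {}
    for cand in candidates:                    # for each candidate architecture
        fitnesses = [
            train_and_score(arch, cand, data[arch]["train"], data[arch]["val"])
            for arch in network_archs          # for each host network architecture
        ]
        overall[cand] = sum(fitnesses) / len(fitnesses)  # overall fitness (average)
    best = max(overall, key=overall.get)       # select the final architecture
    return best, overall

candidates = ["bn_relu", "candidate_a", "candidate_b"]   # hypothetical names
archs = ["net_arch_1", "net_arch_2"]
data = {a: {"train": None, "val": None} for a in archs}
best, scores = search(candidates, archs, data)
```

Averaging the per-architecture fitnesses here mirrors the "overall fitness" aggregation that the rejection maps onto Singh's average-fitness computation.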
Singh teaches:
- “and determining an overall fitness for the candidate architecture for the normalization-activation layer from the fitnesses for the two or more different neural network architectures, that each have each of the at least one respective normalization-activation layer within the neural network architecture replaced with respective new normalization-activation layers each having the same candidate architecture for the normalization-activation layer;” (Singh 2, “Moreover, the average fitness is also calculated by adding the fitness of all chromosomes from a generation and then dividing it by the total population size.”)

Motivation to combine Wistuba, Dong, Real, and Xie with Singh: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, and Singh. Wistuba, Dong, Real, and Xie disclose a survey of the state of the art of neural architecture search approaches. Singh discloses an in-depth analysis of genetic algorithms. One would be motivated to combine these references because Singh offers an in-depth look at alternatives for operating a subcomponent of the initial disclosure. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

Claims 15 and 16 are substantially similar to claim 1 and are thus rejected using the same art.

In reference to claim 2.

- “2. The method of claim 1,” (preamble)

Real teaches:
- “wherein the normalization-activation layers in two or more of the plurality of architectures apply batch normalization followed by a ReLU activation function.” (Real Figure 12, “batch norm” followed by “ReLU”)

Claim 17 is substantially similar to claim 2 and is thus rejected using the same art.

In reference to claim 3.

- “3. The method of claim 1,” (preamble)

Jin teaches:
- “wherein the neural network task, the training data, and the validation data are the same for all of the plurality of neural network architectures.” (Jin 1952, “For a fair comparison, the same data processing and training procedures are used for all the methods.”; Jin Table 1, which the examiner notes depicts network performance for individual tasks)

Motivation to combine Wistuba, Dong, Real, Xie, and Singh with Jin: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, and Jin. Wistuba, Dong, Real, Xie, and Singh disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Jin discloses a specific neural architecture search engine. One would be motivated to combine these references because the disclosure of Jin offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

Claim 18 is substantially similar to claim 3 and is thus rejected using the same art.

In reference to claim 4.

- “4. The method of claim 1, wherein generating a plurality of candidate architectures for the normalization-activation layer comprises repeatedly performing the following:” (preamble)

Wistuba teaches:
- “selecting a subset of candidate architectures from the candidate architectures that have already been generated;” (Wistuba 16, “3. Evaluate the fitness of the new individuals. 4. Select the survivors of the population.”; each individual is an architecture)
- “selecting a candidate architecture from the subset of candidate architectures based on the overall fitnesses of the candidate architectures;” (Wistuba 17, “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”; elitist selection is based on the fitness of the candidates)
- “and generating a new candidate architecture from the selected candidate architecture.” (Wistuba 16, “2. Apply recombination and mutation operations to create new individuals.”)

Claim 19 is substantially similar to claim 4 and is thus rejected using the same art.

In reference to claim 5.

- “5. The method of claim 4, wherein the generating comprises:” (preamble)

Wistuba-Liu teaches:
- “randomly generating a plurality of initial candidate architectures.” (Wistuba-Liu 20, “The population is initialized with 200 trivial genotypes which are diversified by applying 1000 random mutations”)

Motivation to combine Wistuba, Dong, Real, Xie, and Singh with Wistuba-Liu: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, and Wistuba-Liu. Wistuba, Dong, Real, Xie, and Singh disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Wistuba-Liu discloses hierarchical representations for efficient neural architecture search. One would be motivated to combine these references because the disclosure of Wistuba-Liu offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

Claim 20 is substantially similar to claim 5 and is thus rejected using the same art.

In reference to claim 6.
- “6. The method of claim 4,” (preamble)

Wistuba teaches:
- “and wherein selecting a candidate architecture comprises selecting the candidate architecture with a best overall fitness.” (Wistuba 17, “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”)

Singh teaches:
- “wherein the overall fitness is an average of the fitnesses for the two or more architectures,” (Singh 2, “Moreover, the average fitness is also calculated by adding the fitness of all chromosomes from a generation and then dividing it by the total population size.”)

In reference to claim 7.

- “7. The method of claim 4, wherein the overall fitness includes all of the fitnesses for the two or more architectures and wherein selecting a candidate architecture comprises:” (preamble)

Wistuba teaches:
- “identifying one or more candidate architectures from the subset that have overall fitnesses that are not dominated by any overall fitness of any other candidate architecture from the subset;” (Wistuba 17, “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”)
- “and randomly selecting one of the one or more identified candidate architectures.” (Wistuba 17, “An alternative to tournament selection is fitness proportionate selection. In this approach an individual is selected proportional to its fitness.”)

In reference to claim 8.

- “8. The method of claim 1,” (preamble)

Zoph teaches:
- “wherein the candidate architectures are represented as a computation graph (Zoph Figure 4) that transforms one input tensor into an output tensor of the same shape.” (Zoph 8, “Operations in a normal cell have a stride of one and do not change the dimensions of the feature maps”)

Motivation to combine Wistuba, Dong, Real, Xie, and Singh with Zoph: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, and Zoph. Wistuba, Dong, Real, Xie, and Singh disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Zoph discloses a cell-based approach to neural architecture search. One would be motivated to combine these references because Zoph offers an in-depth look at alternatives for operating a subcomponent of the initial disclosure. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

In reference to claim 9.

- “9. The method of claim 8,” (preamble)

Wistuba-Liu teaches:
- “wherein the computation graph includes a set of initial nodes representing tensors and a set of intermediate nodes representing outputs of primitive operations from a set of primitive operations.” (Wistuba-Liu Figure 6)

Motivation to combine Wistuba, Dong, Real, Xie, Singh, and Zoph with Wistuba-Liu: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, Zoph, and Wistuba-Liu. Wistuba, Dong, Real, Xie, Singh, and Zoph disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Wistuba-Liu discloses hierarchical representations for efficient neural architecture search. One would be motivated to combine these references because the disclosure of Wistuba-Liu offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.
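The computation-graph representation and random mutation recited in claims 8-10 (initial nodes hold tensors, intermediate nodes apply primitive operations to predecessors; mutation picks a random intermediate node, a random new primitive operation, and random new predecessors) can be illustrated with a toy sketch. All names here are hypothetical and not drawn from the cited art:

```python
import random

PRIMITIVES = ["add", "mul", "max", "neg"]  # toy primitive-operation set

def make_graph(num_initial=2, num_intermediate=3, rng=None):
    """Build a toy DAG: nodes 0..num_initial-1 are input tensors; each
    intermediate node applies a primitive op to earlier-numbered nodes."""
    rng = rng or random.Random(0)
    graph = []
    for i in range(num_intermediate):
        node_id = num_initial + i
        preds = rng.sample(range(node_id), k=min(2, node_id))
        graph.append({"id": node_id, "op": rng.choice(PRIMITIVES), "preds": preds})
    return graph

def mutate(graph, rng=None):
    """Claim-10-style mutation: random intermediate node, random new op,
    random new predecessors (always earlier nodes, keeping the graph acyclic)."""
    rng = rng or random.Random(1)
    node = rng.choice(graph)                       # select an intermediate node at random
    node["op"] = rng.choice(PRIMITIVES)            # select a new primitive op at random
    node["preds"] = rng.sample(range(node["id"]), k=min(2, node["id"]))  # new predecessors
    return graph

g = make_graph()
g2 = mutate(g)
```

Sampling predecessors only from lower-numbered nodes keeps the mutated graph a valid DAG, matching the examiner's note that edge operations alter the predecessor(s) of a node.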
In reference to claim 10.

- “10. The method of claim 9, wherein generating the new architecture comprises:” (preamble)

Liu teaches:
- “selecting an intermediate node at random;” (Liu 4, steps 1-4)
- “selecting a new operation for the selected node from the set of primitive operations at random;” (Liu 4, step 5)
- “and selecting new predecessors for the selected node at random.” (Liu 4, “Add a new edge”, “Alter an existing edge”, “Remove an existing edge”; the examiner notes that edge operations alter the predecessor(s) of a node)

Motivation to combine Wistuba, Dong, Real, Xie, Singh, Zoph, and Wistuba-Liu with Liu: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, Zoph, Wistuba-Liu, and Liu. Wistuba, Dong, Real, Xie, Singh, Zoph, and Wistuba-Liu disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Liu discloses hierarchical representations for efficient neural architecture search. One would be motivated to combine these references because the disclosure of Liu offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

In reference to claim 11.

- “11. The method of claim 1, wherein the training comprises:” (preamble)

Wistuba teaches:
- “rejecting any candidate architecture that has not achieved at least a threshold fitness when included in any of the two or more neural network architectures after a threshold number of training steps.” (Wistuba 17, “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”)

In reference to claim 12.

- “12. The method of claim 1, further comprising:” (preamble)

Stamoulis teaches:
- “rejecting any candidate architecture that is subject to numerical instability.” (Stamoulis 618, “For the STE version, while the variance appears smaller than sigmoid, it is important to note that we had to repeat the process multiple times to reach 20 completed searches due to encountered numerical instability issues with STE (exploding gradients).”; the examiner notes that repeating a search is being interpreted as rejecting the search from the previous architecture)

Motivation to combine Wistuba, Dong, Real, Xie, and Singh with Stamoulis: It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh, and Stamoulis. Wistuba, Dong, Real, Xie, and Singh disclose a survey of the state of the art of neural architecture search approaches and an in-depth analysis of genetic algorithms. Stamoulis discloses a specific approach to neural architecture search. One would be motivated to combine these references because the disclosure of Stamoulis offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (A) combining prior art elements according to known methods to yield predictable results.

In reference to claim 13.

- “13.
The method of claim 1, wherein selecting a final architecture for the normalization - activation layer based on the overall fitnesses for the candidate architectures comprises:” (preamble) Liu teaches: - “selecting a subset of candidate architectures having the highest overall fitnesses; and selecting a final architecture by evaluating the subset of candidate architectures on a target neural network task that is more computationally expensive than the corresponding neural network tasks (Liu 6, “Candidate models are trained on the training subset, and evaluated on the validation subset to obtain the fitness. Once the search process is over, the selected cell is plugged into a large model which is trained on the combination of training and validation sub-sets, and the accuracy is reported on the CIFAR-10 test set.”).” Motivation to combine Wistuba, Dong, Real, Xie, Singh with Liu. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Wistuba, Dong, Real, Xie, Singh and Liu. Wistuba, Dong, Real, Xie, Singh discloses a survey of the state of the art of neural network architectural search approaches and an in-depth analysis of genetic algorithms. Liu discloses hierarchical representations for efficient neural network architecture search. One would be motivated to combine these references because the disclosure of Liu offers a specific implementation of part of the system. Further, MPEP 2143 sets forth the Supreme Court rationales for obviousness including: (A) Combining prior art elements according to known methods to yield predictable results. In reference to claim 21. - “21. 
The system of claim 19, wherein the overall fitness includes all of the fitnesses for the two or more architectures and wherein selecting a candidate architecture comprises:” (preamble)

Wistuba teaches:

- “one or more candidate architectures from the subset that have overall fitnesses that are not dominated by any overall fitness of any other candidate architecture from the subset” (Wistuba 17: “Several different policies are used to achieve this, ranging from selecting only the best (elitist selection) to selecting all individuals.”);
- “and randomly selecting one of the one or more identified candidate architectures” (Wistuba 17: “An alternative to tournament selection is fitness proportionate selection. In this approach an individual is selected proportional to its fitness.”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY RYAN GILLESPIE, whose telephone number is (571) 272-1331. The examiner can normally be reached M-F, 8 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker A Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CODY RYAN GILLESPIE/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147
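Practitioner's note: the claim language at issue describes two standard steps of multi-objective evolutionary architecture search: discarding numerically unstable candidates (the limitation mapped to Stamoulis's exploding-gradient passage) and picking at random among the non-dominated (Pareto-optimal) candidates (the claim 21 limitation mapped to Wistuba's selection policies). The sketch below is purely illustrative of those two steps as generic techniques; it is not the applicant's implementation or anything disclosed in the cited references, and the function names, the fitness-dictionary shape, and the NaN/Inf instability test are all assumptions.

```python
import math
import random

def is_unstable(fitnesses):
    """Treat a candidate as numerically unstable if any measured fitness
    is NaN or infinite (e.g. from exploding gradients during training)."""
    return any(math.isnan(f) or math.isinf(f) for f in fitnesses)

def dominates(a, b):
    """Fitness vector a dominates b if a is at least as good on every
    objective and strictly better on at least one (higher is better)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def select_candidate(population, rng=random):
    """Filter out unstable candidates, keep the non-dominated (Pareto)
    front, then randomly select one architecture from that front.

    `population` maps an architecture identifier to a tuple of per-task
    fitnesses (one entry per evaluation task).
    """
    # Step 1: reject any candidate subject to numerical instability.
    stable = {k: v for k, v in population.items() if not is_unstable(v)}
    # Step 2: keep candidates whose overall fitness is not dominated
    # by any other candidate's overall fitness.
    front = [k for k, fa in stable.items()
             if not any(dominates(fb, fa)
                        for j, fb in stable.items() if j != k)]
    # Step 3: randomly select one of the non-dominated candidates.
    return rng.choice(front)
```

For example, given fitness vectors {"a": (0.9, 0.8), "b": (0.5, 0.4), "c": (0.7, nan), "d": (0.6, 0.9)}, candidate "c" is dropped as unstable, "b" is dominated by "a", and the selection is made at random between "a" and "d", which trade off against each other across the two objectives.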

Prosecution Timeline

Aug 05, 2022
Application Filed
Jul 23, 2025
Non-Final Rejection — §103
Oct 30, 2025
Examiner Interview Summary
Oct 30, 2025
Applicant Interview (Telephonic)
Nov 07, 2025
Response Filed
Feb 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583466
VEHICLE CONTROL MODULES INCLUDING CONTAINERIZED ORCHESTRATION AND RESOURCE MANAGEMENT FOR MIXED CRITICALITY SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12578751
DATA PROCESSING CIRCUITRY AND METHOD, AND SEMICONDUCTOR MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12561162
AUTOMATED INFORMATION TECHNOLOGY INFRASTRUCTURE MANAGEMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12536291
PLATFORM BOOT PATH FAULT DETECTION ISOLATION AND REMEDIATION PROTOCOL
2y 5m to grant Granted Jan 27, 2026
Patent 12393641
METHODS FOR UTILIZING SOLVER HARDWARE FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
76%
With Interview (+25.8%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 509 resolved cases by this examiner. Grant probability derived from career allow rate.
