Prosecution Insights
Last updated: April 19, 2026
Application No. 17/401,096

SYSTEM AND METHOD FOR SELECTING COMPONENTS IN DESIGNING MACHINE LEARNING MODELS

Non-Final OA: §101, §103
Filed
Aug 12, 2021
Examiner
LI, LIANG Y
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
DARWINAI ULC
OA Round
3 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (grants 167 of 273 resolved cases; +6.2% vs TC avg)
Interview Lift: strong, +69.1% among resolved cases with interview
Typical Timeline: 3y 5m avg prosecution; 26 currently pending
Career History: 299 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 273 resolved cases.
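The headline figures above follow from simple arithmetic on the career counts. A quick sanity check, using only the numbers shown in this report (variable names are invented):

```python
# Sanity-check the examiner statistics quoted above; inputs are the
# figures shown in this report, no external data.
granted, resolved = 167, 273

allow_rate = granted / resolved * 100   # career allowance rate, %
tc_average = allow_rate - 6.2           # implied TC average (+6.2% delta)

print(round(allow_rate, 1))   # -> 61.2 (displayed as "61%")
print(round(tc_average, 1))   # -> 55.0
```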

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to claims filed 3/24/2025. Claims 1-20 are pending.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 4/29/2025 has been entered.

Claim Objections

In the second amended portion of claim 20, “one or functions” should read “one or more functions”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Claim(s) 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 1 recites a “computing device” for performing the recited steps. However, a person of ordinary skill in the art may, under BRI, understand “computing device” to include any computing device, including virtual or software computing devices. Such an understanding is not excluded based on the use of “computing device” in the Specification, e.g., ¶0069. Applicant may amend to recite hardware elements (e.g., processor, display) to overcome this rejection. The dependent claims are rejected for the same reason.

Claim(s) 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
We analyze the claims according to the subject matter eligibility flowchart (MPEP 2106). As all claims recite statutory categories (hardware systems, methods), step 1 is answered affirmatively and we proceed to step 2. Claim 1 recites a technique of testing neural network performance by passing data or signals through a learning machine, gathering information about its performance based on its outputs, and performing substitutions on the learning machine. It is analogous to a mental process, that of making judgments based on observation. Furthermore, the processing of test signals through a neural network is, understood broadly but reasonably, merely the performance of mathematical operations, for example, that of tensor multiplication (applying weights) and applying a non-linear activation function. Hence, the claim is directed towards the mental process of observing, forming judgments on, and modifying a mathematical concept.

In particular: a system for selecting components for building graph-based learning machines, comprising: a reference learning machine comprising a set of components (As above, a reference learning machine can be understood as a set of component mathematical operations (tensor multiplication, activation function, etc.)); one or more test signals (test signals are inputs to the mathematical operation); and a computing device comprising a component analyzer configured to (A component analyzer is a mental process, that of forming judgments via observation. The incorporation of computing elements constitutes mere instructions to implement the abstract idea on a computer and hence does not constitute an integration into a practical application (2a-2).
Furthermore, the use of computing elements is well-understood, routine, and conventional (WURC) in the field of classification.): analyze, using the one or more test signals, the components in the reference learning machine (analyzing components is a mental process, that of observation and judgment); wherein the analyzing comprises: processing the one or more test signals through each of the components to generate one or more outputs, wherein each of the components comprises a set of unknown operations (processing test signals through a machine learning model is, as described above, a mathematical operation); extracting knowledge from the components of the reference learning machine, the extracted knowledge comprising at least one of a measure of how much each component changes a configuration of each of the one or more test signals to the one or more outputs or how many dimensions of the outputs for each component are orthogonal to each other (extracting knowledge is observation of how much change a component has performed and hence a mental process); and ranking the respective components in the reference learning machine in terms of the efficiency and effectiveness of the respective components based on the extracted knowledge (ranking components based on performance and effectiveness in view of the observations is a mental process, that of judgment); analyze operations in the reference learning machine and how the operations are interconnected (analyzing operations and interconnections is a mental process); generate a new component based on a first component of the reference learning machine (generation, modification of a new component, such as based on the shortcomings of an old component, is a mental process), the new component having a different architecture or a different set of operations from the first component based on the analysis of the components and the operations in the reference learning machine (altering the new component’s architecture or components is a mental process), the new component
having an improved accuracy or efficiency relative to the first component (repairing, modification of components is a mental process); and generating a new learning machine comprising the new component (substituting or generating a learning machine from the new component is a mental process).

Regarding claim 2: wherein the component analyzer is further configured to replace one or more of the components in the reference learning machine with one or more components that are more efficient or more effective (substitution of one component for another in a learning machine, such as via altering component parameters, is a mental process or a process performed with pen and paper).

Regarding claim 3: wherein the computing device further comprises a learning machine builder configured to generate a new learning machine after replacing the components of the reference learning machine with the components that are more efficient or more effective (substitution of one component for another in a learning machine, such as via altering component parameters, is a mental process or a process performed with pen and paper).

Regarding claim 4: a graph component comprising at least one of a node, a set of nodes, or a group of smaller graph components which, given input data, produces a set of outputs (the visualizing and analysis of a set of nodes, a node, etc. given input is a mental process that can be visualized in the mind or performed with pen and paper).

Regarding claim 5: a combination of graph components that model data (the visualizing, design, and evaluation of graph components modeling data is a mental process).

Regarding claim 6: wherein the graph components are included in a pool of components and wherein the component analyzer is further configured to add the new component to the pool of components (defining and selecting from a pool of objects is a mental process).
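For orientation only, the "analyzing" and "extracted knowledge" limitations that the rejection characterizes as mental and mathematical steps can be sketched on a toy component. The weights, test signal, and measures below are invented stand-ins, not the applicant's or any reference's implementation:

```python
import math

# Toy "component": tensor multiplication (a 2x2 weight matrix) followed
# by a non-linear activation (ReLU), mirroring the rejection's framing.
W = [[2.0, 0.0], [0.0, 3.0]]

def component(signal):
    out = [sum(w * s for w, s in zip(row, signal)) for row in W]
    return [max(0.0, v) for v in out]   # ReLU activation

test_signal = [1.0, 1.0]
output = component(test_signal)          # process a test signal -> output

# Measure 1: how much the component changes the signal's configuration.
change = math.dist(test_signal, output)

# Measure 2: whether output dimensions are orthogonal (here the rows of
# W are orthogonal, so their dot product is zero).
dot = sum(a * b for a, b in zip(W[0], W[1]))

print(output, round(change, 3), dot)  # -> [2.0, 3.0] 2.236 0.0
```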
Regarding claim 7: wherein the computing device further comprises a graph component analyzer, wherein the component analyzer or the graph component analyzer measures an effectiveness of each component in the reference learning machine in terms of modeling performance (estimating effectiveness of a component is a process of judgment, evaluating, a mental process), and wherein the graph component analyzer ranks the components based on the effectiveness of the respective components (ranking, sorting of components are mental processes).

Regarding claim 8: wherein the computing device further comprises a graph component analyzer, wherein the component analyzer or the graph component analyzer evaluates an efficiency of each component in the reference learning machine in terms of computational complexity given the effectiveness of the respective components (evaluating components in terms of effectiveness given or in the context of prior conditions is a mental process, that of judging, evaluation).

Regarding claim 9: wherein the computing device further comprises a graph component analyzer, wherein the component analyzer or the graph component analyzer evaluates a performance of graph components listed in a pool of components to be added into the reference learning machine to improve a modeling accuracy and reduce a computational complexity of a new generated learning machine (Selecting parts from a list for inclusion into a set of included components is a mental process; selecting in view of making a component simpler, or reducing mistakes is a mental process, one that can be performed in the mind or with the aid of pen and paper).
Regarding claim 10: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder generates a new learning machine with improved efficiency and modeling accuracy compared to the reference learning machine by identifying the best graph components to be replaced in the reference learning machine and a list of potential components (Selecting parts from a list for inclusion into a set of included components is a mental process; selecting in view of making a component simpler, or reducing mistakes is a mental process, one that can be performed in the mind or with the aid of pen and paper).

Regarding claim 11: wherein an efficiency of a model may be defined as an inference speed or a memory footprint to process an input signal (estimating performance metrics or size of a model, or how large the model is, is a mental process, that of evaluating, judgment).

Regarding claim 12: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder generates a new graph component from scratch which is not in a pool of components to improve a performance of the reference learning machine (generating a new graph to optimize a learning machine is a mental process, as a human can generate graphs having various properties in the mind or with the aid of pen and paper).

Regarding claim 13: wherein a performance of the reference learning machine is measured in terms of functional accuracy or inference speed (estimating performance metrics or size of a model is a mental process, that of evaluating, judgment).
Regarding claim 14: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder identifies a number of graph components and layers in the reference learning machine, given specific performance (identifying graph components and layers in a learning machine, such as in the context of restrictions of performance parameters, is a mental process, that of evaluation, judgment, selection).

Regarding claim 15: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder builds a new learning machine with an optimized number of components (altering the learning machine graph by adding or removing components is a mental process or a process performed with pen and paper).

Regarding claim 16: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder tunes and re-designs the reference learning machine to provide better performance with a given learning machine complexity (optimizing a graph or a portion of a graph for better performance metrics in a certain complexity context is a mental process or one performed with the aid of pen and paper).

Regarding claim 17: wherein the system is configured to design a new learning machine for an image classification application (The additional element does not serve to meaningfully limit the application of the mental process as it merely applies the invention to a particular technological field; furthermore, the use of learning machines for image classification is routine and conventional).

Regarding claim 18: wherein the system generates a new learning machine to create a speech recognition system (The additional element does not serve to meaningfully limit the application of the mental process as it merely applies the invention to a particular technological field; furthermore, the use of learning machines for speech recognition is routine and conventional).
Regarding claim 19: wherein an image classifier generated from the reference learning machine is configured to receive an image of a handwritten digit into a network and make a decision on what class the image belongs to (The additional element does not serve to meaningfully limit the application of the mental process as it merely applies the invention to a particular technological field; furthermore, the use of learning machines for handwriting recognition is routine and conventional).

Regarding claim 20: a method of selecting components for building a graph-based learning machine, the method comprising: observing behavior of a component in a reference learning machine (observation of a component of a learning machine is a mental process), the observing behavior comprising processing one or more input signals through the component to generate one or more outputs, wherein the component performs one or more unknown operations on the one or more input signals to generate the one or more outputs (processing test signals through a machine learning model is, as described above, a mathematical operation); ranking the component in the reference learning machine in terms of the efficiency and effectiveness of the component (ranking according to various parameters is a mental process), the ranking of the component comprising applying one or more functions to the one or more input signals and the one or more outputs to measure at least one of how much the component changes a configuration of the one or more input signals to the one or more outputs or how many dimensions of the one or more outputs for the component are orthogonal to each other (extracting knowledge is observation of how much change a component has performed and hence a mental process); identifying issues in the component of the reference learning machine (judgment, evaluation is a mental process, such as comparison to threshold performances); and analyzing operations in the reference learning machine and how the
operations are interconnected (analyzing form and interconnections of a learning machine is a mental process); generating a new component based on the component in the reference learning machine, the new component having a different architecture or set of operations from the component based on the ranking, the identified issues, and the operation, the new component having an increased accuracy or efficiency relative to the component (generating a new component, such as to repair or improve the learning machine based on performance metrics, such as to address and repair issues, based on the operation of the older learning machine, is a mental process); and generating a graph-based learning machine based on the ranking and the identified issues, the generated graph-based learning machine comprising the new components (synthesizing the revised new learning machine is a mental process).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: “A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.”

Claim(s) 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Brothers (US 20160358070 A1) in view of Yao ("SM-NAS: Structural-to-modular neural architecture search for object detection", published 11/30/2019) in view of Guo ("When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks", published 8/5/202).
Regarding claim 1, Brothers discloses: a system for selecting components for building graph-based learning machines (0021: contemplates application to artificial neural networks), comprising: a reference learning machine comprising a set of components (fig.3:315, 0044: identifying aspects of a neural network to be tuned, 0046: determining components to be tuned, such as kernels and activations (figs. 4-6), layers (fig.7: pruning layers), etc.); one or more test signals (fig.3:315, 0044-45: the initial network is tested for performance via signals, with fig.3:310, 0040-43 contemplating various metrics including accuracy, power consumption, etc., hence, the learning machine is tested via input signals (structure of the machine, such as for worst-case analysis (0044), input sets for testing accuracy of output, etc.) to the analyzer in order to generate performance metrics); and a computing device comprising a component analyzer configured to: analyze, using the one or more test signals, the components in the reference learning machine (fig.3 shows overview of analyzing components in the learning machine, including steps 320-360 including identifying components for substitution and determining performance of that substitution and hence of that component, hence, analyzing components via test signals; see 0340-345, 0055); ranking the respective components in the reference learning machine in terms of the efficiency and effectiveness of the respective components (0044-45: the various components of the reference learning machine are ranked relative to various metrics and thresholds of 0040-43 (e.g., power consumption, accuracy) in order to determine whether the components fall below or above these thresholds and acting accordingly (0045: continuing or terminating the execution process); furthermore, this process is iteratively performed (320-360), hence, various components of each iteration, such as a particular other modified component (i.e., the component modified in
320) are ranked according to efficiency and effectiveness thresholds of 0040-43, hence, different parts of the reference machine are ranked at various times); analyze operations in the reference learning machine and how the operations are interconnected (performing substitutions (fig.3:320), reversing modifications (fig.3:350) constitute analyzing the order of operations and the location of a component in the neural network for substitution and modifications, hence, analyzing operations and interconnections of the neural network); generate a new component based on a first component of the reference learning machine (fig.4-6: new activations, kernels; fig.7: pruning of components), the new component having an improved accuracy or efficiency relative to the first component (0051-52: validating an improvement in the various metrics above, e.g., prediction value, power consumption, accuracy (0040)) and generating a new learning machine comprising the new component (fig.3: iterative operation of changing various components to generate new learning machines).
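The substitute/evaluate/revert loop that this mapping attributes to Brothers fig.3 (steps 320-360) can be sketched in a few lines. All names here are hypothetical, and `score` stands in for whichever metric of ¶¶0040-41 is being enforced:

```python
# Hypothetical sketch of an iterative substitute/evaluate/revert loop
# (in the spirit of Brothers fig.3, steps 320-360); names are invented.
def optimize(network, candidates, score):
    best = score(network)
    for component, substitute in candidates:
        trial = dict(network)
        trial[component] = substitute      # modify a component (cf. 320)
        trial_score = score(trial)         # re-test performance (cf. 335)
        if trial_score > best:             # keep the improvement...
            network, best = trial, trial_score
        # ...otherwise fall through, i.e. reverse the change (cf. 350)
    return network

# Toy usage: "network" maps component names to costs; score rewards low cost.
net = {"kernel": 5, "activation": 2}
result = optimize(net, [("kernel", 3), ("activation", 4)],
                  lambda n: -sum(n.values()))
print(result)  # -> {'kernel': 3, 'activation': 2}
```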
Brothers does not disclose: the new component having a different architecture or a different set of operations from the first component based on the analysis of the components and the operations in the reference learning machine (Brothers’s disclosure of changing convolution kernels (fig.4, 6), activation functions (fig.5), or removing entire components via pruning (fig.7) cannot be said to be a different architecture or set of operations); wherein the analyzing comprises: processing the one or more test signals through each of the components to generate one or more outputs, wherein each of the components comprises a set of unknown operations; extracting knowledge from the components of the reference learning machine, the extracted knowledge comprising at least one of a measure of how much each component changes a configuration of each of the one or more test signals to the one or more outputs or how many dimensions of the outputs for each component are orthogonal to each other; wherein the ranking occurs based on the extracted knowledge.

Yao discloses: the new component having a different architecture or a different set of operations from the first component based on the analysis of the components and the operations in the reference learning machine (Yao contemplates a technique of Neural Architecture Search via a first coarse substitution (fig.2: S1: Structure-level) followed by a more fine-grained modular-level search (fig.2: S2), both stages being evaluated via speed (S1: inference time, S2: FLOPS) and accuracy (mAP, see fig.1), hence, Yao contemplates the substitution of new components having a different architecture and set of operations from another, first component, based on the analysis of components (e.g., determining speed and accuracy) in the learning machine; see also §3.2.2 contemplating a S2 search space comprising various channel size architectures encoded as a string).
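Yao's two-stage search, as cited, evaluates candidates on accuracy and speed and keeps the Pareto-optimal set. A minimal sketch of that selection rule, with invented candidate values:

```python
# Minimal Pareto-front selection over (accuracy, inference time), in the
# spirit of the accuracy/speed trade-off cited from Yao. Values invented.
candidates = [  # (architecture, accuracy_mAP, inference_time)
    ("A", 0.40, 50), ("B", 0.45, 60), ("C", 0.38, 55), ("D", 0.50, 90),
]

def dominated(c, pool):
    # c is dominated if another candidate is at least as accurate AND at
    # least as fast, and strictly better on at least one axis.
    return any(o != c and o[1] >= c[1] and o[2] <= c[2]
               and (o[1] > c[1] or o[2] < c[2]) for o in pool)

front = [c[0] for c in candidates if not dominated(c, candidates)]
print(front)  # -> ['A', 'B', 'D']  ("C" is dominated by "A")
```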
It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Brothers by incorporating the architecture search technique of Yao. Both concern the art of neural network optimization, and the incorporation would, according to Yao, have increased optimization efficiency and speed of the network via a coarse-to-fine approach to efficiently lift the Pareto front (§1, last ¶). Brothers modified by Yao does not disclose the remaining limitations.

Guo discloses: wherein the analyzing comprises: processing the one or more test signals through each of the components to generate one or more outputs, wherein each of the components comprises a set of unknown operations (§3.2: Robustness evaluation: adversarial test samples are passed through to generate outputs, the components comprising a set of unknown operations when performed on the adversarial samples, i.e., whether a mislabeling will occur, per eq.1 (p.629)); extracting knowledge from the components of the reference learning machine, the extracted knowledge comprising at least one of a measure of how much each component changes a configuration of each of the one or more test signals to the one or more outputs or how many dimensions of the outputs for each component are orthogonal to each other (ibid: knowledge in the form of test result performance is extracted, the knowledge comprising performance under adversarial samples based on a white-box adversary, the measure comprising how much each component changes a configuration of the one or more input test signals since, by eq.1, misclassification based on adversarial perturbations comprises too large a change occurring within a small ball of less than p distance around x); wherein the ranking occurs based on the extracted knowledge (§3.3 ¶4-5).
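The robustness criterion cited from Guo's eq.1 (no label flip anywhere within a small ball around the input) can be illustrated with a one-dimensional toy classifier. The threshold model, epsilon, and brute-force search below are invented for illustration only:

```python
# Toy illustration of an epsilon-ball robustness check (in the spirit of
# the criterion cited from Guo eq.1); the classifier is a stand-in.
def classify(x):
    return 1 if x >= 0.5 else 0   # toy 1-D threshold classifier

def robust_at(x, epsilon, steps=100):
    label = classify(x)
    # Brute-force the 1-D epsilon-ball around x: robust only if no
    # perturbation within [x - epsilon, x + epsilon] flips the label.
    return all(classify(x + epsilon * (2 * i / steps - 1)) == label
               for i in range(steps + 1))

print(robust_at(0.9, 0.1), robust_at(0.55, 0.1))  # -> True False
```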
It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Brothers modified by Yao by incorporating the adversarial NAS technique of Guo. Both concern the art of neural network optimization, and the incorporation would have, according to Guo, improved the method’s ability to generate networks robust to adversarial attacks (§1).

Regarding claim 2, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the component analyzer is further configured to replace one or more of the components in the reference learning machine with one or more components that are more efficient or more effective (0051-52, fig.3:320, 350-355: components are replaced by more effective components based on performance metrics; Yao fig.3-4: new Pareto front of accuracy and speed is reached via iterative training in the S1 structure and S2 modular-level searches).

Regarding claim 3, Brothers modified by Yao modified by Guo discloses the system of claim 2, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder configured to generate a new learning machine after replacing the components of the reference learning machine with the components that are more efficient or more effective (Brothers fig.3:325: modified neural networks are iteratively generated with the new components causing increase in efficiency and effectiveness; Yao figs. 3-4: map of new learning machines generated via the evolutionary NAS algorithm).

Regarding claim 4, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above.
Brothers modified by Yao further discloses: a graph component comprising at least one of a node, a set of nodes, or a group of smaller graph components which, given input data, produces a set of outputs (Brothers 0026 gives overview of a neural network comprising a set of layers, a layer being a set of nodes or a set of smaller graph components, each of which takes input from a previous layer and feeds output into the next; Yao fig.2 contemplates graph components being modules (S1) or layers (S2), both modules and layers being a node, a set of nodes, and a set of smaller graph components for producing output).

Regarding claim 5, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: a combination of graph components that model data (Brothers 0026: neural network constitutes a combination of layer graph components that model and transform data; Yao fig.2).

Regarding claim 6, Brothers modified by Yao modified by Guo discloses the system of claim 5, as described above. Brothers modified by Yao further discloses: wherein the graph components are included in a pool of components and wherein the component analyzer is further configured to add the new component to the pool of components (fig.3: 320-325: the process of integrating a new component into a neural network pool constitutes defining and adding a new component to a pool; Yao fig.2, §3.2.2: given the coarse selection of modules in S1, new components for each stage encoded via a string (¶4) are added to the pool of components to test).

Regarding claim 7, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above.
Brothers modified by Yao modified by Guo further discloses: wherein the computing device further comprises a graph component analyzer, the component analyzer or the graph component analyzer measures an effectiveness of each component in the reference learning machine in terms of modeling performance (Brothers fig.3:315 (initial measure of effectiveness), 335, 350 (iterative measures of performance), hence, components of the reference learning machine are measured with respect to threshold metrics 0040-41; Yao fig.2: each component is analyzed in terms of accuracy and speed in a series of learning machines comprising substitutions of each component in S1; Guo §3.2: Robustness Evaluation), and wherein the graph component analyzer ranks the components based on the effectiveness of the respective components (Brothers fig.3:315, 335, 350, 0040-41: ranking component performance with respect to effectiveness thresholds in order to generate optimal components; Yao figs. 2-4: graph of speed and accuracy constitutes a ranking of each component; Guo §3.3 ¶4-5).

Regarding claim 8, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above.
Brothers modified by Yao further discloses: wherein the computing device further comprises a graph component analyzer, wherein the component analyzer or the graph component analyzer evaluates an efficiency of each component in the reference learning machine in terms of computational complexity (Brothers 0040-41 discloses performance analysis that includes runtime measurements, a measure of computational complexity; Yao fig.2-4: inference time or FLOPS in S1 and S2 respectively is a measure of computational complexity, the analysis carried out for a network comprising each component in the reference learning machine; Guo §3.2-3.3: measuring and ranking effectiveness of each component) given the effectiveness of the respective components (Brothers fig.3 discloses an initial measure of effectiveness (315) or iterative measures of effectiveness (335, 350), hence, determining efficiency in the context of given effectiveness levels (e.g., accuracy, runtime, throughput of 0040-41); Yao fig.2-4: given the evolving Pareto front, the effectiveness of each machine is evaluated).

Regarding claim 9, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above.
Brothers modified by Yao further discloses: wherein the computing device further comprises a graph component analyzer, the component analyzer or the graph component analyzer evaluates a performance of graph components listed in a pool of components to be added into the reference learning machine to improve a modeling accuracy and reduce a computational complexity of a new generated learning machine (Brothers fig.3:320-360 shows an iterative process where components from a pool of components (e.g., kernels (0047, fig.4), activation functions (fig.5), layers (fig.6, 0068-69)) are substituted into the network and the performance of the component analyzed (fig.3:335, 350), these substitutions reducing the complexity of the network by, for example, reducing components, reducing calculations, etc., with 0040 contemplating thresholds that require increases in accuracy; Yao fig.2: shows pools of components to test in S1; §3.2.2: given the coarse selection of modules in S1, new components for each stage encoded via a string (¶4) are added to the pool of components to test).

Regarding claim 10, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder generates a new learning machine with improved efficiency and modeling accuracy compared to the reference learning machine (Brothers 0040-41 contemplates various thresholds for a learning machine to reach including increases in accuracy threshold and increase in efficiency (power consumption, runtime, throughput, etc.)
when generating a new learning machine via the iterative process of fig.3:315-360; Yao fig.2 gives overview of learning machine builder) by identifying the best graph components to be replaced in the reference learning machine and a list of potential components (Brothers fig.4-11 contemplate various components to replace from a list of components including kernels, activation functions, kernel groups, hence, identifying candidate best graph components to replace and a list of potential components (e.g., 0064: kernel candidates, 0065: activation candidates, 0077: pruning candidates replaced with updated kernels, activation functions, updated blocks); Yao fig.2, fig.3-4, identifying optimal modules for S1 and S2 based on accuracy and speed mapping).

Regarding claim 11, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao modified by Guo further discloses: wherein an efficiency of a model may be defined as an inference speed (Brothers 0041: runtime thresholds, throughput threshold constitute inference speed) or a memory footprint to process an input signal (Yao fig.2-4: mAP and FLOPS or Inference time in the pareto graphs).

Regarding claim 12, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder generates a new graph component from scratch which is not in a pool of components to improve a performance of the reference learning machine (Brothers fig.10 discloses dynamic generation of kernel groupings from analyzing kernels of the network in order to generate a new convolutional kernel / layer with the aim of meeting the performance metrics of 0040-41; Yao §3.2.2 dynamically generating architectures in S2 via string-encodings based on the modules selected in S1).
Regarding claim 13, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein a performance of the reference learning machine is measured in terms of functional accuracy or inference speed (Brothers 0040-41: runtime and throughput (inference speed), accuracy; Yao figs. 2-4: mAP and FLOPS, inference time).

Regarding claim 14, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder identifies a number of graph components and layers in the reference learning machine, given specific performance (Brothers fig.3:315, 335, 350: the specific performance of a reference network is iteratively determined in preparation for a substitution stage; fig.3:320: portions or components (layers, activation functions, kernels, etc., see fig.4-11) of the network are identified as candidates for substitution; Yao figs.3-4 show the specific performance of the Pareto front for the first structural search layer; §3.3 shows graph components and layers identified in the modular tuning layer, the components identified to push the Pareto boundaries of the S1 search in figs. 3-4).

Regarding claim 15, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder builds a new learning machine with an optimized number of components (Brothers 0076: pruning or reducing node components; fig.7-8, 0077-80: pruning components, hence, building a learning machine with an optimized, lesser number of components; Yao fig.2: building a learning machine using an optimized number of components per the Pareto graph for the S1 and S2 phases).
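The Pareto-front selection cited throughout from Yao figs. 2-4 (candidate machines plotted by speed and accuracy, with dominated candidates discarded) can be sketched as below. This is an illustrative sketch only; `pareto_front` and its tuple format are invented for it.

```python
# Illustrative Pareto-front filter (cf. Yao figs. 2-4 as cited):
# a candidate survives only if no other candidate is at least as
# good on both axes and strictly better on at least one.

def pareto_front(candidates):
    """candidates: list of (name, speed, accuracy), higher is better
    on both axes. Returns the non-dominated subset in input order."""
    front = []
    for name, speed, acc in candidates:
        dominated = any(
            s2 >= speed and a2 >= acc and (s2 > speed or a2 > acc)
            for _, s2, a2 in candidates
        )
        if not dominated:
            front.append((name, speed, acc))
    return front
```

Machines falling below this front are the "speed and accuracy issues" the rejection reads onto the identifying step of claim 20.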
Regarding claim 16, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the computing device further comprises a learning machine builder, wherein the learning machine builder tunes and re-designs the reference learning machine to provide better performance with a given learning machine complexity (Brothers 0042 contemplates improving performance via various adjustments without reducing accuracy; fig.5, 0065 contemplates replacing activation functions, hence, redesigning the machine given a certain network complexity; Yao fig.2: given the complexity of the five stages of the S1 search (input, backbone, etc.) and the complexity of the initial S1 learning machine in the S2 search, a learning machine is tuned given these parameters for speed and accuracy).

Regarding claim 17, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao further discloses: wherein the system is configured to design a new learning machine for an image classification application (Yao §1: real-time object detection).

Claim(s) 18 are rejected under 35 U.S.C. 103 as being unpatentable over Brothers (US 20160358070 A1) in view of Yao ("SM-NAS: Structural-to-modular neural architecture search for object detection", published 11/30/2019) in view of Hua (US 20200257961 A1).

Regarding claim 18, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao modified by Guo does not disclose the limitations of claim 18. Hua discloses: wherein the system generates a new learning machine to create a speech recognition system (0020). It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Brothers modified by Yao modified by Guo by incorporating the speech recognition application of Hua.
Both concern the art of neural network optimization, and the incorporation would have allowed application to common recognition tasks, such as recognizing utterance representations (0020).

Claim(s) 19 are rejected under 35 U.S.C. 103 as being unpatentable over Brothers (US 20160358070 A1) in view of Yao ("SM-NAS: Structural-to-modular neural architecture search for object detection", published 11/30/2019) in view of Dolfing (US 20140363074 A1).

Regarding claim 19, Brothers modified by Yao modified by Guo discloses the system of claim 1, as described above. Brothers modified by Yao modified by Guo does not disclose the limitations of claim 19. Dolfing discloses: wherein an image classifier generated from the reference learning machine is configured to receive an image of a handwritten digit into a network and make a decision on what class the image belongs to (fig.6, 0149-150). It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Brothers modified by Yao modified by Guo by incorporating the handwriting application of Dolfing. Both concern the art of neural networks, and the incorporation would have, according to Dolfing, provided efficient multi-script handwriting recognition functionality to users to meet user demand (0002-3).

Claim(s) 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yao ("SM-NAS: Structural-to-modular neural architecture search for object detection", published 11/30/2019) in view of Guo ("When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks", published 8/5/2020).
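The robustness criterion the rejection attributes to Guo eq.1 amounts to checking whether any perturbation inside an l_p ball of radius eps flips the predicted label. A hedged sketch, with `empirically_robust` and its arguments invented for illustration:

```python
import numpy as np

# Illustrative check of the robustness criterion described for Guo
# eq.1: a prediction at x is (empirically) robust if no tested
# perturbation within an l_p ball of radius eps changes the label.

def empirically_robust(model, x, perturbations, eps, p=np.inf):
    """model: callable returning a class label for an input array.
    Only perturbations inside the eps-ball are considered."""
    label = model(x)
    for delta in perturbations:
        if np.linalg.norm(delta.ravel(), ord=p) <= eps:
            if model(x + delta) != label:
                return False                 # perturbation flips the label
    return True
```

Guo ranks candidate architectures by aggregate performance on such adversarial samples; the sketch reduces that ranking to a single pass/fail check per input.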
Regarding claim 20, Yao discloses: a method of selecting components for building a graph-based learning machine (fig.2 gives an overview of NAS for generating a neural network), the method comprising: observing behavior of a component in a reference learning machine (fig.2: behaviors of the various components are observed in S1 and S2 and mapped according to a Pareto front in speed and accuracy); ranking the component in the reference learning machine in terms of the efficiency and effectiveness of the component (fig.2: graphing learning machines comprising said components in terms of efficiency (speed) and effectiveness (accuracy) and identifying a Pareto front in the S1 structural and S2 modular searches constitutes a ranking); identifying issues in the component of the reference learning machine (figs. 2-4 show various learning machines falling below the Pareto front, hence, speed and accuracy issues are identified); and analyzing operations in the reference learning machine and how the operations are interconnected (fig.2: given a search space defining substitutable components and their connections in S1 and S2, possible changes and substitutions are identified; see also §3.2.2 ¶2 contemplating analysis of modular structures for generating heuristics for the S2 search space); generating a new component based on the component in the reference learning machine, the new component having a different architecture or a set of operations from the component based on the ranking (fig.2, §3.2.2: new components are generated for substitution in S2, the new components having the different architectures and operations shown, the new components being generated based on the S1 ranking), the identified issues, and the operations, the new component having an increased accuracy or efficiency relative to the component (fig.3-4: based on the identified Pareto front in S1, new components with a better front are generated having different operations, the new components having better accuracy and efficiency); and generating the graph-based learning machine based on the ranking and the identified issues, the generated graph-based learning machine comprising the new component (fig.2-4: the new components are generated based on the identified issues for the new learning machine for testing and evaluation).

Yao does not disclose: the observing the behavior comprising processing one or more input signals through the component to generate one or more outputs, wherein the component performs one or more unknown operations on the one or more input signals to generate the one or more outputs; the ranking of the component comprising applying one or functions to the one or more input signals and the one or more outputs to measure at least one of how much the component changes a configuration of the one or more input signals to the one or more outputs or how many dimensions of the one or more outputs for the component are orthogonal to each other.

Guo discloses: wherein the analyzing comprises: the observing the behavior comprising processing one or more input signals through the component to generate one or more outputs, wherein the component performs one or more unknown operations on the one or more input signals to generate the one or more outputs (§3.2: robustness evaluation: adversarial test samples are passed through to generate outputs, the components comprising a set of unknown operations when performed on the adversarial samples, i.e., whether a mislabeling will occur, per eq.1 (p.629)); the ranking of the component comprising applying one or functions to the one or more input signals and the one or more outputs to measure at least one of how much the component changes a configuration of the one or more input signals to the one or more outputs or how many dimensions of the one or more outputs for the component are orthogonal to each other (ibid: knowledge in the form of test result performance is extracted, the knowledge comprising performance under adversarial samples based on a white-box adversary, the measure comprising how much each component changes a configuration of the one or more input test signals since, by eq.1, misclassification based on adversarial perturbations comprises a too-large change occurring within a small ball of less than p distance around x).

It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Yao by the adversarial NAS technique of Guo. Both concern the art of neural network optimization, and the incorporation would have, according to Guo, improved the method's ability to generate networks robust to adversarial attacks (§1).

Response to Arguments

In the arguments, applicant argued:

1. The application describes a technical problem, which is that of improving performance and accuracy in learning machines. Claim 1 recites limitations directed to generating a new component with a different architecture or set of operations, which provides improved accuracy or efficiency. Hence, the currently amended limitations include meaningful limitations that incorporate claim 1 into a practical application. Examiner respectfully disagrees, for the reasons given in the rejection above. In particular, the claim as a whole being directed to an abstract idea, the additional elements recite mere computing elements which, being mere general instructions to apply the abstract idea, cannot be said to be incorporation into a practical application.

2. Claims 1 and 20 are not directed to a judicial exception, as the processing of test signals cannot be performed in the mind. Examiner respectfully disagrees. As described above, a learning machine as recited in the claim can be broadly and reasonably understood to be a sequence of mathematical operations, such as tensor multiplication and application of an activation function. As such, the claims are directed to a mathematical concept.

3. Because the operations are unknown, they cannot be processed by a human mind.
Examiner respectfully disagrees. Since knowing the structure of a learning machine is necessary for implementation on a computer, the BRI of "unknown" must include "having unknown qualities". Hence, a human mind can consider a learning machine having unknown qualities or characteristics in order to derive observations and form judgments.

4. Hence, the claim is not directed to a mental process. Examiner submits that, for the reasons given above in the rejection, the claims are directed to a mathematical concept.

5. For the prior art rejections, for claim 20, Yao does not disclose the newly added limitations. Examiner submits that applicant's arguments are moot in view of the newly applied art.

6. For claims 1, 20 and associated dependent claims, the art of record does not disclose the newly added limitations. Examiner submits that applicant's arguments are moot in view of the newly applied art.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Das (US 20210350203 A1) discloses a neural network architecture with dynamically substitutable blocks, see fig.8, 14.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIANG LI whose telephone number is (303)297-4263. The examiner can normally be reached Mon-Fri 9-12p, 3-11p MT (11-2p, 5-1a ET). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571)272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only.
Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The examiner is available for interviews Mon-Fri 6-11a, 2-7p MT (8-1p, 4-9p ET).

/LIANG LI/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Aug 12, 2021
Application Filed
Aug 24, 2024
Non-Final Rejection — §101, §103
Oct 17, 2024
Applicant Interview (Telephonic)
Oct 17, 2024
Response Filed
Oct 19, 2024
Examiner Interview Summary
Jan 25, 2025
Final Rejection — §101, §103
Mar 24, 2025
Response after Non-Final Action
Mar 24, 2025
Applicant Interview (Telephonic)
Mar 29, 2025
Examiner Interview Summary
Apr 29, 2025
Request for Continued Examination
May 06, 2025
Response after Non-Final Action
Nov 01, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596463
METHOD AND APPARATUS FOR IMAGE-BASED NAVIGATION
2y 5m to grant Granted Apr 07, 2026
Patent 12585716
INTELLIGENT RECOMMENDATION METHOD AND APPARATUS, MODEL TRAINING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12585375
GENERATING SNAPPING GUIDE LINES FROM OBJECTS IN A DESIGNATED REGION
2y 5m to grant Granted Mar 24, 2026
Patent 12580000
MULTITRACK EFFECT VISUALIZATION AND INTERACTION FOR TEXT-BASED VIDEO EDITING
2y 5m to grant Granted Mar 17, 2026
Patent 12561566
NEURAL NETWORK LAYER FOLDING
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
99%
With Interview (+69.1%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 273 resolved cases by this examiner. Grant probability derived from career allow rate.
