Prosecution Insights
Last updated: April 19, 2026
Application No. 17/190,724

SELECTING A NEURAL NETWORK BASED ON AN AMOUNT OF MEMORY

Final Rejection: §101, §102, §103
Filed: Mar 03, 2021
Examiner: CHUANG, SU-TING
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 5m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 52 granted / 101 resolved; -3.5% vs TC avg)
Interview Lift: +39.7% (strong; allowance rate with vs. without an interview, among resolved cases)
Typical Timeline: 4y 5m average prosecution; 28 applications currently pending
Career History: 129 total applications across all art units

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 101 resolved cases.
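The per-statute deltas above are consistent with a single Tech Center baseline: each examiner rate minus its displayed delta comes out to 40.0%. A minimal sketch of that arithmetic, noting that the 40.0% TC average is back-computed from the displayed figures rather than stated in the source:

```python
# Back-check the statute-specific deltas against one TC-average baseline.
# tc_avg is inferred from the displayed figures (e.g. 27.4% - (-12.6%) = 40.0%),
# not stated directly in the dashboard.
examiner_rates = {"101": 27.4, "103": 46.3, "102": 10.8, "112": 11.7}
tc_avg = 40.0

# delta = examiner's allowance rate minus the Tech Center average
deltas = {statute: round(rate - tc_avg, 1) for statute, rate in examiner_rates.items()}
for statute, delta in deltas.items():
    print(f"§{statute}: {examiner_rates[statute]}% ({delta:+.1f}% vs TC avg)")
```

All four computed deltas match the dashboard's displayed values, which is what suggests the common 40.0% baseline.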

Office Action

§101 §102 §103
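For orientation before the examiner's analysis: the independent claims recite obtaining a set of candidate edges and candidate operations, selecting among them based on one or more (memory) constraints, and generating a neural network from the selections. A minimal illustrative sketch of that flow, with hypothetical node names and memory costs; this is not the applicant's or Yu's actual implementation:

```python
# Hypothetical sketch of the claimed pipeline: obtain candidates, select
# under a memory constraint, generate a network. All names/costs invented.

# Feature nodes are (layer, scale) pairs; candidate edges connect them.
candidate_edges = [((0, 0), (1, 0)), ((0, 0), (1, 1)),
                   ((1, 0), (2, 0)), ((1, 1), (2, 0))]
# Candidate operations with an assumed working-memory cost (MB).
candidate_ops = {"conv_3x3x3": 48, "conv_p3d": 20, "conv_2d": 12}

def select(edges, ops, memory_budget_mb):
    """Keep only operations whose assumed cost fits the memory budget;
    a real topology search would also score and prune edges."""
    feasible_ops = {name: cost for name, cost in ops.items()
                    if cost <= memory_budget_mb}
    return list(edges), feasible_ops

def generate_network(edges, ops):
    """Assemble a toy network: assign the cheapest feasible op to each edge."""
    cheapest = min(ops, key=ops.get)
    return {"edges": edges, "op_per_edge": {e: cheapest for e in edges}}

edges, ops = select(candidate_edges, candidate_ops, memory_budget_mb=24)
network = generate_network(edges, ops)
```

Under the 24 MB budget the 48 MB operation is excluded, so the generated network uses only the cheaper operations — the shape of the dispute below is whether this kind of constrained selection is a "mental process" or a practical application.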
DETAILED ACTION

This action is in response to the communications filed on 07/21/2025, in which claims 1-11, 13-17, and 19-23 are amended; therefore, claims 1-26 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/27/2025 and 08/25/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 3 and 16 are objected to because of the following informalities:

• In claim 3, "wherein the one or more second edges and one or more second operations… satisfy one or more second memory constraints that are different from a first constraint" should be "wherein the one or more second edges and one or more second operations… satisfy one or more second memory constraints that are different from a first memory constraint".
• In claim 16, "select the one or more edges and one or more operations based, at least in part, on one or more that satisfy a memory constraint" should be "select the one or more edges and one or more operations that satisfy a memory constraint".

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-6 recite one or more processors. Claims 7-14 recite a system comprising processors. Claims 15-20 recite a datacenter comprising processors. Claims 21-26 recite a method.
Therefore, claims 1-6 are directed to a manufacture, claims 7-20 are directed to a machine, and claims 21-26 are directed to a process.

With respect to claims 1, 7, 15 and 21:

2A Prong 1: the claim recites a judicial exception.

• select one or more edges from the set of candidate edges and one or more operations from the set of candidate operations based, at least in part, on one or more constraints; and (mental process – evaluation or judgement, select edges and operations based on constraints)
• generate one or more neural networks with a topology of the plurality of topologies to perform (claims 1, 7 and 21) an image-based task / (claim 15) a medical image segmentation task based, at least in part, on the selected one or more edges and the selected one or more operations (mental process – evaluation or judgement, generate neural networks based on the selected edges and operations)

2A Prong 2: This judicial exception is not integrated into a practical application.

• (claim 1) circuitry / (claims 7 and 15) one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)
• obtain a set of candidate edges and a set of candidate operations, wherein the set of candidate edges and the set of candidate operations are usable to generate a plurality of neural networks of a plurality of topologies (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering, obtain edges and operations)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
• (claim 1) circuitry / (claims 7 and 15) one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)
• obtain a set of candidate edges and a set of candidate operations, wherein the set of candidate edges and the set of candidate operations are usable to generate a plurality of neural networks of a plurality of topologies (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering, obtain edges and operations; and WURC: receiving or transmitting data over a network – MPEP 2106.05(d)(II)(i))

With respect to claims 2, 8, 16 and 22:

2A Prong 1: the claim recites a judicial exception.

• (claim 2) perform a search to select the one or more edges and one or more operations based, at least in part, on one or more memory constraints
• (claim 8) perform a first search to select the one or more edges and one or more operations based, at least in part, on one or more first memory constraints
• (claim 16) perform a search to select the one or more edges and one or more operations based, at least in part, on one or more that satisfy a memory constraint
• (claim 22) selecting a first set of one or more operations and a first set of one or more edges for the one or more neural networks based, at least in part, on one or more first memory constraints
(mental process – evaluation or judgement, select edges and operations based on memory constraints)

2A Prong 2: This judicial exception is not integrated into a practical application.

• (claim 2) the circuitry / (claims 8 and 16) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
• (claim 2) the circuitry / (claims 8 and 16) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claims 3 and 17:

2A Prong 1: the claim recites a judicial exception.

• cause one or more second edges and one or more second operations to be selected from the set of candidate edges and the set of candidate operations or from a second set of candidate edges and set of candidate operation, wherein the one or more second edges and one or more second operations are different from the one or more edges and one or more operations and satisfy one or more second memory constraints that are different from a first (memory) constraint satisfied by the one or more edges and one or more operations (mental process – evaluation or judgement, select second edges and operations, which are different from the (first) edges and operations and satisfy second memory constraints)

2A Prong 2: This judicial exception is not integrated into a practical application.

• (claim 3) the circuitry / (claim 17) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• (claim 3) the circuitry / (claim 17) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claim 4:

2A Prong 1: the claim recites a judicial exception.
• perform a search to select the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations in accordance with a set of one or more search parameters determined at least in part on an amount of memory to be used by the one or more neural networks (mental process – evaluation or judgement, select edges and operations according to search parameters)

2A Prong 2: This judicial exception is not integrated into a practical application.

• the circuitry (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• the circuitry (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claim 5:

2A Prong 1: the claim recites a judicial exception.

• wherein a percentage of a maximum memory usage of operations associated with one or more candidate feature nodes of a search space comprising the plurality of neural networks is less than or equal to an amount of memory (mental process – evaluation or judgement, a part of a memory usage of operations is less than an amount of memory)

With respect to claims 6, 11, 18 and 24:

2A Prong 1: the claim recites a judicial exception.
• (claim 6) cause the one or more neural networks to be selected by performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task
• (claim 11) cause the one or more edges and one or more operations to be selected by performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task
• (claim 18) perform a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for the medical image segmentation task
• (claim 24) performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task
(mental process – evaluation or judgement, select/identify neural networks (with edges and operations) using a joint search of a topology search and a cell search)

2A Prong 2: This judicial exception is not integrated into a practical application.

• (claim 6) the circuitry / (claims 11 and 18) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• (claim 6) the circuitry / (claims 11 and 18) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claims 9, 14, 19 and 23:

2A Prong 1: the claim recites a judicial exception.
• (claim 9) select a connection pattern, from a plurality of candidate connection patterns between a first layer and a second layer of the one or more neural networks with a topology of the plurality of topologies, based at least in part on probabilities of each of the plurality of candidate connection patterns
• (claim 14) select a connection pattern, from a plurality of candidate connection patterns between layers of the plurality of neural networks of a plurality of topologies, based at least in part on probabilities of the plurality of candidate connection patterns
• (claim 19) perform a search of a search space to cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected, wherein the search comprises selecting a connection pattern between layers of the one or more neural networks, from a plurality of candidate connection patterns, based at least in part on probabilities of the plurality of candidate connection patterns
• (claim 23) performing a search of a search space to select the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations, wherein performing the search comprises selecting a connection pattern, from a plurality of candidate connection patterns between layers of the one or more neural networks, based at least in part on probabilities of the plurality of candidate connection patterns
(mental process – evaluation or judgement, select edges and operations, and select a connection pattern based on probabilities of the connection patterns)

2A Prong 2: This judicial exception is not integrated into a practical application.
• (claims 9, 14 and 19) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• (claims 9, 14 and 19) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claim 10:

2A Prong 1: the claim recites a judicial exception.

• select a feature node from a set of candidate feature nodes for one or more layers of the one or more neural networks with a topology of the plurality of topologies, wherein the set of candidate feature nodes comprises feature nodes at different image scales that comprise a plurality of candidate edges that connect to a feature node in a previous layer (mental process – evaluation or judgement, select a node from a set of nodes with different scales)

2A Prong 2: This judicial exception is not integrated into a practical application.

• the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claims 12 and 25:

2A Prong 2: This judicial exception is not integrated into a practical application.

• wherein the one or more neural networks are to perform an image segmentation task (a particular technological environment or field of use – MPEP 2106.05(h))

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
• wherein the one or more neural networks are to perform an image segmentation task (a particular technological environment or field of use – MPEP 2106.05(h))

With respect to claims 13 and 20:

2A Prong 1: the claim recites a judicial exception.

• (claim 13) cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected by performing a search of a topology search space comprising a plurality of candidate edges that connect candidate feature nodes of a plurality of layers and a cell search space comprising a plurality of candidate operations
• (claim 20) perform a search of a search space to cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected, wherein the search comprises selecting a connection pattern from a feasible set of candidate connection patterns between layers of the one or more neural networks, wherein each feasible connection pattern in the feasible set of candidate connection patterns comprises valid input connections and output connections between the layers
(mental process – evaluation or judgement, select edges that connect nodes or feasible patterns and operations)

2A Prong 2: This judicial exception is not integrated into a practical application.

• (claims 13 and 20) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

• (claims 13 and 20) the one or more processors (mere instructions to apply an exception - MPEP 2106.05(f), (2) invoking general computers as a tool to perform a process)

With respect to claim 26:

2A Prong 1: the claim recites a judicial exception.
• performing a search of a multi-scale topology search space by converting the multi-scale topology search space into a sequential search space comprising a super node for each respective layer of a plurality of layers, wherein each super node comprises a set of candidate feature nodes at the respective layer (mental process – evaluation or judgement, converting the multi-scale search to a sequential search with a super node for each layer and feature nodes at the layer)

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 6-8, 11-13, 15-16, 18, 20-22 and 24-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yu ("C2FNAS: Coarse-to-Fine Neural Architecture Search for 3D Medical Image Segmentation", 2020-04-20).

In regard to claims 1, 7, 15 and 21, Yu teaches:

One or more processors comprising circuitry to: (Yu, p. 7 "The coarse search stage takes 5 days with 64 NVIDIA V100 GPUs with 16GB memory. In fine stage, the super network training costs 10 hours with 8 GPUs...")

obtain a set of candidate edges and a set of candidate operations, wherein the set of candidate edges and the set of candidate operations are usable to generate a plurality of neural networks of a plurality of topologies; (Yu, p. 2 "Figure 2... Each path from the left-most node to the right-most node is a candidate architecture. Each color represents one category of operations, e.g. depthwise conv, dilated conv, or 2D/3D/P3D conv which are more common in medical image area...
The macro-level topology is determined by coarse stage search, while the micro-level operations are further selected in fine stage search."; p. 3 "we develop a coarse-to-fine neural architecture search method for automatically designing 3D segmentation networks [generate neural networks of topologies]… the architecture search space A consists of topology search space S, [all the paths in each network topology, obtain a set of candidate edges] which is represented by a directed acyclic graph (DAG), and cell operation space C, [obtain a set of candidate operations] which is represented by the color of each node in the DAG. Each network candidate is a sub-graph s ∈ S with color scheme c ∈ C...")

select one or more edges from the set of candidate edges and one or more operations from the set of candidate operations based, at least in part, on one or more constraints; and (Yu, p. 4, 3.3. Coarse Stage: Macrolevel Search "Due to memory constraint [based on one or more constraints] and fairness problem... Thus, it is necessary to reduce the search space... We revisit the successful medical image segmentation networks, and we find they all share something in common: (1) a U-shape encoder-decoder topology and (2) skip-connections between the down-sampling paths and the up-sampling paths. We incorporate these priors into our method and prune the search space accordingly. An illustration of how the priors help prune search space is shown in Fig. 3. Therefore, the search space S is pruned to S' [select edges from the set of edges S] ... S' = PriorPrune(s), (4)"; p. 5, 3.4. Fine Stage: Microlevel Search "Given the tense memory requirement [based on one or more constraints] of 3D models... The set of possible operations, O, [select operations from the set of operations C or O] consisting of the following 3 choices: (1) 3x3x3 3D convolution; (2) 3x3x1 followed by..."; p.
5 "Therefore, the final network architecture N(s*; c*; w) is constructed."; Fig. 3 pruning paths in the coarse stage [select edges], and Fig. 5 operations searched in fine stage [select operations])

generate one or more neural networks with a topology of the plurality of topologies to perform (claims 1, 7 and 21) an image-based task / (claim 15) a medical image segmentation task based, at least in part, on the selected one or more edges and the selected one or more operations. (Yu, p. 2 "we propose a coarse-to-fine neural architecture search scheme for 3D medical image segmentation [an image-based task, a medical image segmentation task] (see Fig. 2)."; p. 3 "we develop a coarse-to-fine neural architecture search method for automatically designing 3D segmentation networks [generate neural networks of topologies]"; p. 7 "The final network architecture based on the topology searched in coarse stage and operations searched in fine stage [based on selected edges and operations] is shown in Fig. 5.")

Claims 7, 15 and 21 recite substantially the same limitations as claim 1; therefore, the rejection applied to claim 1 also applies to claims 7, 15 and 21. In addition, Yu teaches:

one or more processors (Yu, p. 7 "The coarse search stage takes 5 days with 64 NVIDIA V100 GPUs with 16GB memory. In fine stage, the super network training costs 10 hours with 8 GPUs...")

In regard to claims 2, 8, 16 and 22, Yu teaches:

• (claim 2) wherein the circuitry is further to perform a search to select the one or more edges and one or more operations based, at least in part, on one or more memory constraints.
• (claim 8) wherein the one or more processors are further to perform a first search to select the one or more edges and one or more operations based, at least in part, on one or more first memory constraints.
• (claim 16) wherein the one or more processors are further to perform a search to select the one or more edges and one or more operations based, at least in part, on one or more that satisfy a memory constraint.
• (claim 22) further comprising selecting a first set of one or more operations and a first set of one or more edges for the one or more neural networks based, at least in part, on one or more first memory constraints.

(Yu, p. 4, 3.3. Coarse Stage: Macrolevel Search "Due to memory constraint [based on one or more memory constraints, satisfy a memory constraint] and fairness problem... Thus, it is necessary to reduce the search space... We incorporate these priors into our method and prune the search space accordingly. An illustration of how the priors help prune search space is shown in Fig. 3. Therefore, the search space S is pruned to S'... S' = PriorPrune(s), (4)"; p. 5, 3.4. Fine Stage: Microlevel Search "Given the tense memory requirement [based on one or more memory constraints, satisfy a memory constraint] of 3D models... The set of possible operations, O, consisting of the following 3 choices: (1) 3x3x3 3D convolution; (2) 3x3x1 followed by..."; Fig. 3 pruning paths in the coarse stage [select edges], and Fig. 5 operations searched in fine stage [select operations])

In regard to claims 6, 11, 18 and 24, Yu teaches:

• (claim 6) wherein the circuitry is further to cause the one or more neural networks to be selected by performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task.
• (claim 11) wherein the one or more processors further cause the one or more edges and one or more operations to be selected by performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task.
• (claim 18) wherein the one or more processors are to perform a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for the medical image segmentation task.
• (claim 24) further comprising performing a joint two-level search of a topology search space and a cell search space to identify the one or more neural networks for an image-based task.

(Yu, p. 2 "The macro-level topology is determined by coarse stage search, while the micro-level operations are further selected in fine stage search. [a joint two-level search of a topology search space and a cell search space]"; p. 3 "we develop a coarse-to-fine neural architecture search method for automatically designing 3D segmentation networks"; p. 5 "Therefore, the final network architecture N(s*; c*; w) is constructed.")

In regard to claims 12 and 25, Yu teaches:

wherein the one or more neural networks are to perform an image segmentation task. (Yu, p. 2 "we propose a coarse-to-fine neural architecture search scheme for 3D medical image segmentation [a medical image segmentation task] (see Fig. 2).")

In regard to claims 13 and 20, Yu teaches:

• (claim 13) wherein the one or more processors further cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected by performing a search of a topology search space comprising a plurality of candidate edges that connect candidate feature nodes of a plurality of layers and a cell search space comprising a plurality of candidate operations.
• (claim 20) wherein the one or more processors are to perform a search of a search space to cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected, wherein the search comprises selecting a connection pattern from a feasible set of candidate connection patterns between layers of the one or more neural networks, wherein each feasible connection pattern in the feasible set of candidate connection patterns comprises valid input connections and output connections between the layers.

(Yu, p. 4, 3.3. Coarse Stage: Macrolevel Search "An illustration of how the priors help prune search space is shown in Fig. 3. Therefore, the search space S is pruned to S' [select edges from the set of edges S] ... S' = PriorPrune(s), (4)"; p. 5, 3.4. Fine Stage: Microlevel Search "The set of possible operations, O, [select operations from the set of operations C or O] consisting of the following 3 choices: (1) 3x3x3 3D convolution; (2) 3x3x1 followed by..."; p. 2 "Figure 2... Each path from the left-most node to the right-most node [edges that connect nodes of layers] is a candidate architecture"; p. 4 "Figure 3... The grey nodes are eliminated entirely from the graph. Besides, many illegal paths have been pruned off as well. An example of illegal path and legal path [a feasible set of connection patterns, valid input and output connections] is shown as the orange line path and green line path separately."; Fig. 3 pruning paths in the coarse stage [select edges], and Fig. 5 operations searched in fine stage [select operations])

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-5 and 17 are rejected under 35 U.S.C.
103 as being unpatentable over Yu as applied to claims 1 and 15, and further in view of Fedorov ("SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers", 2019-05-28).

In regard to claims 3 and 17, Yu teaches:

wherein the circuitry is further to cause one or more second edges and one or more second operations to be selected from the set of candidate edges and the set of candidate operations or from a second set of candidate edges and set of candidate operation, wherein the one or more second edges and one or more second operations are different from the one or more edges and one or more operations and (Yu, p. 7 "The model is trained based on same settings from scratch for each dataset"; p. 5 "we firstly introduce our implementation details of C2FNAS, and then report our found architecture (searched on MSD Pancreas dataset) with semantic segmentation results on all 10 MSD datasets... It contains 10 segmentation datasets, i.e. Brain Tumours, Cardiac, Liver Tumours, Hippocampus, Prostate, Lung Tumours, Pancreas Tumours, Hepatic Vessels, Spleen, Colon Cancer."; 10 architectures are found for 10 datasets, and the 10 architectures include different sets of edges and operations [a second set of candidate edges and set of candidate operation])

Yu does not teach, but Fedorov teaches:

satisfy one or more second memory constraints that are different from a first (memory) constraint satisfied by the one or more edges and one or more operations. (Fedorov, p. 4 "Our search space is designed to encompass CNNs of varying depth, width, and connectivity. [selecting edges and operations] Each graph consists of optional input downsampling followed by a variable number of blocks..."; p. 3 "(a) Acc = 73.84%, MS = 1.31 KB, WM = 1.28 KB (b) Acc = 73.58%, MS = 0.61 KB, WM = 14.3 KB...
Figure 1: Model architectures found with best test accuracy on CIFAR10-binary, while optimizing for (a) 2KB for both MODELSIZE (MS) and WORKINGMEMORY (WM) [satisfying a first memory constraint], and (b) minimum MS [satisfying a second memory constraint]"; p. 4 "MODELSIZE(ω), or MS, is the number of bits needed to store the model parameters ω, WORKINGMEMORY_l(Ω) is the working memory in bits needed to compute the output of layer l, with the maximum taken over the L layers to account for in-place operations."; see Fig. 1 (a) and (b) for different architectures that meet respective memory constraints)

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Yu to incorporate the teachings of Fedorov by automatically designing CNNs implemented on multiple microcontroller units (MCUs). Doing so would make the CNNs small enough to meet the strict MCU working memory constraint. (Fedorov, p. 1 "The vast majority of processors in the world are actually microcontroller units (MCUs), which find widespread use performing simple control tasks in applications ranging from automobiles to medical devices and office equipment… The CNNs we find are more accurate and up to 7.4x smaller than previous approaches, while meeting the strict MCU working memory constraint.")

In regard to claim 4, Yu teaches:

wherein the circuitry is further to perform a search to select the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations (Yu, p. 4, 3.3. Coarse Stage: Macrolevel Search "An illustration of how the priors help prune search space is shown in Fig. 3. Therefore, the search space S is pruned to S' [select edges from the set of edges S] ... S' = PriorPrune(s), (4)"; p. 5, 3.4.
Fine Stage: Microlevel Search "The set of possible operations, O, [select operations from the set of operations C or O] consisting of the following 3 choices: (1) 3x3x3 3D convolution; (2) 3x3x1 followed by... "; Fig. 3 pruning paths in the coarse stage [select edges], and Fig. 5 operations searched in fine stage [select operations]) Yu does not teach, but Fedorov teaches: in accordance with a set of one or more search parameters determined at least in part on an amount of memory to be used by the one or more neural networks. (Fedorov, p. 4 "Our search space is designed to encompass CNNs of varying depth, width, and connectivity. [selecting edges and operations] Each graph consists of optional input downsampling followed by a variable number of blocks... "; p. 5 "Pruning [37] is essential to MCU deployment using SpArSe, as it heavily reduces the model size and working memory without significantly impacting classification accuracy"; p. 6 "Because our search space includes such a diversity of parameters [in accordance with a set of one or more search parameters], including architectural parameters, pruning hyperparameters, etc., we find it helpful to perform the search in stages..."; p. 2 "C2: The model parameters must not exceed the ROM (flash memory) capacity. [the amount of memory]"; p. 7 "we address C2 by showing that SpArSe finds CNNs with higher accuracy and fewer parameters than previously published methods"; p. 8 "The results show that including pruning as part of the optimization yields roughly an 80x reduction in number of parameters...") The rationale for combining the teachings of Yu and Fedorov is the same as set forth in the rejection of claim 3. In regard to claim 5, Yu does not teach, but Fedorov teaches: wherein a percentage of a maximum memory usage of operations associated with one or more candidate feature nodes of a search space comprising the plurality of neural networks is less than or equal to an amount of memory. (Fedorov, p. 
4 "MODELSIZE(ω), or MS, is the number of bits needed to store the model parameters ω, WORKINGMEMORY_l(Ω) is the working memory in bits needed to compute the output of layer l, with the maximum taken over the L layers to account for in-place operations. [a maximum memory usage of operations associated with candidate feature nodes]"; p. 8 "Table 3: Comparison of Bonsai with SpArSe for WM model (5). The first row shows the highest accuracy model for WM ≤ 2KB [less than or equal to an amount of memory] and the second row shows the highest accuracy model for WM, MS ≤ 2KB."; part of a memory usage of operations is less than an amount of memory) The rationale for combining the teachings of Yu and Fedorov is the same as set forth in the rejection of claim 3. Claims 9-10, 14, 19 and 23 rejected under 35 U.S.C. 103 as being unpatentable over Yu as applied to claims 7, 15 and 21, and further in view of Liu ("Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation" 20190406) In regard to claims 9, 14, 19 and 23, Yu does not teach, but Liu teaches: (claim 9) wherein the one or more processors are further to select a connection pattern, from a plurality of candidate connection patterns between a first layer and a second layer of the one or more neural networks with a topology of the plurality of topologies, based at least in part on probabilities of each of the plurality of candidate connection patterns. (claim 14) wherein the one or more processors are further to select a connection pattern, from a plurality of candidate connection patterns between layers of the plurality of neural networks of a plurality of topologies, based at least in part on probabilities of the plurality of candidate connection patterns. 
(claim 19) wherein the one or more processors are further to perform a search of a search space to cause the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations to be selected, wherein the search comprises selecting a connection pattern between layers of the one or more neural networks, from a plurality of candidate connection patterns, based at least in part on probabilities of the plurality of candidate connection patterns. (claim 23) further comprising performing a search of a search space to select the one or more edges from the set of candidate edges and one or more operations from the set of candidate operations, wherein performing the search comprises selecting a connection pattern, from a plurality of candidate connection patterns between layers of the one or more neural networks, based at least in part on probabilities of the plurality of candidate connection patterns. (Liu, p. 86 "the β values can be interpreted as the 'transition probability' between different 'states' (spatial resolution) across different 'time steps' (layer number)... our goal is to find the path with the 'maximum probability' from start to end. [select a connection pattern between layers based on probabilities]"; p. 84, 3.2. Network Level Search Space "We illustrate our network level search space in Fig. 1. Our goal is then to find a good path [select edges] in this L-layer trellis."; p. 84, 3.1. 
Cell Level Search Space "The set of possible layer types, O, [select operations] consists of the following 8 operators, all prevalent in modern CNNs: 3 × 3 depthwise-separable conv, 5 × 5 depthwise-separable conv, 3 × 3 atrous conv with rate 2, 5 × 5 atrous conv with rate 2, 3 × 3 average pooling..."; all the transition probabilities across resolutions and layers are candidate connection patterns) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Yu to incorporate the teachings of Liu by including a hierarchical architecture search space with probabilities for choosing paths. Doing so would achieve state-of-the-art performance specifically for semantic image segmentation. (Liu, p. 82 "we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space... We demonstrate the effectiveness of the proposed method...specifically for semantic image segmentation, attains state-of-the-art performance...") In regard to claim 10, Yu does not teach, but Liu teaches: wherein the one or more processors are further to select a feature node from a set of candidate features nodes for one or more layers of the one or more neural networks with a topology of the plurality of topologies, wherein the set of candidate feature nodes comprises feature nodes at different image scales that comprise a plurality of candidate edges that connect to a feature node in a previous layer. (Liu, p. 86 "each layer l will have at most 4 hidden states {4Hl, 8Hl, 16Hl, 32Hl} [feature nodes at different image scales], with the upper left superscript indicating the spatial resolution."; p. 
4 "Figure 1… a path along the blue nodes [a set of candidate features nodes for layers 1..L] represents a candidate network level architecture") The rationale for combining the teachings of Yu and Liu is the same as set forth in the rejection of claim 9. Claim 26 rejected under 35 U.S.C. 103 as being unpatentable over Yu as applied to claim 21, and further in view of Garg ("Revisiting Neural Architecture Search") In regard to claim 26, Yu does not teach, but Garg teaches: further comprising performing a search of a multi-scale topology search space by converting the multi-scale topology search space into a sequential search space (Garg, p. 4 "The architecture search is based upon the differentiable architecture search (Liu et al. (2019); Cai et al. (2019)). In this method, we start with an over-parameterized (parent) network having all operations in the search space [converting the multi-scale topology search space] (e.g., convolution, pooling, etc.)."; p. 4 Algorithm 1 ReNAS "for each node v in G do 1. Partition the node into K channel blocks..."; p. 5 "The procedure starts by constructing the parent network by wiring... Every node’s channels are partitioned into channel blocks... This process is repeated [a sequential search] for every node of every DAG in the over-parameterized network until convergence.") comprising a super node for each respective layer of a plurality of layers, wherein each super node comprises a set of candidate feature nodes at the respective layer. (Garg, p. 5 "Figure 2 depicts the node level expansion. The operation sampling is based on the path sampling heuristic of (Cai et al. (2019)), in which two candidate paths (operations) are sampled from a multinomial distribution over all operations."; see Figure 2, Nodes u, v, and w are [super nodes for respective layers], and those operations 3x3, 5x5, etc. 
are [feature nodes at respective layers]) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Yu to incorporate the teachings of Garg by including a repeated search for the connections and operations. Doing so would balance the exploration and exploitation of the search space. (Garg, p. 1 "Our method starts from a complete graph mapped to a neural network and searches for the connections and operations by balancing the exploration and exploitation of the search space. The results are on-par with the SOTA performance with methods that leverage handcrafted blocks.") Response to Arguments Applicant's arguments with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive: Applicant argues: (p. 11) A. The Claims Do Not Correspond to One of the Categories of Abstract Ideas… Applicant submits that "one or more processors comprising circuitry" that is to "generate one or more neural networks with a topology of the plurality of topologies to perform an image-based task" is not a mental or mathematical process as claim 1 clearly recites a technical solution to a technical problem by directing a processor comprising circuitry that obtains a set of candidate edges and candidate operations usable to generate a plurality of neural networks with various topologies, selects among these on the basis of one or more constraints, and generates neural networks to perform an image-based task. Such operations cannot be performed by the human mind, as they involve manipulating digital data (for example, performing bit-level comparisons and calculations) and generating neural networks, which are both inherent to a specialized hardware configuration. Examiner answers: Under BRI the limitation “one or more processors comprising circuitry” is the recitation of generic computer components. 
Claims can recite a mental process even if they are claimed as being performed on a computer - MPEP 2106.04(a)(2)(III)(C). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it is still in the mental processes grouping - 2019 PEG. Further, the claim does not recite performing bit-level comparisons and calculations or any specialized hardware configuration. Applicant argues: (p. 12) B. The Claims Integrate the Alleged Abstract Idea into a Practical Application… Applicant submits that the claims recite elements that establish a practical application at least because the claim's recitation of "one or more processors comprising circuitry" that obtains candidate edges and operations, applies constraints, and actively generates neural network architectures serves to integrate the underlying concept into a specific technological environment. In doing so, it overcomes the abstract nature of a mere selection process by providing a concrete method for optimizing memory usage of a neural network design for image-based tasks, thereby solving a real technical problem in neural networks, computer architecture, and image processing. Examiner answers: As explained above, under BRI the limitation “one or more processors comprising circuitry” is the recitation of generic computer components. In step 2A Prong Two, the additional elements “one or more processors” or “circuitry” are mere instructions to apply an exception, (2) invoking general computers as a tool to perform a process - MPEP 2106.05(f). Because those elements are claimed in a generic manner, they are not sufficient to integrate the exception into a practical application. Applicant argues: (p. 13) C. 
The Claims Amount to Significantly More Than the Purported Abstract Idea… Applicant respectfully submits that the pending claims are not properly characterized as well-understood, routine and conventional steps because they recite a specific hardware configuration, whereby a processor uses defined circuitry to obtain candidate components, apply predetermined constraints, and generate neural networks with particular topologies for an image-based task, which yields technical improvements. These improvements include the ability to select an optimal neural network that meets memory constraints while maintaining high performance, which is a nonroutine and nonconventional solution to a recognized technical problem. Examiner answers: As explained above, under BRI the limitation “one or more processors comprising circuitry” is the recitation of generic computer components. The claim does not recite a specific hardware configuration or a defined circuitry. In step 2B, the additional elements “one or more processors” or “circuitry” are mere instructions to apply an exception, (2) invoking general computers as a tool to perform a process - MPEP 2106.05(f). Because those elements are claimed in a generic manner, they are not significantly more than the judicial exception. Applicant's arguments with respect to the rejection of the claims under 35 U.S.C. 102/103 have been fully considered but they are moot: Applicant argues: (p. 15) Claim 1 as amended recites patentable subject matter not shown to be taught in Fedorov. For example, Fedorov, as cited, is silent… as Fedorov discusses pruning weights of a neural network model, not "generate one or more neural networks with a topology of the plurality of topologies to perform an image-based task based" as stated by the claims. 
Further, Fedorov is silent with respect to selecting edges and operations for generating one or more neural networks of corresponding topologies, as Fedorov is instead directed to pruning weights to reduce the size of an already existing neural network. (p. 15 bottom) Applicant respectfully submits that claims 7, 15, and 21 are allowable at least for reasons discussed above in connection with claim 1. (p. 17) Claim 15 recites a datacenter… As discussed above regarding claim 1, Fedorov, as cited, does not teach or suggest this process, as Fedorov focuses on pruning existing neural network weights to fit… Furthermore, Yan discusses multi-scale neural architecture… Examiner answers: The arguments do not apply to the references (Yu) being used in the current rejection. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tan ("MnasNet: Platform-Aware Neural Architecture Search for Mobile") teaches a Factorized Hierarchical Search Space in Fig. 4. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU-TING CHUANG whose telephone number is (408)918-7519. The examiner can normally be reached Monday - Thursday 8-5 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew J. Jung can be reached on (571) 270-3779. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.C./Examiner, Art Unit 2146 /ANDREW J JUNG/Supervisory Patent Examiner, Art Unit 2146
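The MODELSIZE/WORKINGMEMORY constraint test that the rejection attributes to Fedorov can be sketched in a few lines. Everything below (the class and helper names, the 8-bit parameter assumption, the layer counts, and the 2 KB budget taken from Fedorov's Fig. 1 discussion) is an illustrative assumption, not Fedorov's or the applicant's actual implementation:

```python
# Illustrative sketch only: MODELSIZE (MS) as bits to store the parameters,
# WORKINGMEMORY (WM) as per-layer activation memory with the maximum taken
# over layers, then candidate architectures filtered against a budget.
# All names and numbers are hypothetical.
from dataclasses import dataclass

BITS_PER_PARAM = 8          # assumed 8-bit quantized weights
BUDGET_BITS = 2 * 1024 * 8  # the 2 KB budget from Fedorov's Fig. 1

@dataclass
class Candidate:
    name: str
    param_counts: list       # parameters per layer
    activation_bits: list    # working memory needed per layer
    accuracy: float

def model_size_bits(c: Candidate) -> int:
    # MODELSIZE: bits needed to store all model parameters
    return sum(c.param_counts) * BITS_PER_PARAM

def working_memory_bits(c: Candidate) -> int:
    # WORKINGMEMORY: maximum over layers, per the WORKINGMEMORY_l definition
    return max(c.activation_bits)

def best_under_constraints(candidates, ms_budget, wm_budget):
    # keep only candidates meeting both memory constraints, then
    # pick the most accurate survivor (None if nothing fits)
    feasible = [c for c in candidates
                if model_size_bits(c) <= ms_budget
                and working_memory_bits(c) <= wm_budget]
    return max(feasible, key=lambda c: c.accuracy, default=None)

cands = [
    Candidate("a", [800, 400], [9000, 6000], 0.7384),
    Candidate("b", [4000, 2000], [5000, 4000], 0.7358),  # too many params
]
winner = best_under_constraints(cands, BUDGET_BITS, BUDGET_BITS)
```

A second memory constraint, as in claim 3, would simply be a second call with a different budget, yielding a different winning architecture.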

Prosecution Timeline

Mar 03, 2021
Application Filed
Feb 18, 2025
Non-Final Rejection — §101, §102, §103
May 07, 2025
Interview Requested
May 15, 2025
Applicant Interview (Telephonic)
May 15, 2025
Examiner Interview Summary
Jul 21, 2025
Response Filed
Oct 17, 2025
Final Rejection — §101, §102, §103
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561600
LINEAR TIME ALGORITHMS FOR PRIVACY PRESERVING CONVEX OPTIMIZATION
2y 5m to grant Granted Feb 24, 2026
Patent 12518154
TRAINING MULTIMODAL REPRESENTATION LEARNING MODEL ON UNNANOTATED MULTIMODAL DATA
2y 5m to grant Granted Jan 06, 2026
Patent 12481725
SYSTEMS AND METHODS FOR DOMAIN-SPECIFIC ENHANCEMENT OF REAL-TIME MODELS THROUGH EDGE-BASED LEARNING
2y 5m to grant Granted Nov 25, 2025
Patent 12468951
Unsupervised outlier detection in time-series data
2y 5m to grant Granted Nov 11, 2025
Patent 12412095
COOPERATIVE LEARNING NEURAL NETWORKS AND SYSTEMS
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
91%
With Interview (+39.7%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
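The projection figures above follow from simple arithmetic on the examiner's career counts. A minimal sketch, assuming the page divides grants by resolved cases and adds the interview lift to the base rate (the additive-lift formula is an inference about this page's methodology, not a documented one; the displayed 52% appears to be a rounding of 52/101 ≈ 51.5%):

```python
# Sketch of how the displayed projections could be derived from the career
# counts shown above. The additive interview lift is an assumed methodology.
granted, resolved = 52, 101
interview_lift = 0.397  # +39.7% lift reported for cases with an interview

grant_probability = granted / resolved               # ~0.515 (displayed as 52%)
with_interview = grant_probability + interview_lift  # ~0.912 (displayed as 91%)
```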
