The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to Applicant's submission filed on 30 August 2023. THIS ACTION IS NON-FINAL.

Status of Claims

Claims 1-20 are pending. Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-2, 6-7 and 14-15 are rejected under 35 U.S.C. 103 as unpatentable. There is no art rejection for claims 3-5, 8-13 and 16-20.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Judicial Exception

Claims 1-20 of the claimed invention are directed to a judicial exception, an abstract idea, without significantly more.

(Independent Claims)

With regards to claims 1 / 15, the claim recites a process / machine, which falls into one of the statutory categories.

2A – Prong 1: the claim, in part, recites "determining a neural network structure; observing one or more performance metrics of an execution of the neural network structure by one or more target hardware elements; and selecting a module from a library of modules to replace one or more elements of the neural network structure based, at least in part, on the observed one or more performance metrics" (mental process), which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting generic computer elements, nothing in the claim precludes the steps from practically being performed in the mind.
For example, but for the language about generic computer elements, "determining", "observing" and "selecting" in the limitation cited above encompass observing and evaluating data processing models to decide on a model choice that optimizes processing performance, which is based on observation, evaluation, judgment and/or opinion, and could be performed by a human using paper, pen and/or a calculator. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

2A – Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of generic computer elements (such as a computing device and a memory coupled to a processor), which amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). There are no additional elements showing integration of the abstract idea into a practical application and/or providing anything significantly more than the abstract idea. The claim is directed to an abstract idea.

2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of generic computer elements merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). The claim is not patent eligible.

(Dependent claims)

Claims 2-14 / 16-20 depend on claims 1 / 15 and include all the limitations of claims 1 / 15. Therefore, claims 2-14 / 16-20 recite the same abstract ideas.
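For context, the claimed process quoted above (determine a structure, observe performance metrics of its execution on target hardware, select a replacement module from a library) can be illustrated by a minimal sketch. All function names, fields and the selection rule below are hypothetical assumptions for illustration only, not the applicant's or Shen's implementation.

```python
# Hypothetical sketch of the claimed loop: observe performance metrics of a
# network structure's execution, then select a replacement module from a
# library based on those metrics. All names and rules here are illustrative.

def observe_metrics(structure):
    """Stand-in for profiling the structure on target hardware."""
    # Assume latency and memory scale with the number of structure elements.
    return {"latency_ms": 2.0 * len(structure), "memory_mb": 8 * len(structure)}

def select_module(library, metrics):
    """Pick the library module with the lowest estimated latency."""
    return min(library, key=lambda m: m["est_latency_ms"])

structure = ["conv1", "conv2", "fc"]
library = [
    {"name": "dense_conv", "est_latency_ms": 6.0},
    {"name": "sparse_conv", "est_latency_ms": 3.5},
]

metrics = observe_metrics(structure)
replacement = select_module(library, metrics)
# Replace one element of the structure with the selected module.
structure[0] = replacement["name"]
```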
With regards to claim 2, the claim recites the further limitation "the neural network structure is represented by a graph comprising operators to communicate according to edges in the graph; and the operators are arranged in layers, wherein the edges represent tensors connecting operators in adjacent layers", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 3, the claim recites the further limitation "… at least one of the layers is bound according to at least one constrained hardware resource; and selecting the module from the library of modules comprises selecting a module to replace at least a portion of the at least one of the layers such that the selected module reduces a load on the at least one constrained hardware resource", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 4, the claim recites the further limitation "… wherein the at least one constrained hardware resource comprises a particular arithmetic logic unit usage attribute or a memory usage attribute, or a combination thereof …", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process.
Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 5, the claim recites the further limitation "… sorting the layers according to at least one cost metric; and prioritizing replacement of an element at particular sorted layers according to relative contributions to the at least one cost metric", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 6, the claim recites the further limitation "… wherein the one or more target hardware elements comprise one or more arithmetic logic units (ALUs) and/or execution units", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 7, the claim recites the further limitation "… wherein the one or more target hardware elements comprise one or more central processing units (CPUs), one or more neural processing units (NPUs) or one or more graphics processing units (GPUs), or a combination thereof", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process.
Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 8, the claim recites the further limitation "… wherein the one or more performance metrics comprise a usage of an arithmetic logic unit (ALU) and/or a level of memory traffic, or a combination thereof", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 9, the claim recites the further limitation "… the one or more elements of the neural network structure to be replaced are isolated to a single layer in the neural network structure; and the selected module is to specify: affecting sparsity of weights of operators associated with nodes in the neural network structure; affecting quantization of the weights of operators; or affecting a clustering of the weights of operators, or a combination thereof", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.
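For context, the weight transforms recited in claim 9 (affecting sparsity or quantization of operator weights) correspond to standard model-compression operations, which can be sketched minimally as follows. The threshold, bit width and all names are assumed illustrative choices, not taken from the application's disclosure.

```python
# Illustrative sketch of two weight transforms of the kind recited in
# claim 9: pruning small weights to increase sparsity, and uniform
# quantization. Threshold and bit width are arbitrary assumptions.

def sparsify(weights, threshold=0.1):
    """Zero out weights whose magnitude falls below the threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, bits=4):
    """Uniformly snap weights to a grid of 2**bits - 1 steps over [-1, 1]."""
    levels = 2 ** bits - 1
    scale = 2.0 / levels
    return [round(w / scale) * scale for w in weights]

weights = [0.05, -0.42, 0.87, -0.03, 0.66]
sparse = sparsify(weights)    # small weights become exactly 0.0
quantized = quantize(sparse)  # remaining weights snapped to the 4-bit grid
```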
With regards to claim 10, the claim recites the further limitation "… wherein the one or more elements of the neural network structure to be replaced span multiple connected layers in the neural network structure", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 11, the claim recites the further limitation "… wherein the one or more elements of the neural network structure to be replaced are isolated to an interface between adjacent layers of the neural network structure", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 12, the claim recites the further limitation "… wherein the selected module affects a quantization of a feature map and/or activation tensor", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.
With regards to claim 13, the claim recites the further limitation "… wherein the selected module is to specify: skipping at least one edge connection between the adjacent layers; affecting quantization in an intermediate tensor between the adjacent layers; or affecting operators in at least one of the adjacent layers, or a combination thereof", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 14, the claim recites the further limitation "… wherein at least one of the one or more performance metrics comprises an execution latency or a memory bandwidth usage, or a combination thereof", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 16, the claim recites the further limitation "… identify execution passes mapped to a source operation of the one or more hardware elements; and combine execution cycles for the execution passes to estimate an execution latency of the source operation to obtain at least one of the one or more observations", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process.
Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 17, the claim recites the further limitation "… identify execution passes mapped to a matrix operation, convolution and/or vector operation of the one or more hardware elements; for at least one of the execution passes, obtain a count of execution cycles for the matrix operation, convolution operation and/or vector operation; and compare the count of execution cycles with a total number of cycles for the execution of the neural network structure to obtain at least one of the one or more observations", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 18, the claim recites the further limitation "… wherein the selected module is to reduce execution cycles of the matrix operation, convolution operation and/or vector operation based, at least in part, on the comparison of the count of execution cycles with the total number of cycles for the execution of the neural network structure", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.
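For context, the cycle accounting recited in claims 16-18 (combining execution cycles for the passes mapped to a source operation, then comparing that count against the total cycles of the whole execution) can be sketched as follows. The pass records and field names are hypothetical illustrative data, not drawn from the application.

```python
# Hypothetical sketch of the cycle accounting recited in claims 16-18:
# sum the cycles of the execution passes mapped to one source operation,
# then compare that count against the total cycles of the whole network.
# The pass records below are made-up illustrative data.

passes = [
    {"op": "conv1", "kind": "matrix", "cycles": 1200},
    {"op": "conv1", "kind": "vector", "cycles": 300},
    {"op": "fc",    "kind": "matrix", "cycles": 500},
]

def op_latency_cycles(passes, op):
    """Combine execution cycles for the passes mapped to one operation."""
    return sum(p["cycles"] for p in passes if p["op"] == op)

total_cycles = sum(p["cycles"] for p in passes)
conv1_cycles = op_latency_cycles(passes, "conv1")
# Fraction of total cycles spent in conv1; a high share could prioritize
# that operation for replacement by a cheaper module.
conv1_share = conv1_cycles / total_cycles
```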
With regards to claim 19, the claim recites the further limitation "… identify execution passes of one or more hardware elements mapped to a source operation of the neural network structure; for at least one of the execution passes, compare a number of cycles to transfer a quantity of content with a total number of execution cycles for the source operation; and quantify traffic cycles of the number of traffic cycles as being associated with operator weights to obtain at least one of the one or more observations of the one or more performance metrics of the execution of the neural network structure by the one or more target hardware elements", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

With regards to claim 20, the claim recites the further limitation "… identify compiled tensors mapped to a source tensor of the one or more target hardware elements; and for at least one of the compiled tensors, determine whether the at least one of the compiled tensors is active during a maximum memory footprint to obtain at least one of the one or more observations", which provides further details on the abstract evaluation and selection of models and processing resources to optimize processing performance, and is directed to a mental process. Apart from citing generic computer elements to implement the abstract idea, there is no additional element showing integration into a practical application or adding something significantly more to the abstract idea. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al., US PGPUB No. 2023/0077258 A1 [hereafter Shen] in view of Prasanna et al., "TensorRT 3: Faster TensorFlow Inference and Volta Support", https://developer.nvidia.com/blog/tensorrt-3-faster-tensorflow-inference/, Dec. 4, 2017 [hereafter Prasanna].
With regards to claim 15, Shen teaches "A computing device, the computing device comprising: a memory comprising one or more memory devices; and one or more processors coupled to the memory to (Shen, FIG. 6-16): determine a neural network structure (Shen, FIG. 4-5, [0065] 'a first step … can involve pre-training a model or network … a result of this initial training can be a pre-trained model 408 …'); obtain one or more observations of one or more performance metrics of an execution of the neural network structure by one or more target hardware elements (Shen, FIG. 4-5, [0067] '… one or more performance metrics can be determined 552 for one or more neural networks, as may relate to latency or energy usage among other performance metrics …', [0040] 'operator-level latency values can be pre-analyzed by creating a look-up table for every layer of a model on target hardware …'); and … to replace one or more elements of the neural network structure based, at least in part, on the obtained one or more observations (Shen, FIG. 4-5, FIG. 31-33, [0386] '… model training 3114 may include retraining or updating an initial model 3304 … to retrain, or update, initial model 3304, output or loss layer(s) of initial model 3304 may be reset, or deleted, and/or replace with an updated or new output or loss layer(s) …')."

Shen does not explicitly detail "select a module from a library of modules". However, Prasanna teaches "select a module from a library of modules (Prasanna, FIG. 1, TensorRT Optimizations, 'During the optimization phase TensorRT also chooses from hundreds of specialized kernels … TensorRT will pick the implementation from a library of kernels …')."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Shen and Prasanna before him or her, to modify the hardware-performance-aware neural network adaptation system and method of Shen to include selecting from a library of modules as shown in Prasanna. The motivation for doing so would have been to deliver the best performance for the target GPU (Prasanna, TensorRT Optimizations).

Claim 1 is substantially similar to claim 15. The arguments given above for claim 15 apply, mutatis mutandis, to claim 1; therefore the rejection of claim 15 is applied accordingly.

With regards to claim 2, Shen in view of Prasanna teaches "The method of claim 1, wherein: the neural network structure is represented by a graph comprising operators to communicate according to edges in the graph (Shen, FIG. 1); and the operators are arranged in layers, wherein the edges represent tensors connecting operators in adjacent layers (Shen, FIG. 1-2, [0333] '… tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing … and performs a matrix multiply and accumulate operation …')."

With regards to claim 6, Shen in view of Prasanna teaches "The method of claim 1, wherein the one or more target hardware elements comprise one or more arithmetic logic units (ALUs) and/or execution units (Shen, FIG. 8-15)."

With regards to claim 7, Shen in view of Prasanna teaches "The method of claim 1, wherein the one or more target hardware elements comprise one or more central processing units (CPUs), one or more neural processing units (NPUs) or one or more graphics processing units (GPUs), or a combination thereof (Shen, FIG. 8-15, FIG. 22)."

With regards to claim 14, Shen in view of Prasanna teaches "The method of claim 1, wherein at least one of the one or more performance metrics comprises an execution latency or a memory
bandwidth usage, or a combination thereof (Shen, FIG. 4-5, [0067] 'one or more performance metric can be determined 552 for one or more neural networks, as may relate to latency or energy usage among other performance metrics …', [0046] '… a size of such a network can be reduced in order to improve performance of this network, such as to reduce an amount of latency in producing inferences …')."

Additional Relevant Art

The prior art made of record is considered pertinent to applicant's disclosure and is recorded on Form PTO-892. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action, with particular attention paid to:

Lankford et al., "Open-source neural architecture search with ensemble and pre-trained networks", International Journal of Modeling and Optimization, Vol. 11, No. 2, May 2021 [hereafter Lankford], which shows neural network architecture search from a library of models.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSU-CHANG LEE whose telephone number is 571-272-3567. The fax number is 571-273-3567. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSU-CHANG LEE/ Primary Examiner, Art Unit 2128