Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because Applicant claims a system of units, but paragraph 31 of the specification makes it clear that “the units may be implemented as hardware or software or as a combination of hardware and software.” Therefore, Applicant has directed the claims to software per se with no structural recitation.

When Applicant claims some sort of structure, claims 1-7 will be rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of a mental concept without significantly more. The claims recite assembling a model topology, dividing a model, analyzing a model, profiling a model, generating new models, determining the higher performance index of a plurality of candidate models, and training the model. Applicant’s training is not the usual training of a neural network; here, the training could be as simple as modifying the topology to generate a different “performance index” (claim 6) until the index is above a threshold. In a small topology, which is covered by the scope of this claim, that could be as simple as deleting or adding nodes until the performance is at a desired level. This is not error backpropagation, which the Office has determined cannot be performed in the human mind. 
This judicial exception is not integrated into a practical application because, as currently claimed, these elements are not connected to a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because obtaining data to build a model is insignificant extra-solution activity.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2 and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US20220114495A1 to Nurvitadhi et al. (Nur).

Nur teaches claim 1. A module-based prefabricated artificial intelligence development system comprising: an adaptive artificial intelligence development unit; and (Nur fig. 8) an AI module hub, wherein the adaptive artificial intelligence development unit comprises: an analysis unit configured to obtain adaptive autonomous agent requirement information and receive an AI topology and AI modules corresponding to the adaptive autonomous agent requirement information from the AI module hub; (Nur para 153 “802, at which the ML system configuration circuitry 300 receives a request to execute a machine-learning (ML) workload … For example, the interface circuitry 310 (FIG. 3) can receive a request to identify a combination of hardware and/or software to execute the workload(s) …”) an assembly unit configured to generate a candidate artificial intelligence model by assembling the AI modules based on the AI topology; and (Nur para 156 “execute performance modeling (e.g., emulation(s), simulation(s), debugging, etc.) 
associated with the GPU executing the CNN.”) a training unit configured to train the candidate artificial intelligence model. (Nur para 157 “At block 810, the ML system configuration circuitry 300 determines whether the evaluation parameter satisfies a threshold. For example, the configuration evaluation circuitry 340 can determine whether an evaluation parameter, such as an accuracy parameter, has a value that satisfies an evaluation parameter threshold …” Nur para 158 “If, at block 810, the ML system configuration circuitry 300 determines that the evaluation parameter does not satisfy a threshold, then, at block 812, the ML system configuration circuitry 300 updates an ontology database based on the evaluation parameter.” Nur para 159 “814, the ML system configuration circuitry 300 adjusts the first configuration based on the evaluation parameter.”)

Nur teaches claim 2. The module-based prefabricated artificial intelligence development system of claim 1, wherein the adaptive autonomous agent requirement information comprises at least one of environment information, state information, and purpose information for an adaptive autonomous agent. (Nur para 153 “802, at which the ML system configuration circuitry 300 receives a request to execute a machine-learning (ML) workload.” Nur para 140 “a request, etc., indicative of a desired AI/ML operation (e.g., a desire to do image processing without specifying the initial AI model). In some such examples, the controller 202 can identify the initial AI model based on the function input, the request, etc.”)

Nur teaches claim 6. 
The module-based prefabricated artificial intelligence development system of claim 1, wherein the training unit evaluates a performance index of the candidate artificial intelligence model, and determines the candidate artificial intelligence model as a final artificial intelligence model based on a determination that the performance index exceeds a preset threshold, and (Nur para 157 “810, the ML system configuration circuitry 300 determines whether the evaluation parameter satisfies a threshold. For example, the configuration evaluation circuitry 340 can determine whether an evaluation parameter, such as an accuracy parameter, has a value that satisfies an evaluation parameter threshold, such as an accuracy threshold (e.g., an accuracy parameter threshold).”) evaluates a performance index of the candidate artificial intelligence model, stores the candidate artificial intelligence model together with the performance index based on a determination that the performance index is less than or equal to a preset threshold, and requests the assembly unit to generate a new candidate artificial intelligence model, and (Nur para 158 “810, the ML system configuration circuitry 300 determines that the evaluation parameter does not satisfy a threshold, then, at block 812, the ML system configuration circuitry 300 updates an ontology database based on the evaluation parameter. For example, the ontology generation circuitry 350 (FIG. 3) can update the ontology database 208 of FIG. 2 based on the evaluation parameters 226, the proposed HW/SW instance 222 that are associated with the evaluation parameters 226, etc., and/or any combination(s) thereof.” Nur fig. 8 shows the request to generate new configurations.) the assembly unit generates the new candidate artificial intelligence model in response to the requesting. (Nur para 159 “814, the ML system configuration circuitry 300 adjusts the first configuration based on the evaluation parameter. 
For example, the ML software configuration circuitry 320 can replace the CNN with a different AI/ML model, add another AI/ML model, change a configuration of the CNN, etc.,”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over US20220114495A1 to Nurvitadhi et al. (Nur) and US20220083386A1 to Almeida et al. (Almeida). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US20220114495A1 to Nurvitadhi et al. (Nur) and US20170222960A1 to Agarwal et al. (Agarwal).

Nur teaches claim 3. The module-based prefabricated artificial intelligence development system of claim 1, wherein the AI module hub comprises an AI topology storage, an AI module storage, an artificial intelligence modularization unit, and an artificial intelligence module profiler, the artificial intelligence modularization unit receives an artificial intelligence model, divides the artificial intelligence model into a plurality of modules, and stores structural information about the artificial intelligence model in the AI topology storage, and (Nur para 154 “At block 804, the ML system configuration circuitry 300 generates a first configuration of one or more ML models based on the ML workload. 
”) the artificial intelligence module profiler analyzes relevant information of each of the plurality of modules, generates profile information including related topology information, input/output information, position, and characteristic information of the module, and stores the information in the AI module storage. (Nur para 157 “808, the ML system configuration circuitry 300 generates an evaluation parameter based on an execution of the workload based on the first configuration and the second configuration. For example, the configuration evaluation circuitry 340 (FIG. 3) can execute performance modeling (e.g., emulation(s), simulation(s), debugging, etc.) associated with the GPU executing the CNN.” The workload information is the claimed input/output information. The topology and characteristic information is the first and second configurations. The configuration circuitry has the memory where the information is stored.)

Nur does not teach dividing the model. However, Almeida teaches the artificial intelligence modularization unit receives an artificial intelligence model, divides the artificial intelligence model into a plurality of modules, and stores structural information about the artificial intelligence model in the AI topology storage, and (Almeida para 60 “Once the subset of computing resources which satisfy the optimisation constraints has been identified, the method may comprise partitioning the neural network into a number of partitions based on the determined subset of computing resources that are able to satisfy the at least one optimisation constraint (step S108).”) the artificial intelligence module profiler analyzes relevant information of each of the plurality of modules, generates profile information including related topology information, input/output information, position, and characteristic information of the module, and stores the information in the AI module storage. 
(Almeida para 60 “if two computing resources are identified, the method may divide the neural network into three partitions—one to be implemented by the user device, and two to be implemented by the two computing resources.” The relevant information is the resources where each partition is executed. Almeida para 61 “a computing resource may itself be able to share part of the computation of a partition of the neural network with a further computing resource. In such cases, the partition may factor this further subdivision/further distribution into account when distributing the computation across the identified subset of computing resources. The partitions may also be determined based on the computing capability/processing power of a computing resource, the load of the computing resource, and the speed of data transmission to/from the computing resource, for example.” The position information is the location where the partition is executed.)

Almeida, Nur, and the claims all track execution of a model on hardware. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use Almeida in order to “identify computing resources that are able to satisfy all of the optimisation constraints.” Almeida para 5.

Almeida teaches claim 4. The module-based prefabricated artificial intelligence development system of claim 3, wherein the artificial intelligence module profiler generates meta information corresponding to each of the plurality of modules, and stores the plurality of modules and the meta information together in the AI module storage. (Almeida para 60 “Once the subset of computing resources which satisfy the optimisation constraints has been identified, the method may comprise partitioning the neural network into a number of partitions based on the determined subset of computing resources that are able to satisfy the at least one optimisation constraint (step S108).” The subset of resources is the meta information. 
The module with the subset information and the partitions is the AI module storage.)

Almeida teaches claim 5. The module-based prefabricated artificial intelligence development system of claim 3, wherein the analysis unit receives the AI topology corresponding to the adaptive autonomous agent requirement information from the AI topology storage, and receives the AI modules corresponding to the adaptive autonomous agent requirement information from the AI module storage. (Almeida para 54 “This step may comprise obtaining one or more of: a time constraint (e.g. the neural network must take no longer than 1 ms to execute/output a result), a cost constraint (e.g. a fixed cost value, or specified in terms of how long a cloud server can be used for per device), inference throughput, …” And Almeida fig. 1, s104 and s108, where resources are identified and the neural network is partitioned.)

Nur teaches claim 7. The module-based prefabricated artificial intelligence development system of claim 6, wherein the training unit, when performance indices of a plurality of candidate artificial intelligence models generated by the assembly unit are all equal to or less than the preset threshold value, determines a candidate artificial intelligence model having a highest performance index from among the plurality of candidate artificial intelligence models as a final artificial intelligence model. (Nur para 158 “If, at block 810, the ML system configuration circuitry 300 determines that the evaluation parameter does not satisfy a threshold, then, at block 812, the ML system configuration circuitry 300 updates an ontology database based on the evaluation parameter.”)

Nur does not teach picking the best model. However, Agarwal teaches determining a candidate artificial intelligence model having a highest performance index from among the plurality of candidate artificial intelligence models as a final artificial intelligence model. 
(Agarwal para 52 “The candidate model with the highest performance score is selected to be compared with the performance score of the current active model at operation 430.”)

Nur, Agarwal, and the claims all compare the models’ performances. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to pick the highest scoring candidate because that is the best available model.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142