DETAILED ACTION
This action is in response to the claims filed October 23, 2023. Claims 1-19 are pending. Claims 1 and 14 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 14 is objected to because of the following informalities:
Claim 14 recites “the model inference module 104B for serving the trained model and to the project store server 108 for storing the trained model”. This should likely read “the model inference module for serving the trained model and to the project store server for storing the trained model”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 11 recites “wherein the model inference module provides for serving over the communications network of the neural network-based models that have been trained”. Regarding “serving over the communications network of the neural network-based models”, it is unclear what is being transmitted where. “The communications network of the neural network-based models” implies that the communications network is part of the neural network model. “Serving over the communications network” may be interpreted to mean either that information is being served to the neural network, or that the neural network itself is being transmitted elsewhere. For purposes of examination, the claim is interpreted to mean that the trained neural network is transmitted over the communications network after being trained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210124988 A1 (hereinafter “Kadowaki”), in view of US 20210304073 A1 (hereinafter “Li”), further in view of US 20220067575 A1 (hereinafter “Saha”).
Regarding claim 1, Kadowaki discloses:
A system for implementing a method for the automated development of artificial intelligence (AI) projects comprising (Paragraphs [0008]-[0009]):
- a processor-based AI project server executing instructions that implement a model training and validation module, …, and an input module (Paragraph [0050], “The input unit 19 a enables users to operate the at least one input device to enter various information items, receives the entered information items, and sends the received information items to the processing unit 12 [an input module]”; Paragraph [0092], “For example, the training unit 22 trains the candidate models fkr of the candidate integrated model Sc using a training dataset (X, Y) comprised of input data items X and output data items, i.e. ground-truth data items, Y that are respectively paired to the input data items X [a processor-based AI project server executing instructions that implement a model training]”; Paragraph [0119], “The evaluation metric calculator 24 performs an evaluation-metric calculation task of applying the test dataset (V, W) to the candidate integrated model Sc comprised of the candidate models fkr to thereby calculate an evaluation metric for evaluating the output of the candidate integrated model Sc [and validation module]”).
Kadowaki does not explicitly disclose:
- a model inference module;
- a project store server providing for the storage, searching, and retrieval of existing AI projects;
- a data store server providing for the storage, searching, and retrieval of data sets;
- a processor-based search engine executing instructions that implement a feature selection module providing for manual or automated selection of features applied to neural networks as part of the existing AI projects; and
However, Li discloses:
- a model inference module (Paragraph [0059], “The server devices 204(1)-204(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks. The server devices 204(1)-204(n) hosts the databases 206(1)-206(n) that are configured to store data that relates to model development lifecycle metadata as well as input data source and corresponding final code”) [Examiner’s remarks: The model inference module is interpreted in light of claim 9 and the specification. The completed (trained) model is saved on server devices, which must therefore include a communication network over which the model may be sent.]
- a processor-based search engine executing instructions that implement a feature selection module providing for manual or automated selection of features applied to neural networks as part of the existing AI projects (Paragraph [0113], “In another exemplary embodiment, the modularized framework may include a selection component such as, for example, a feature selection component. The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule [a processor-based search engine executing instructions that implement a feature selection module providing for manual or automated selection of features applied to neural networks as part of the existing AI projects]”) [Examiner’s remarks: A module for feature selection is disclosed and the features correspond to a model to optimize predictive performance.]; and
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “model inference module”, and “a processor-based search engine executing instructions that implement a feature selection module providing for manual or automated selection of features applied to neural networks as part of the existing AI projects”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Li describes a modularized development framework which eases the development process of AI projects. Including modules for feature selection provides an easier way to achieve commonly required model building steps. Therefore, it would be obvious to one of ordinary skill in the art to combine modularized feature selection using known methods with an automated AI project developer and server.
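For illustration only, the following is a minimal Python sketch of an automated feature selection module of the kind Li describes in paragraph [0113]; the use of scikit-learn, the mutual-information scoring criterion, and the synthetic dataset are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of automated feature selection (assumptions: scikit-learn
    # is available; mutual information is one possible scoring criterion).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                               random_state=0)

    # Score each feature against the target and keep the k highest-scoring
    # features, yielding the subset with the best predictive signal.
    selector = SelectKBest(score_func=mutual_info_classif, k=5)
    X_selected = selector.fit_transform(X, y)
    print("selected feature indices:", selector.get_support(indices=True))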
The combination of Kadowaki and Li does not explicitly disclose:
- a project store server providing for the storage, searching, and retrieval of existing AI projects;
- a data store server providing for the storage, searching, and retrieval of data sets;
However, Saha discloses:
- a project store server providing for the storage, searching, and retrieval of existing AI projects (Paragraph [0006], “According to an aspect of an embodiment, operations may include storing existing machine learning (ML) projects in a corpus, the existing ML projects including ML pipelines with functional blocks. The operations may also include generating a search query for a new ML project based on a new dataset for the new ML project and a new ML task for the new ML project. In addition, the operations may include searching through the existing ML projects stored in the corpus, based on the search query, for a set of existing ML projects”) [Examiner’s remarks: There is a project store for storing existing ML projects, and allowing for search and retrieval of such projects.];
- a data store server providing for the storage, searching, and retrieval of data sets (Paragraph [0006], “According to an aspect of an embodiment, operations may include storing existing machine learning (ML) projects in a corpus, the existing ML projects including ML pipelines with functional blocks. The operations may also include generating a search query for a new ML project based on a new dataset for the new ML project and a new ML task for the new ML project. In addition, the operations may include searching through the existing ML projects stored in the corpus, based on the search query, for a set of existing ML projects”; Paragraph [0080], “The method 600 may include, at block 602, ranking all datasets of all ML projects from the one or more repositories of ML projects based on a quality of the datasets. For example, the curation module 114 may rank all datasets of the existing ML projects 204 from the OSS ML project databases 102 a-102 n based on a quality of the datasets. In some embodiments, the quality of the datasets may be determined based on votes by other users (e.g., votes in Kaggle)”) [Examiner’s remarks: The data store (e.g., Kaggle) allows datasets to be stored, searched (and ranked), and retrieved for use by users.];
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Saha into the combined teachings of Kadowaki and Li to include “a project store server providing for the storage, searching, and retrieval of existing AI projects” and “a data store server providing for the storage, searching, and retrieval of data sets”. As stated in Saha, “Automated ML (AutoML) is the process of automating the process of applying ML to real-world problems. AutoML may allow non-experts to make use of ML models and techniques without requiring them to first become ML experts” (Paragraph [0004]). Saving projects to shared servers allows for better collaboration, and for increased resource usage for users with less machine learning experience. Therefore, it would be obvious to one of ordinary skill in the art to combine storing projects to a server with an automated AI project developer and server.
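For illustration only, the following is a minimal Python sketch of a project store providing storage, searching, and retrieval of existing ML projects in the manner Saha describes in paragraph [0006]; the in-memory corpus and the tag-overlap ranking are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of a project store (assumptions: an in-memory list and
    # tag-overlap scoring stand in for a real project store server).
    from dataclasses import dataclass, field

    @dataclass
    class MLProject:
        name: str
        task: str
        description: str
        tags: set = field(default_factory=set)

    corpus = [
        MLProject("churn-model", "classification", "predict customer churn",
                  tags={"tabular", "classification"}),
        MLProject("cifar-cnn", "image classification", "CNN for CIFAR-10",
                  tags={"images", "classification", "cnn"}),
    ]

    def search(query_tags):
        """Rank stored projects by tag overlap with the search query."""
        scored = [(len(p.tags & query_tags), p) for p in corpus]
        return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

    print([p.name for p in search({"classification", "images"})])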
Regarding claim 2, the rejection of claim 1 is incorporated; and Kadowaki does not explicitly disclose:
- further comprising an external feature store providing model training services over the communication network, including predefined feature engineering modules.
However, Li discloses:
- further comprising an external feature store providing model training services over the communication network, including predefined feature engineering modules (Paragraph [0109], “The processed input source may then be analyzed in a data explorer by a data diagnostic API. Finally, the processed and analyzed input source may then be optimized in a data model optimizer by a model development API [further comprising an external feature store providing model training services over the communication network]”; Paragraph [0113], “In another exemplary embodiment, the modularized framework may include a selection component such as, for example, a feature selection component. The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule [including predefined feature engineering modules]”) [Examiner’s remarks: As interpreted through the specification, the external feature store is a feature provided through a third party. Li discloses providing model training and feature engineering through APIs.].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “further comprising an external feature store providing model training services over the communication network, including predefined feature engineering modules”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Li describes a modularized development framework which eases the development process of AI projects. Including modules for feature selection and training provides an easier way to achieve commonly required model building steps. Therefore, it would be obvious to one of ordinary skill in the art to combine modularized feature selection and model training with an automated AI project developer and server.
Regarding claim 3, the rejection of claim 2 is incorporated; and Kadowaki discloses:
- … the processor-based AI project server (Paragraph [0050], “The input unit 19 a enables users to operate the at least one input device to enter various information items, receives the entered information items, and sends the received information items to the processing unit 12 [an input module]”; Paragraph [0092], “For example, the training unit 22 trains the candidate models fkr of the candidate integrated model Sc using a training dataset (X, Y) comprised of input data items X and output data items, i.e. ground-truth data items, Y that are respectively paired to the input data items X [a processor-based AI project server executing instructions that implement a model training]”; Paragraph [0119], “The evaluation metric calculator 24 performs an evaluation-metric calculation task of applying the test dataset (V, W) to the candidate integrated model Sc comprised of the candidate models fkr to thereby calculate an evaluation metric for evaluating the output of the candidate integrated model Sc”) …
Kadowaki does not explicitly disclose:
- further comprising a communication network electronically interconnecting the project store server, the data store server, the processor-based search engine, …, and the external feature store.
However, Li discloses:
- further comprising a communication network electronically interconnecting the [servers], the processor-based search engine, …, and the external feature store (Paragraph [0037], “The computer system 102 may operate as a standalone device or may be connected to other systems or peripheral devices. For example, the computer system 102 may include, or be included within, any one or more computers, servers, systems, communication networks, cloud environment, or container systems”; Paragraph [0109], “The processed input source may then be analyzed in a data explorer by a data diagnostic API. Finally, the processed and analyzed input source may then be optimized in a data model optimizer by a model development API”; Paragraph [0113], “In another exemplary embodiment, the modularized framework may include a selection component such as, for example, a feature selection component. The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule”) [Examiner’s remarks: Li discloses electronically connected servers, computers, and storages for automated AI development. One of ordinary skill in the art may combine the network with any number of components and servers.].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “further comprising a communication network electronically interconnecting the [servers], the processor-based search engine, …, and the external feature store”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Communication amongst different modules allows for better integration of different components. Therefore, it would be obvious to one of ordinary skill in the art to combine electronic networks with an automated AI project developer and server.
The combination of Kadowaki and Li does not explicitly disclose:
- … the project store server, the data store server…
However, Saha discloses:
- … the project store server, the data store server (Paragraph [0006], “According to an aspect of an embodiment, operations may include storing existing machine learning (ML) projects in a corpus, the existing ML projects including ML pipelines with functional blocks. The operations may also include generating a search query for a new ML project based on a new dataset for the new ML project and a new ML task for the new ML project. In addition, the operations may include searching through the existing ML projects stored in the corpus, based on the search query, for a set of existing ML projects”) …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Saha into the combined teachings of Kadowaki and Li to include “the project store server, the data store server”. As stated in Saha, “Automated ML (AutoML) is the process of automating the process of applying ML to real-world problems. AutoML may allow non-experts to make use of ML models and techniques without requiring them to first become ML experts” (Paragraph [0004]). Saving projects to shared servers allows for better collaboration, and for increased resource usage for users with less machine learning experience. Therefore, it would be obvious to one of ordinary skill in the art to combine storing projects to a server with an automated AI project developer and server.
Regarding claim 4, the rejection of claim 1 is incorporated; and Kadowaki does not explicitly disclose:
- wherein each of the existing AI projects stored on the project store server includes a detailed description of the AI project.
However, Li discloses:
- wherein each of the existing AI projects stored on the project store server includes a detailed description of the AI project (Paragraph [0011], “…and automatically generating, via a model explainer, at least one explanation document, the explanation document may include behavior information and interaction information that corresponds to at least one from among an input process, an interim process, and an output process of the at least one final model [wherein each of the existing AI projects stored on the project store server includes a detailed description of the AI project]”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “wherein each of the existing AI projects stored on the project store server includes a detailed description of the AI project”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Providing information on the function and development of an AI project allows the project to be more easily understood and used by collaborators. Therefore, it would be obvious to one of ordinary skill in the art to combine descriptions of a project with an automated AI project developer and server.
Regarding claim 5, the rejection of claim 4 is incorporated; and Kadowaki further discloses:
- wherein each of the existing AI projects … further includes accompanying models, …, and configuration parameters (Paragraph [0048], “Each model represents, for example, a data structure that is comprised of a functional form and model parameters”).
Kadowaki does not explicitly disclose:
- … stored on the project store server …, feature engineering modules, and …
However, Li discloses:
- … feature engineering modules (Paragraph [0113], “In another exemplary embodiment, the modularized framework may include a selection component such as, for example, a feature selection component. The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance.”)…
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “feature engineering modules”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Li describes a modularized development framework which eases the development process of AI projects. Including modules for feature selection provides an easier way to achieve commonly required model building steps. Therefore, it would be obvious to one of ordinary skill in the art to combine modularized feature selection using known methods with an automated AI project developer and server.
The combination of Kadowaki and Li does not explicitly disclose:
- … stored on the project store server …
However, Saha discloses:
- … stored on the project store server (Paragraph [0043], “The OSS ML project databases 102 a-102 n may be large-scale repositories of existing ML projects, with each ML project including include electronic data that includes at least a dataset, an ML task defined on the dataset, and an ML pipeline (e.g., a script or program code) that is configured to implement a sequence of operations to train an ML model for the ML task and to use the ML model for new predictions”)…
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Saha into the combined teachings of Kadowaki and Li to include “stored on the project store server ”. As stated in Saha, “Automated ML (AutoML) is the process of automating the process of applying ML to real-world problems. AutoML may allow non-experts to make use of ML models and techniques without requiring them to first become ML experts” (Paragraph [0004]). Saving projects to shared servers allows for better collaboration, and for increased resource usage for users with less machine learning experience. Therefore, it would be obvious to one of ordinary skill in the art to combine storing projects to a server with an automated AI project developer and server.
Regarding claim 6, the rejection of claim 1 is incorporated; and Kadowaki further discloses:
- wherein the datasets stored on the data store server have a common pattern (Paragraph [0111], “The information processing apparatus 10 of the exemplary embodiment is configured to evaluate the performance of the candidate integrated model trained by the training unit 22 using a test dataset (V, W) that have the same type as the type of the training dataset (X, Y); the test dataset (V, W) is comprised of input data items V and output data items W that are respectively paired to each other [wherein the datasets stored on the data store server have a common pattern]”).
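The “common pattern” limitation, as mapped above, amounts to the stored training and test datasets sharing one schema of paired inputs and outputs. For illustration only, a minimal Python sketch of such a schema check follows; the list-based dataset layout is an assumption of the sketch, not a disclosure of Kadowaki.

    # Minimal sketch of a common-pattern check (assumptions: datasets are
    # (inputs, outputs) pairs, as with Kadowaki's (X, Y) and (V, W) datasets).
    def same_pattern(train, test):
        """Verify paired inputs/outputs and matching input dimensionality."""
        (X, Y), (V, W) = train, test
        return (len(X) == len(Y) and len(V) == len(W)
                and len(X[0]) == len(V[0]))

    train = ([[0.1, 0.2], [0.3, 0.4]], [0, 1])   # (X, Y)
    test = ([[0.5, 0.6]], [1])                   # (V, W)
    print(same_pattern(train, test))             # True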
Regarding claim 7, the rejection of claim 1 is incorporated; and Kadowaki does not explicitly disclose:
- wherein the feature selection module implements a forward feature selection method, a backward feature elimination method, or any other selection method known to one of ordinary skill in the art.
However, Li discloses:
- wherein the feature selection module implements a forward feature selection method, a backward feature elimination method, or any other selection method known to one of ordinary skill in the art (Paragraph [0113], “In another exemplary embodiment, the modularized framework may include a selection component such as, for example, a feature selection component. The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule [wherein the feature selection module implements a forward feature selection method, a backward feature elimination method, or any other selection method known to one of ordinary skill in the art]”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “wherein the feature selection module implements a forward feature selection method, a backward feature elimination method, or any other selection method known to one of ordinary skill in the art”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Li describes a modularized development framework which eases the development process of AI projects. Including modules for feature selection provides an easier way to achieve commonly required model building steps. Therefore, it would be obvious to one of ordinary skill in the art to combine modularized feature selection using known methods with an automated AI project developer and server.
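For illustration only, the following is a minimal Python sketch of the forward feature selection and backward feature elimination methods recited in claim 7; the scikit-learn estimator and synthetic dataset are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of sequential feature selection (assumptions:
    # scikit-learn >= 0.24; logistic regression as the scoring estimator).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                               random_state=0)
    estimator = LogisticRegression(max_iter=1000)

    # Forward selection: start from no features and greedily add whichever
    # feature most improves the cross-validated score.
    forward = SequentialFeatureSelector(
        estimator, n_features_to_select=4, direction="forward").fit(X, y)

    # Backward elimination: start from all features and greedily remove the
    # least useful feature.
    backward = SequentialFeatureSelector(
        estimator, n_features_to_select=4, direction="backward").fit(X, y)

    print("forward: ", forward.get_support(indices=True))
    print("backward:", backward.get_support(indices=True))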
Regarding claim 8, the rejection of claim 1 is incorporated; and Kadowaki does not explicitly disclose:
- wherein the search engine further executes instructions that implement a grid search, random search, Bayesian search, or any other standard search method known to a person of ordinary skill in the art.
However, Li discloses:
- wherein the search engine further executes instructions that implement a grid search, random search, Bayesian search, or any other standard search method known to a person of ordinary skill in the art (Paragraph [0089], “In another exemplary embodiment, the data model optimizer may utilize a global optimization routine to optimize the model. The global optimization routine may include optimization routines such as, for example, a Bayesian optimization routine, a grid search optimization routine [wherein the search engine further executes instructions that implement a grid search, random search, Bayesian search, or any other standard search method known to a person of ordinary skill in the art]”; Paragraph [0092], “In another exemplary embodiment, the data model optimizer may enable the handling of a search space with both continuous parameter dimensions as well as discrete parameter dimensions. The data model optimizer may adapt a special search algorithm to better navigate the search space and to improve performance”; Paragraph [0094], “In another exemplary embodiment, the data model optimizer may automatically utilize error density analysis to re-optimize the feature selection choices and the engineering operation choices to improve model performance”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “wherein the search engine further executes instructions that implement a grid search, random search, Bayesian search, or any other standard search method known to a person of ordinary skill in the art”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Searching parameters for model optimization helps to develop a more accurate model. Therefore, it would be obvious to one of ordinary skill in the art to combine parameter search using known methods with an automated AI project developer and server.
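For illustration only, the following is a minimal Python sketch of the grid search and random search methods recited in claim 8 (a Bayesian search would typically use a dedicated optimization library and is omitted here); the SVM estimator and parameter ranges are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of hyperparameter search (assumptions: scikit-learn and
    # scipy are available; an SVM classifier stands in for the claimed model).
    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)

    # Grid search: exhaustively evaluate every combination in the grid.
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}, cv=3)
    grid.fit(X, y)

    # Random search: sample a fixed number of candidates from distributions.
    rand = RandomizedSearchCV(SVC(), {"C": loguniform(1e-2, 1e2)},
                              n_iter=10, cv=3, random_state=0)
    rand.fit(X, y)

    print(grid.best_params_, rand.best_params_)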
Regarding claim 9, the rejection of claim 1 is incorporated; and Kadowaki further discloses:
- wherein the model training and validation module provides for the creation, training, and validation of neural network-based models within the new AI projects (Paragraph [0092], “For example, the training unit 22 trains the candidate models fkr of the candidate integrated model Sc using a training dataset (X, Y) comprised of input data items X and output data items, i.e. ground-truth data items, Y that are respectively paired to the input data items X [wherein the model training and validation module provides for the creation, training]”; Paragraph [0119], “The evaluation metric calculator 24 performs an evaluation-metric calculation task of applying the test dataset (V, W) to the candidate integrated model Sc comprised of the candidate models fkr to thereby calculate an evaluation metric for evaluating the output of the candidate integrated model Sc [validation of neural network-based models within the new AI projects]”).
Regarding claim 10, the rejection of claim 9 is incorporated; and Kadowaki further discloses:
- wherein this model training and validation module may implement a cross-validation method for the training and validation of the neural network-based models within new AI projects (Paragraphs [0115]-[0117], “For example, the training unit 22 of the exemplary embodiment is configured to perform the training task of 1. Inputting the m gray-scale image data items to each of the candidate models fkr for training the corresponding one of the candidate models fkr 2. Using the remaining (M−m) image data items as test data items for testing the performance of the candidate integrated model Sc of the candidate models fkr trained by the training unit 22 [wherein this model training and validation module may implement a cross-validation method for the training and validation of the neural network-based models within new AI projects]”).
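For illustration only, the following is a minimal Python sketch of k-fold cross-validation, which generalizes the m-training/(M−m)-test split Kadowaki describes; the scikit-learn estimator and synthetic dataset are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of cross-validation (assumptions: each fold plays the
    # role of the held-out (M-m) test items; the rest are training items).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("per-fold accuracy:", scores, "mean:", scores.mean())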
Regarding claim 11, the rejection of claim 9 is incorporated; and Kadowaki does not explicitly disclose:
- wherein the model inference module provides for serving over the communications network of the neural network-based models that have been trained.
However, Li discloses:
- wherein the model inference module provides for serving over the communications network of the neural network-based models that have been trained (Paragraph [0059], “The server devices 204(1)-204(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks. The server devices 204(1)-204(n) hosts the databases 206(1)-206(n) that are configured to store data that relates to model development lifecycle metadata as well as input data source and corresponding final code”) [Examiner’s remarks: The completed (trained) model is saved on server devices, which must therefore include a communication network over which the model may be sent.].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the teachings of Kadowaki to include “wherein the model inference module provides for serving over the communications network of the neural network-based models that have been trained”. As stated in Li, “Therefore, there is a need for a centralized modularized framework with specialized modules utilizing model development best practices to automate and simplify the machine learning model development process” (Paragraph [0005]). Providing trained models to a shared server allows the model to be used by multiple people without requiring extensive resources for retraining. Therefore, it would be obvious to one of ordinary skill in the art to combine saving a trained model with an automated AI project developer and server.
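For illustration only, the following is a minimal Python sketch of a model inference module serving a trained model over a communications network; the Flask framework, the HTTP/JSON interface, and the “model.pkl” artifact are assumptions of the sketch, not disclosures of the cited references.

    # Minimal sketch of model serving (assumptions: Flask is installed and
    # "model.pkl" is a hypothetical, previously trained pickled model).
    import joblib
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = joblib.load("model.pkl")  # artifact produced by the trainer

    @app.route("/predict", methods=["POST"])
    def predict():
        # Accept a JSON feature vector over the network and return the
        # trained model's prediction.
        features = request.get_json()["features"]
        return jsonify({"prediction": model.predict([features]).tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)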
Regarding claim 12, the rejection of claim 1 is incorporated; and Kadowaki further discloses:
- wherein the input module provides for receiving user inputs directly or over the communication network (Paragraph [0050], “The input unit 19 a enables users to operate the at least one input device to enter various information items, receives the entered information items, and sends the received information items to the processing unit 12”).
Regarding claim 13, the rejection of claim 12 is incorporated; and Kadowaki further discloses:
- wherein the input module provides a graphical user interface over the communication network (Paragraph [0051], “The informing unit 19 b includes, for example, a display and/or a speaker. The informing unit 19 b is configured to provide, to users, visible and/or audible information through the display and/or speaker.”).
Claims 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210304073 A1 (hereinafter “Li”), in view of US 20220188700 A1 (hereinafter “Khavronin”), further in view of US 20220067575 A1 (hereinafter “Saha”).
Regarding claim 14, Li discloses:
A method for implementing the automated development of AI projects, the steps comprising (Paragraph [0008]):
- receiving at an input module a submission for a new AI project (Paragraph [0007], “The method includes receiving, from a user via a graphical user interface [receiving at an input module], at least one input that relates to a workflow, the workflow may include at least one from among a data engineering workflow and a feature development workflow; …and generating, via the data assembler, the at least one model for the at least one input by using at least one design matrix that relates to at least one explanatory variable from the at least one modeling strategy [a submission for a new AI project]”);
…
- submission to a search engine of submitted feature engineering modules included in the submission for a new AI project and the existing AI projects selected from the project store server (Paragraph [0092], “In another exemplary embodiment, the data model optimizer may enable the handling of a search space with both continuous parameter dimensions as well as discrete parameter dimensions. The data model optimizer may adapt a special search algorithm to better navigate the search space and to improve performance. In another exemplary embodiment, the continuous parameter dimension may include a numeric parameter that may hold any value in a specified interval. The discrete parameter dimension may include a numeric parameter that, for any value in a range of values that the parameter is permitted to hold, includes a positive minimum distance to the nearest other permissible value. In another exemplary embodiment, the search space may include a set of all possible points of an optimization problem that satisfy the problem's constraints such as, for example, an inequality constraint, an equality constraint, and an integer constraint”; Paragraph [0113], “The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule” [submission to a search engine of submitted feature engineering modules included in the submission for a new AI project and the existing AI projects selected from the project store server]);
- forming by the search engine of a search space that comprises all configuration parameters included in the submitted feature engineering modules (Paragraph [0092], “In another exemplary embodiment, the data model optimizer may enable the handling of a search space with both continuous parameter dimensions as well as discrete parameter dimensions. The data model optimizer may adapt a special search algorithm to better navigate the search space and to improve performance. In another exemplary embodiment, the continuous parameter dimension may include a numeric parameter that may hold any value in a specified interval. The discrete parameter dimension may include a numeric parameter that, for any value in a range of values that the parameter is permitted to hold, includes a positive minimum distance to the nearest other permissible value. In another exemplary embodiment, the search space may include a set of all possible points of an optimization problem that satisfy the problem's constraints such as, for example, an inequality constraint, an equality constraint, and an integer constraint”; Paragraph [0113], “The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule” [forming by the search engine of a search space that comprises all configuration parameters included in the submitted feature engineering modules]) [Examiner’s remarks: The search engine determines all possible configuration parameters within certain permissible values.];
- defining by the search engine of a set of candidate configuration parameters selected from within the search space (Paragraph [0092], “In another exemplary embodiment, the data model optimizer may enable the handling of a search space with both continuous parameter dimensions as well as discrete parameter dimensions. The data model optimizer may adapt a special search algorithm to better navigate the search space and to improve performance. In another exemplary embodiment, the continuous parameter dimension may include a numeric parameter that may hold any value in a specified interval. The discrete parameter dimension may include a numeric parameter that, for any value in a range of values that the parameter is permitted to hold, includes a positive minimum distance to the nearest other permissible value. In another exemplary embodiment, the search space may include a set of all possible points of an optimization problem that satisfy the problem's constraints such as, for example, an inequality constraint, an equality constraint, and an integer constraint [defining by the search engine of a set of candidate configuration parameters selected from within the search space]”) [Examiner’s remarks: Search algorithms are used to navigate subsections of the search space.];
- defining by the feature selection module of appropriate features from the submitted feature engineering modules (Paragraph [0113], “The feature selection component may correspond to an optimization methodology for intelligently discovering features from input and/or source data that provides optimal predictive performance. The optimization methodology may include a selection algorithm that receives an input such as, for example, a data set, a specified constraint such as a monotonic data constraint and/or a feature interaction constraint that the selection algorithm utilizes to inform an output, as well as a specified rule [defining by the feature selection module of appropriate features from the submitted feature engineering modules]”) [Examiner’s remarks: The feature selection component selects appropriate features based on given constraints for the model.];
…
- … and sending the trained model to the model inference module 104B for serving the trained model and to the project store server 108 for storing the trained model (Paragraph [0059], “The server devices 204(1)-204(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks. The server devices 204(1)-204(n) hosts the databases 206(1)-206(n) that are configured to store data that relates to model development lifecycle metadata as well as input data source and corresponding final code [and sending the trained model to the model inference module 104B for serving the trained model and to the project store server 108 for storing the trained model]”) [Examiner’s remarks: The completed (trained model) is saved on server devices.]; and
Li does not explicitly disclose:
…
- receiving at a project store server a selection of relevant existing AI projects;
…
- submitting the appropriate features and the candidate configuration parameters to a model training and validation module of an AI project server;
- gathering by the model training and validation module of resulting data from the neural network's output layer;
- transmitting by the AI project server of the resulting data to the search engine;
- determining by the search engine of whether or not the neural network configured with the candidate configuration parameters converges based on the resulting data received from the AI project store server;
- if there is convergence, creating by the AI project server of a trained model based on the neural network configured with the candidate configuration parameters …
- if convergence does not exist, defining by the search engine of an alternate set of configuration parameters selected from within the search pool and determining if convergence exists based on the alternate set of configuration parameters.
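For illustration only, the following is a minimal Python sketch tying together the steps recited above: forming a search space of configuration parameters, defining candidate parameters, submitting them for training and validation, and testing for convergence, with an alternate set selected when convergence does not exist. The toy search space, loss function, and settling-based convergence criterion are assumptions of the sketch, not disclosures of Li or Khavronin.

    # Minimal sketch of the claimed search/train/converge loop (assumptions:
    # tuples are continuous intervals and lists are discrete choices; a toy
    # loss stands in for the model training and validation module).
    import random

    rng = random.Random(0)
    search_space = {"learning_rate": (1e-4, 1e-1), "num_layers": [1, 2, 3]}

    def define_candidates(space):
        """Select one candidate set of configuration parameters."""
        return {name: rng.uniform(*dim) if isinstance(dim, tuple)
                else rng.choice(dim)
                for name, dim in space.items()}

    def train_and_validate(params):
        """Stand-in for training a network and gathering its resulting data."""
        return (params["learning_rate"] - 0.01) ** 2 / params["num_layers"]

    best, tol, stall = float("inf"), 1e-6, 0
    while stall < 10:                       # converged once loss has settled
        params = define_candidates(search_space)
        loss = train_and_validate(params)   # resulting data sent back
        if best - loss > tol:
            best, stall = loss, 0           # still improving: not converged
        else:
            stall += 1                      # no improvement: an alternate set
                                            # is tried on the next iteration
    print("converged; best validation loss:", best)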
However, Khavronin discloses:
- submitting the appropriate features and the candidate configuration parameters to a model training and validation module of an AI project server (Paragraph [0032], “The distributed model generation system includes a manager node (“manager”) and a plurality of training nodes (or “workers”). The manager operates a model parameter and/or hyperparameter (“(H)P”) optimization process, and at each instance or epoch of the training process, the manager directs each worker to run model training with respective sets of (H)Ps. Each of the workers trains and tests a local ML model using their respective (H)P sets, in parallel. Each worker independently provides their tested (H)P sets with calculated performance scores back to the manager, which then performs additional optimizations on the (H)P sets to produce more optimal (H)P sets. These more optimal (H)P sets are then sent to available workers to train and test their local models using the updated (H)P sets. This process continues until convergence is met [submitting the appropriate features and the candidate configuration parameters to a model training and validation module of an AI project server]”) [Examiner’s remarks: Appropriate features and candidate configuration parameters ((H)P) are sent to models to be tested via training.];
- gathering by the model training and validation module of resulting data from the neural network's output layer (Paragraph [0032], “The distributed model generation system includes a manager node (“manager”) and a plurality of training nodes (or “workers”). The manager operates a model parameter and/or hyperparameter (“(H)P”) optimization process, and at each instance or epoch of the training process, the manager directs each worker to run model training with respective sets of (H)Ps. Each of the workers trains and tests a local ML model using their respective (H)P sets, in parallel. Each worker independently provides their tested (H)P sets with calculated performance scores back to the manager, which then performs additional optimizations on the (H)P sets to produce more optimal (H)P sets. These more optimal (H)P sets are then sent to available workers to train and test their local models using the updated (H)P sets. This process continues until convergence is met [gathering by the model training and validation module of resulting data from the neural network's output layer]”) [Examiner’s remarks: Through the training process, data gathered involving error calculated from the model output is gathered and sent back to a manager.];
- transmitting by the AI project server of the resulting data to the search engine (Paragraph [0032], “The distributed model generation system includes a manager node (“manager”) and a plurality of training nodes (or “workers”). The manager operates a model parameter and/or hyperparameter (“(H)P”) optimization process, and at each instance or epoch of the training process, the manager directs each worker to run model training with respective sets of (H)Ps. Each of the workers trains and tests a local ML model using their respective (H)P sets, in parallel. Each worker independently provides their tested (H)P sets with calculated performance scores back to the manager, which then performs additional optimizations on the (H)P sets to produce more optimal (H)P sets. These more optimal (H)P sets are then sent to available workers to train and test their local models using the updated (H)P sets. This process continues until convergence is met [transmitting by the AI project server of the resulting data to the search engine]”) [Examiner’s remarks: The resulting data is sent back to a manager managing the search process.];
- determining by the search engine of whether or not the neural network configured with the candidate configuration parameters converges based on the resulting data received from the AI project store server (Paragraph [0246], “At operation 1930, the manager node 1724 determines if the result 1740 optimized (or includes an optimal (H)P set 1728). In some embodiments, the manager node 1724 may determine that the (H)P set 1728 included in the result 1740 converges. An ML model reaches convergence when it achieves a state during training in which loss settles to within an error range around a final value. In other words, a model converges when additional training will not improve the predictions/inferences produced by the model [determining by the search engine of whether or not the neural network configured with the candidate configuration parameters converges based on the resulting data received from the AI project store server]”) [Examiner’s remarks: The manager node managing the search determines whether a certain hyperparameter set has reached convergence based on data received from the running nodes training models.];
- if there is convergence, creating by the AI project server of a trained model based on the neural network configured with the candidate configuration parameters …(Paragraph [0032], “The distributed model generation system includes a