Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending for examination. Claims 1, 8, and 15 are independent.
Response to Amendment
This Office action is responsive to the amendment filed on 09/23/2025. As directed by the amendment, claims 1, 8, 15, and 19 have been amended.
Response to Arguments
Applicant's arguments filed 09/23/2025 have been fully considered but they are not fully persuasive.
Applicant's arguments regarding 35 U.S.C. § 101:
Applicant's arguments with respect to 35 U.S.C. § 101 have been fully considered and are persuasive. The rejection under 35 U.S.C. § 101 has been withdrawn.
Applicant's arguments regarding 35 U.S.C. § 103:
Examiner response: Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Convertino et al. (US 2020/0097847 A1, hereinafter "Convertino") in view of Nandakumar (US 2022/0138004 A1, hereinafter "Nandakumar"), Vaid et al. (US 2022/0004822 A1, hereinafter "Vaid"), and Conort et al. (US 2022/0076164 A1, hereinafter "Conort").
Regarding Claim 1
Convertino discloses: A computer-implemented method comprising:
loading . . . ([Fig 3 and Fig 6-8] disclose a computing environment to perform the method.), wherein the set of test case statistical data is based on a set of test cases corresponding to the machine learning model and comprises a plurality of input parameter sets and a corresponding set of output quality measurements ([Para 0044, 0048, 0088, Fig 1 (109), and Fig 9] describe experiments optimizing machine learning model hyperparameter values (i.e., input parameter sets) with resulting performance metrics (i.e., corresponding set of output quality measurements). [Fig 19 (1906)] also discloses test case statistical data.), and wherein the machine learning model is built by:
building the machine learning model based on the . . . ([Para 0042-0044, 0047-0048, 0118-0119, and Fig 1-2] describe building the machine learning model based on performance metrics and hyperparameter values.)
comparing user data on the user system against the set of test case statistical data ([Para 0089] states “Each experiment may represent the results of training an ML model configured using a particular combination of hyperparameter values using a training data set.” [Para 0123-0127] Para 0123 states “user can view and define files to include in an ML model development project (e.g., datasets, scripts, etc.),”. Examiner interprets a user-defined training dataset as user data that is input to the ML model and used to identify hyperparameters that optimize the ML model.), wherein the comparison identifies one of the plurality of input parameter sets to optimize the machine learning model based on the set of output quality measurements ([Para 0113-0115, 0142, Fig 15, and Fig 22 (2222)] describe a recommendation for selecting a hyperparameter value or combination of hyperparameter values (i.e., identified input parameters) that maximizes one or more ML model performance metrics (i.e., output quality measurements).);
generating, at the user system, an optimized machine learning model using the machine learning model and the identified input parameter set ([Para 0113-0115, 0119-0120, and Fig 16] describe generating a model with recommended hyperparameter values to optimize the ML model.);
displaying one or more similar sets of test data on a user interface ([Para 0151-0153, 0165, 0103-0105, Fig 11, and Fig 25B] describes displaying to a user an interactive listing of proposed experiments to run to test the various hyperparameter combinations (i.e., test data).);
responsive to receiving through the user interface a selection choice from the one or more similar sets of test data, applying the selection choice to the machine learning model and predicting an accuracy of the optimized machine learning model ([Para 0153, 0114-0118, Fig 15, and Fig 25C] describes presenting the user results of the experiments (i.e., selection choice) and various performance metrics (i.e., accuracy; see Fig 25C) resulting from application of the resulting ML model.);
outputting the optimized machine learning model to the user through the user interface ([Para 0042-0043, 0049-0050, 0119-0120, 0165, Fig 1-2, and Fig 16] describes deploying and sharing the generated ML model.); and
responsive to receiving acceptance of the optimized machine learning model through the user interface, executing the optimized machine learning model and ([Para 0041-0042, 0049-0050, 0057-0059, 0119-0120, 0137, Fig 1-2, and Fig 16] describes deploying and sharing the ML model after being accepted by a user, and also describes collecting information such as annotations.)
Convertino does not explicitly disclose: loading a machine learning model into a user system;
However, Nandakumar discloses in the same field of endeavor: loading a machine learning model into a user system ([Para 0123, 0125, 0127, Fig 9, and Fig 12] describes downloading a machine learning model from a container onto a computer.);
Convertino and Nandakumar are both analogous art to the present invention because both are from the same field of endeavor directed to machine learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for hyperparameter tuning disclosed by Convertino with the method for downloading a machine learning model disclosed by Nandakumar. One of ordinary skill in the art would have been motivated to make this modification in order to receive a machine learning model for execution (Para 0123, Nandakumar).
Convertino in view of Nandakumar does not explicitly disclose: transforming univariate and bivariate statistics into a scaled format such that the statistics are combined through feature scaling; and building the machine learning model based on the transformed univariate and bivariate statistics, and the plurality of input parameter sets;
However, Vaid discloses in the same field of endeavor: transforming univariate and bivariate statistics into a scaled format such that the statistics are combined through feature scaling ([Para 0054, 0068, 0073, 0076, Fig 2, and Fig 5] describe a composite (i.e., combined) normalized (i.e., scaled) value of univariate and bivariate data features.); and building the machine learning model based on the transformed univariate and bivariate statistics, and the plurality of input parameter sets ([Para 0054-0060, 0069-0070, and Fig 2B] describes generating a model based on the normalized statistics.);
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for hyperparameter tuning disclosed by Convertino with the method for downloading a machine learning model disclosed by Nandakumar, and with the method for data quality analysis disclosed by Vaid. One of ordinary skill in the art would have been motivated to make this modification in order to measure quality based on a composite normalized value.
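For illustration only, the following minimal sketch shows one way univariate and bivariate statistics can be placed in a scaled format and combined through feature scaling, consistent with the limitation mapped to Vaid above; the function and values are hypothetical and are not taken from Vaid or any other applied reference.

```python
# Illustrative sketch: univariate and bivariate statistics are min-max scaled
# to [0, 1] so they can be combined into a single composite value.
# Hypothetical data; not reproduced from any applied reference.
import numpy as np

def min_max_scale(values: np.ndarray) -> np.ndarray:
    """Scale values to [0, 1]; constant inputs map to zeros."""
    span = values.max() - values.min()
    return (values - values.min()) / span if span > 0 else np.zeros_like(values)

univariate = np.array([12.0, 3.5, 40.0])   # e.g., per-feature means
bivariate = np.array([0.9, -0.2, 0.4])     # e.g., pairwise correlations

# After scaling, both statistic families share a common range and can be
# combined (here, averaged) into one composite normalized value.
composite = (min_max_scale(univariate) + min_max_scale(bivariate)) / 2.0
print(composite)
```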
Convertino in view of Nandakumar and Vaid does not explicitly disclose: refreshing the optimized machine learning model, by collecting information to add a new pre-trained machine learning model.
However, Conort discloses in the same field of endeavor: refreshing the optimized machine learning model, by collecting information to add a new pre-trained machine learning model. ([Para 0318-0322 and Fig 6] describes refreshing a model and collecting data.)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for hyperparameter tuning disclosed by Convertino with the method for downloading a machine learning model disclosed by Nandakumar, the method for data quality analysis disclosed by Vaid, and the method for automated machine learning models disclosed by Conort. One of ordinary skill in the art would have been motivated to make this modification in order to further improve a prediction model.
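For illustration only, the core selection step mapped above (identifying the input parameter set that optimizes the model based on the set of output quality measurements) can be sketched as follows; the parameter names and values are hypothetical and are not drawn from Convertino.

```python
# Illustrative sketch: select, from test case statistical data, the input
# parameter set whose recorded output quality measurement is highest.
# All names and values below are hypothetical.

test_case_stats = [
    # (input parameter set, output quality measurement)
    ({"learning_rate": 0.10, "max_depth": 4}, 0.81),
    ({"learning_rate": 0.05, "max_depth": 6}, 0.88),
    ({"learning_rate": 0.01, "max_depth": 8}, 0.84),
]

# Identify the input parameter set that maximizes the quality measurement.
best_params, best_quality = max(test_case_stats, key=lambda pair: pair[1])
print(best_params, best_quality)  # {'learning_rate': 0.05, 'max_depth': 6} 0.88
```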
Regarding Claim 8
Convertino in view of Nandakumar, Vaid, and Conort discloses: An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a set of computer program instructions stored in the memory and executed by at least one of the processors ([Fig 3, Fig 6-8, and Fig 29], Convertino discloses a computing system.) in order to perform actions of: (Claim 8 is a system claim corresponding to claim 1, and the remaining limitations are rejected on the same grounds.)
Regarding Claim 15
Convertino in view of Nandakumar, Vaid, and Conort discloses: A computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system ([Fig 3, Fig 6-8, and Fig 29], Convertino discloses a computing environment.) to perform actions comprising: (Claim 15 is a program product claim corresponding to claim 1, and the remaining limitations are rejected on the same grounds.)
Claims 2-4, 9-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Convertino et al. (US 2020/0097847 A1, hereinafter "Convertino") in view of Nandakumar (US 2022/0138004 A1, hereinafter "Nandakumar"), Vaid et al. (US 2022/0004822 A1, hereinafter "Vaid"), Conort et al. (US 2022/0076164 A1, hereinafter "Conort"), and Jiang et al. (US 2006/0161403 A1, hereinafter "Jiang").
Regarding Claim 2
Convertino in view of Nandakumar, Vaid, and Conort discloses: The computer-implemented method of claim 1 wherein, at a developer system, the method further comprises:
constructing the set of test case statistical data by combining the set of . . . ([Para 0044, 0048, 0088, Fig 1 (109), and Fig 9], Convertino describe experiments optimizing machine learning model hyperparameter values (i.e., input parameter sets) with resulting performance metrics (i.e., corresponding set of output quality measurements). [Fig 19 (1906)] also discloses test case statistical data.);
running the set of test cases with the plurality of input parameter sets to generate the set of output quality measurements ([Para 0044, 0048, 0088, Fig 1 (109), and Fig 9], Convertino describe experiments (i.e., test cases) for hyperparameter values (i.e., input parameter sets) with resulting performance metrics (i.e., generated output quality measurements).);
Convertino in view of Nandakumar, Vaid, and Conort does not explicitly disclose: collecting a set of test data corresponding to the set of test cases; transforming the set of test data into a set of transformed descriptive statistics, wherein the transforming comprises a set of analytic computations, a set of scaling computations, and a set of sorting computations;
However, Jiang discloses in the same field of endeavor: collecting a set of test data corresponding to the set of test cases ([Para 0011, Para 0018, Para 0106, Para 0111, and Claim 15] describe test data for testing the model.);
transforming the set of test data into a set of transformed descriptive statistics ([Para 0064-0065, 0071, 0106, Appendix A, Fig 4, and Fig 6-7] disclose univariate statistics (i.e., descriptive statistics) for test/validation data.), wherein the transforming comprises a set of analytic computations ([Para 0064-0065] describe calculations such as averaging, dispersion, and skewness (i.e., analytic computations).), a set of scaling computations ([Para 0108-0109 and Fig 6] describes scaling and normalization computations.), and a set of sorting computations ([Para 0068, Appendix A, and Claim 5] disclose sorting computations.);
Convertino, Nandakumar, Vaid, Conort, and Jiang are analogous art to the present invention because all are from the same field of endeavor directed to machine learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for hyperparameter tuning disclosed by Convertino with the method for downloading a machine learning model disclosed by Nandakumar, the method for data quality analysis disclosed by Vaid, the method for automated machine learning models disclosed by Conort, and the method for hyperparameter tuning disclosed by Jiang. One of ordinary skill in the art would have been motivated to make this modification in order to analyze data and validate a model (Para 0020, Jiang).
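For illustration only, a minimal sketch of the claimed transformation of test data into descriptive statistics, using analytic, scaling, and sorting computations as mapped to Jiang above, is shown below; the data values are hypothetical and are not reproduced from Jiang.

```python
# Illustrative sketch: transform a set of test data into descriptive statistics
# via analytic, scaling, and sorting computations. Hypothetical data only.
import statistics

test_data = [4.0, 7.5, 2.0, 9.0, 5.5]

# Analytic computations: averaging, dispersion, and skewness.
mean = statistics.mean(test_data)
stdev = statistics.stdev(test_data)
skew = sum((x - mean) ** 3 for x in test_data) / (len(test_data) * stdev ** 3)

# Scaling computation: standardize each observation (z-score).
scaled = [(x - mean) / stdev for x in test_data]

# Sorting computation: order the scaled values for rank-based summaries.
descriptive_stats = {"mean": mean, "stdev": stdev, "skewness": skew,
                     "sorted_scaled": sorted(scaled)}
print(descriptive_stats)
```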
Regarding Claim 3
Convertino in view of Nandakumar, Vaid, Conort, and Jiang discloses: The computer-implemented method of claim 2 further comprising:
packaging the test case statistical data, the machine learning model, and an initial parameter optimizer into a deployment package ([Para 0026, 0058, 0125, and Fig 9-10], Nandakumar describes Packaged AI including a template comprising hyperparameter tuning (e.g., test case statistical data), algorithm selection (i.e., machine learning model), and data preparation/transformations (i.e., initial parameter optimizer).); and
deploying the deployment package from the developer system to the user system ([Para 0120] Convertino “the ML model can be deployed, for example, into an enterprise computer system to perform business processes, or may be shared with other users such as other data science professionals or ML professionals.”).
Regarding Claim 4
Convertino in view of Nandakumar, Vaid, Conort, and Jiang discloses: The computer-implemented method of claim 3 wherein, at the user system, the method further comprises:
collecting, by the initial parameter optimizer, a set of user data characteristics of the user data ([Para 0020, 0064, and Fig 2(108)] Jiang Para 0064 states “Exploratory data analysis is the process of examining features of a dataset prior to model building.” Examiner interprets extracting features from the training data as collecting data characteristics of the user data.);
transforming the set of user data characteristics into a set of transformed user data statistics ([Para 0046] Jiang “At step 108, exploratory analysis of the data is performed. At step 110, automatic analysis of the data to build a statistical model is performed.” [Para 0028, 0065] Jiang describes calculating univariate statistics on the training data (i.e., user data statistics).);
receiving a set of user parameters from a user ([Para 0150 and Fig 25] Convertino describes receiving user hyperparameter values.);
predicting a model quality of the machine learning model based on the set of user parameters, the set of transformed user data statistics, and the set of output quality measurements ([Para 0153, Fig 25C (2514), and Fig 28] Convertino depicts various performance metrics included in the results of the experiment as well as a graphical representation of the predictions versus actual values resulting from application of the resulting ML model.); and
displaying the predicted model quality of the machine learning model to the user at the user system ([Para 0153, Fig 25C (2514), and Fig 28] Convertino describes an interface for presenting the performance metrics.).
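For illustration only, the quality-prediction step mapped above can be sketched as an inverse-distance-weighted estimate over prior test cases; the vectors and measurements below are hypothetical and are not taken from Convertino or Jiang.

```python
# Illustrative sketch: estimate model quality for the user's parameters and
# transformed data statistics from prior output quality measurements of
# similar test cases (inverse-distance weighting). Hypothetical values only.
import math

prior_cases = [
    # (user-parameter/statistics vector, recorded output quality measurement)
    ([0.05, 6.0, 0.9], 0.88),
    ([0.10, 4.0, 0.2], 0.81),
]
query = [0.06, 6.0, 0.8]  # user parameters + transformed user data statistics

def weight(a, b):
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (d + 1e-9)  # closer cases contribute more

weights = [weight(vec, query) for vec, _ in prior_cases]
predicted_quality = sum(w * q for w, (_, q) in zip(weights, prior_cases)) / sum(weights)
print(round(predicted_quality, 3))  # displayed to the user as predicted quality
```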
Regarding Claim 9
(Claim 9 recites analogous limitations to claim 2 and therefore is rejected on the same ground as claim 2.)
Regarding Claim 10
(Claim 10 recites analogous limitations to claim 3 and therefore is rejected on the same ground as claim 3.)
Regarding Claim 11
(Claim 11 recites analogous limitations to claim 4 and therefore is rejected on the same ground as claim 4.)
Regarding Claim 16
(Claim 16 recites analogous limitations to claim 2 and therefore is rejected on the same ground as claim 2.)
Regarding Claim 17
(Claim 17 recites analogous limitations to claim 3 and therefore is rejected on the same ground as claim 3.)
Regarding Claim 18
(Claim 18 recites analogous limitations to claim 4 and therefore is rejected on the same ground as claim 4.)
Claims 5-7, 12-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Convertino et al. (US 2020/0097847 A1, hereinafter "Convertino") in view of Nandakumar (US 2022/0138004 A1, hereinafter "Nandakumar"), Vaid et al. (US 2022/0004822 A1, hereinafter "Vaid"), Conort et al. (US 2022/0076164 A1, hereinafter "Conort"), Jiang et al. (US 2006/0161403 A1, hereinafter "Jiang"), and Feurer et al. ("Initializing Bayesian Hyperparameter Optimization via Meta-Learning", hereinafter "Feurer").
Regarding Claim 5
Convertino in view of Nandakumar, Vaid, Conort, and Jiang discloses: The computer-implemented method of claim 3 further comprising:
collecting, by the initial parameter optimizer, a set of user data characteristics of the user data ([Para 0020, 0064, and Fig 2(108)] Jiang Para 0064 states “Exploratory data analysis is the process of examining features of a dataset prior to model building.” Examiner interprets extracting features from the training data as collecting data characteristics of the user data.);
transforming the set of user data characteristics into a set of transformed user data statistics ([Para 0046] Jiang “At step 108, exploratory analysis of the data is performed. At step 110, automatic analysis of the data to build a statistical model is performed.” [Para 0028, 0065] Jiang describes calculating univariate statistics on the training data (i.e., user data statistics).);
Convertino in view of Nandakumar, Vaid, Conort, and Jiang does not explicitly disclose: calculating a set of data similarity values between the set of transformed user data statistics and the set of transformed descriptive statistics; and selecting a subset of the plurality of input parameter sets from the test case statistical data based on the set of data similarity values.
However, Feurer discloses in the same field of endeavor: calculating a set of data similarity values between the set of transformed user data statistics and the set of transformed descriptive statistics ([Page 1120 and Algorithms 1-2] disclose calculating a distance metric between datasets (i.e., data similarity values).); and
selecting a subset of the plurality of input parameter sets from the test case statistical data based on the set of data similarity values ([Abstract, Page 1120, and Algorithms 1-2] disclose determining the best hyperparameter configuration θ based on the distance metrics (i.e., data similarity values).).
Convertino, Nandakumar, Vaid, Conort, Jiang, and Feurer are analogous art to the present invention because all are from the same field of endeavor directed to machine learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for hyperparameter tuning disclosed by Convertino with the method for downloading a machine learning model disclosed by Nandakumar, the method for data quality analysis disclosed by Vaid, the method for automated machine learning models disclosed by Conort, the method for hyperparameter tuning disclosed by Jiang, and the method for hyperparameter optimization disclosed by Feurer. One of ordinary skill in the art would have been motivated to make this modification in order to speed up hyperparameter optimization by starting from promising configurations that performed well on similar datasets (Abstract, Feurer).
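For illustration only, the meta-learning idea credited to Feurer above (ranking prior datasets by a distance over their statistics and warm-starting from the nearest configurations) can be sketched as follows; all numbers are hypothetical and are not taken from the Feurer paper.

```python
# Illustrative sketch: compute a distance between the new dataset's
# meta-feature statistics and those of prior test cases, then warm-start
# with the hyperparameters of the nearest case(s). Hypothetical values only.
import math

prior_cases = [
    # (meta-feature vector of transformed statistics, best hyperparameters found)
    ([0.2, 0.9, 0.4], {"learning_rate": 0.05, "max_depth": 6}),
    ([0.8, 0.1, 0.7], {"learning_rate": 0.10, "max_depth": 4}),
]
user_meta_features = [0.25, 0.85, 0.5]

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank prior test cases by similarity and take the closest configuration(s)
# as the selected subset of input parameter sets.
ranked = sorted(prior_cases, key=lambda case: l2_distance(case[0], user_meta_features))
warm_start_params = [params for _, params in ranked[:1]]
print(warm_start_params)  # [{'learning_rate': 0.05, 'max_depth': 6}]
```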
Regarding Claim 6
Convertino in view of Nandakumar, Vaid, Conort, Jiang, and Feurer discloses: The computer-implemented method of claim 5 further comprising:
predicting a set of model quality values of the machine learning model based on the subset of input parameter sets, the set of transformed user data statistics, and the set of output quality measurements ([Para 0153, Fig 22 (2220), Fig 25C (2514), and Fig 28] Convertino describes various performance metrics included in the experiment results, along with the predictions versus actual values resulting from application of the resulting ML model.);
displaying the set of model quality values and corresponding subset of input parameter sets to the user at the user system ([Para 0153, Fig 22 (2220), Fig 25C (2514), and Fig 28] Convertino describes an interface for presenting model performance metrics.);
receiving a selection from the user that selects one of the subsets of input parameter sets concurrently ([Para 0131, Claim 3, and Claim 5], Convertino describes a user selecting a subset of experiments and dynamically updating. [Para 0059, 0061, 0090, and Fig 9] also describe running multiple versions of experiments.); and
creating the optimized machine learning model using the selected subset of input parameter sets ([Para 0120 and Fig 16] Convertino discloses an optimized ML model generated based on the tuned hyperparameter values.).
Regarding Claim 7
Convertino in view of Nandakumar, Vaid, Conort, Jiang, and Feurer discloses: The computer-implemented method of claim 6 further comprising:
receiving, from the user, a different selection that selects a different one of the subsets of input parameter sets, wherein the selection and the different selection are received concurrently ([Para 0131, Claim 3, and Claim 5], Convertino discloses different user-selected subsets of experiments and dynamically updating based on the selections. [Para 0059, 0061, 0090, and Fig 9] also describe running different versions of experiments in parallel.); and
creating a different optimized machine learning model using the different subset of input parameter sets, wherein the different optimized machine learning model is created concurrently with the optimized machine learning model ([Para 0059, 0061, 0090-0091, 0136, Fig 9, Fig 20, and Fig 28], Convertino describes running different versions of experiments in parallel and a line graph 2002c that plots accuracy values for an optimized model returned for each experiment.).
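For illustration only, the concurrent creation of two optimized models from two different selected input parameter sets, as recited in claims 6-7, can be sketched as follows; train_model is a hypothetical stand-in, not an API of any applied reference.

```python
# Illustrative sketch: two different selected input parameter sets are
# processed concurrently, yielding two optimized models. Hypothetical only.
from concurrent.futures import ThreadPoolExecutor

def train_model(params: dict) -> str:
    # Placeholder for fitting a model with the given hyperparameters.
    return f"model(lr={params['learning_rate']}, depth={params['max_depth']})"

selection = {"learning_rate": 0.05, "max_depth": 6}
different_selection = {"learning_rate": 0.10, "max_depth": 4}

# Both selections are submitted at once; the models are created concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(train_model, p) for p in (selection, different_selection)]
    models = [f.result() for f in futures]
print(models)
```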
Regarding Claim 12
(Claim 12 recites analogous limitations to claim 5 and therefore is rejected on the same ground as claim 5.)
Regarding Claim 13
(Claim 13 recites analogous limitations to claim 6 and therefore is rejected on the same ground as claim 6.)
Regarding Claim 14
(Claim 14 recites analogous limitations to claim 7 and therefore is rejected on the same ground as claim 7.)
Regarding Claim 19
(Claim 19 recites analogous limitations to claim 5 and therefore is rejected on the same ground as claim 5.)
Regarding Claim 20
(Claim 20 recites analogous limitations to claim 6 and therefore is rejected on the same ground as claim 6.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al. (US 2023/0132064 A1) describes an automated machine learning framework.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEWODROS E MENGISTU whose telephone number is (571)270-7714. The examiner can normally be reached Mon-Fri 9:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDULLAH KAWSAR can be reached at (571)270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TEWODROS E MENGISTU/Examiner, Art Unit 2127