Prosecution Insights
Last updated: April 19, 2026
Application No. 18/341,120

PIPELINE SELECTION FOR MACHINE LEARNING MODEL BUILDING

Final Rejection §103
Filed: Jun 26, 2023
Examiner: DAUD, ABDULLAH AHMED
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 91 granted / 167 resolved; -0.5% vs TC avg)
Interview Lift: +33.6% (strong; among resolved cases with interview)
Typical Timeline: 4y 0m avg prosecution; 32 currently pending
Career History: 199 total applications across all art units

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 69.0% (+29.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 167 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDS submitted on 7/28/2025 has been considered by the examiner.

Response to Amendment

This Office action is in response to Applicant's amendment filed on 10/17/2025. Claims 1-20 are pending. Claims 1, 10, and 16 are amended. Claims 1-20 are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-8, 10-11, 13-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shi, Hai-bo et al. (Chinese Patent Document No. CN 116128065 A), hereafter referred to as "Shi", in view of Durvasula, Vsm et al. (PGPUB Document No. 20230117893), hereafter referred to as "Durvasula", in view of Heeseung, Choi et al. (Korean Patent Document No.
KR 20230138605), hereafter referred to as "Heeseung", in further view of Nazir, Mubbashir et al. (US Patent No. 11798090), hereafter referred to as "Nazir".

Regarding Claim 1 (Currently Amended), Shi teaches A computer-implemented method comprising: performing cross-validation runs for a plurality of dataset-pipeline combinations combined from a plurality of datasets and a plurality of machine learning pipelines, building from the cross-validation runs a matrix of accuracy scores, the accuracy scores being first accuracy scores, including a respective accuracy score for each dataset-pipeline combination of the plurality of dataset-pipeline combinations (Shi, in para 0013, discloses a validation matrix of ML pipeline vs. dataset that represents the accuracy performance in each corresponding cell: "The performance matrix A represents the classification accuracy of each machine learning pipeline on each data set: the row vector of the performance matrix A represents the data set, and the column vector represents each machine learning pipeline").

But Shi does not explicitly teach the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine learning model using a respective dataset and a respective pipeline of the dataset-pipeline combination;
factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; and augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors.

However, in the same field of endeavor of machine learning model training for data pipelines, Durvasula teaches the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine learning model using a respective dataset and a respective pipeline of the dataset-pipeline combination (Durvasula, Fig. 6E and para 0117, discloses generating and training a machine learning model data pipeline and further combining data pipelines (elements 644a and 644b of Fig. 6E): "The ML model may learn to organize data 642 into a data pipeline resembling the training examples. Block 644b is an examples of meta-machine learning, wherein machine learning techniques are used to build other machine learning models. …… learning models to train a machine learning model at block 644b to generate data pipelines, based on the data 642, that may be used to train additional machine learning models. The models trained at blocks 644 may be combined into a data pipeline pattern engine at block 646"; where Shi, in para 0013, discloses a validation matrix of ML pipeline vs. dataset).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of combining data pipelines of Durvasula into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi to produce the expected result of having the machine learning models trained with the relevant dataset. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the machine learning system by keeping model training and operation separated (Durvasula, para 0062).

But Shi and Durvasula do not explicitly teach factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; and augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors.
However, in the same field of endeavor of machine learning model evaluation, Heeseung teaches factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors (Heeseung, para 0104, discloses that a matrix can be factored or decomposed into its constituent latent factors: "the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements"; by applying this disclosed technique, the matrix of accuracy scores comprising ML pipelines and datasets taught by Shi can be decomposed into its latent factors); augmenting the matrix of accuracy scores, the augmenting comprising: augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores (Heeseung, para 0135, discloses augmenting a matrix to improve the accuracy/recommendation value: "the correlation matrix may be dimensionally augmented, such as by SVD, prior to matrix factorization. Once the augmented correlation matrix is factorized into a user matrix and a destination matrix, a recommendation model is trained based on them"); factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors (Heeseung, para 0104, discloses that any matrix can be factored or decomposed into its constituent latent factors: "the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements"; by applying this disclosed technique, the augmented matrix of accuracy scores taught by Shi can be decomposed into refined pipeline latent factors and refined dataset latent factors).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of augmenting and factoring the performance matrix of a machine learning model of Heeseung into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi and Durvasula to produce the expected result of reducing learning model errors. The modification would be obvious because one of ordinary skill in the art would be motivated to augment the performance matrix to reduce the root mean square error (RMSE) of the performance evaluation matrix (Heeseung, para 0102).
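The factoring-and-augmenting scheme recited in the claim can be illustrated with a small sketch. This is illustrative only; the matrix values, the rank, and the `factor` helper are hypothetical and are not taken from Shi, Heeseung, or any other cited reference, and truncated SVD stands in for whichever factorization technique a practitioner might use:

```python
import numpy as np

# Hypothetical accuracy matrix: rows = datasets, columns = ML pipelines.
# Each cell holds a cross-validation accuracy score for one dataset-pipeline combination.
A = np.array([
    [0.91, 0.84, 0.77],
    [0.62, 0.88, 0.70],
    [0.85, 0.79, 0.93],
])

def factor(matrix, rank):
    """Factor a score matrix into dataset and pipeline latent factors via truncated SVD."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    dataset_factors = U[:, :rank] * np.sqrt(s[:rank])        # one row per dataset
    pipeline_factors = Vt[:rank, :].T * np.sqrt(s[:rank])    # one row per pipeline
    return dataset_factors, pipeline_factors

dataset_f, pipeline_f = factor(A, rank=2)

# Augment: append a row of second accuracy scores obtained for a new dataset,
# then re-factor to obtain refined dataset and pipeline latent factors.
new_scores = np.array([[0.80, 0.75, 0.90]])
A_aug = np.vstack([A, new_scores])
refined_dataset_f, refined_pipeline_f = factor(A_aug, rank=2)
```

The low-rank product `dataset_f @ pipeline_f.T` approximates the score matrix, which is what lets the latent factors predict how an unevaluated pipeline would score on a dataset.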
But Shi, Durvasula, and Heeseung do not explicitly teach selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines.

However, in the same field of endeavor of machine learning model evaluation, Nazir teaches selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines (Nazir, col 23:1~9, discloses selecting learning models and running test data to obtain model accuracies: "the best n performing models on the whole test dataset are selected. If an autotuner is used, the best performing models generated by the autotuner are selected. …. The best performing models may be selected based on criteria including, e.g., RMSE (root mean square error), MSE (mean square error), MAE (mean absolute error), accuracy, precision, recall, etc., between test data and values predicted by the model for the test data").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of selecting a set of learning models and running test data on them of Nazir into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, and Heeseung to produce the expected result of increasing the model accuracy. The modification would be obvious because one of ordinary skill in the art would be motivated to increase the model's accuracy by periodically retraining the model when new data becomes available (Nazir, col 22:26~28).

Regarding claim 2 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 1, and Nazir further teaches wherein the selected subset of machine learning pipelines comprises k highest-performing machine learning pipelines based on the accuracy scores of combinations of those machine learning pipelines with datasets of the plurality of datasets (Nazir, col 23:1~9, discloses selection of the best n performing learning models based on accuracy scores/RMSE: "the best n performing models on the whole test dataset are selected. If an autotuner is used, the best performing models generated by the autotuner are selected. …. The best performing models may be selected based on criteria including, e.g., RMSE (root mean square error), MSE (mean square error), MAE (mean absolute error), accuracy, precision, recall, etc., between test data and values predicted by the model for the test data").

Regarding claim 3 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 1, and Shi further teaches wherein the factoring the augmented matrix of accuracy scores comprises using an objective function that includes a loss function and regularization penalty (Shi, para 0038, further discloses optimizing performance accuracy via a loss function and regularization: "The fusion loss function module adopts regularization of the parameter matrix and the total loss function composed of four loss functions, repeatedly trains and iterates to calculate and update the performance matrix, optimizes the pipeline recommendation unit in the automatic machine learning system, and obtains the machine learning pipeline to be recommended according to the performance matrix……").
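The objective function recited in claim 3 (a loss function plus a regularization penalty) commonly takes the form of a squared reconstruction loss over the score matrix plus an L2 penalty on both latent-factor matrices. The sketch below is an assumption about that common form, not Shi's exact formulation:

```python
import numpy as np

def mf_objective(A, D, P, lam=0.1):
    """Regularized matrix-factorization objective for a score matrix A
    (datasets x pipelines) with dataset factors D and pipeline factors P.
    Returns loss term + regularization penalty term."""
    loss = np.sum((A - D @ P.T) ** 2)                   # loss function
    penalty = lam * (np.sum(D ** 2) + np.sum(P ** 2))   # regularization penalty
    return loss + penalty
```

Minimizing this objective (e.g., by alternating least squares or gradient descent) yields the latent factors; the penalty term keeps them from overfitting the observed accuracy scores.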
Regarding claim 7 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 1, and Shi further teaches further comprising building and outputting a machine learning model using a selected machine learning pipeline of the at least one machine learning pipeline (Shi, para 0004, further discloses outputting/recommending the most efficient ML pipeline for a new dataset based on latent features of the ML pipeline and dataset: "a novel bidirectional stacked autoencoder is utilized to simultaneously learn the latent feature representations of the dataset and the machine learning pipeline, which can effectively learn the synergy of the dataset and the machine learning pipeline, and can recommend a suitable machine learning pipeline for a new dataset……").

Regarding claim 8 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 7, and Shi further teaches further comprising selecting the selected machine learning pipeline based on a time and compute budget (Shi, para 0004, discloses selecting the ML pipeline based on running-time performance: "the system is implemented to select the base pipeline from candidate machine learning pipelines based on running time, classification performance……").
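Claims 2 and 8 together describe picking the k highest-scoring pipelines and then choosing among them under a time/compute budget. A minimal sketch follows; the pipeline names, scores, and cost figures are invented for illustration and do not come from any cited reference:

```python
# Hypothetical per-pipeline accuracy scores and runtime costs (e.g., seconds).
scores = {"pipe_a": 0.92, "pipe_b": 0.88, "pipe_c": 0.95, "pipe_d": 0.70}
costs = {"pipe_a": 120, "pipe_b": 45, "pipe_c": 300, "pipe_d": 20}

def top_k(scores, k):
    """Return the k highest-performing pipelines by accuracy score (cf. claim 2)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def select_within_budget(candidates, costs, budget):
    """Among ranked candidates, pick the best pipeline whose cost fits the
    time/compute budget (cf. claim 8); None if nothing fits."""
    for p in candidates:
        if costs[p] <= budget:
            return p
    return None

best = top_k(scores, k=3)                               # ['pipe_c', 'pipe_a', 'pipe_b']
chosen = select_within_budget(best, costs, budget=150)  # 'pipe_a'
```

The two-stage design mirrors the claim structure: accuracy ranks the candidates first, and the budget acts as a filter over that ranking rather than over all pipelines.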
Regarding Claim 10 (Currently Amended), Shi teaches A computer system comprising: a memory; and a processor in communication with the memory, wherein the computer system is configured to perform a method comprising: performing cross-validation runs for a plurality of dataset-pipeline combinations combined from a plurality of datasets and a plurality of machine learning pipelines, and building from the cross-validation runs a matrix of accuracy scores, the accuracy scores being first accuracy scores, including a respective accuracy score for each dataset-pipeline combination of the plurality of dataset-pipeline combinations (Shi, in para 0013, discloses a validation matrix of ML pipeline vs. dataset that represents the accuracy performance in each corresponding cell: "The performance matrix A represents the classification accuracy of each machine learning pipeline on each data set: the row vector of the performance matrix A represents the data set, and the column vector represents each machine learning pipeline"); and identifying, based on the refined pipeline latent factors and the refined dataset latent factors, at least one machine learning pipeline, of the plurality of machine learning pipelines, as most optimal for model building based on the new dataset (Shi, para 0004, further discloses identifying/recommending a ML pipeline for a new dataset based on latent features of the ML pipeline and dataset: "a novel bidirectional stacked autoencoder is utilized to simultaneously learn the latent feature representations of the dataset and the machine learning pipeline, which can effectively learn the synergy of the dataset and the machine learning pipeline, and can recommend a suitable machine learning pipeline for a new dataset. Finally, an efficient parallel selective ensemble system is embedded in a parallel automatic machine learning system based on bidirectional autoencoders, and the system is implemented to select the base pipeline from candidate machine learning pipelines based on running time, classification performance").

But Shi does not explicitly teach the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine learning model using a respective dataset and a respective pipeline of the dataset-pipeline combinations; factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors.

However, in the same field of endeavor of machine learning model training for data pipelines, Durvasula teaches the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine learning model using a respective dataset and a respective pipeline of the dataset-pipeline combination (Durvasula, Fig. 6E and para 0117, disclose generating and training a machine learning model data pipeline and further combining data pipelines (elements 644a and 644b of Fig. 6E): "The ML model may learn to organize data 642 into a data pipeline resembling the training examples. Block 644b is an examples of meta-machine learning, wherein machine learning techniques are used to build other machine learning models. …… learning models to train a machine learning model at block 644b to generate data pipelines, based on the data 642, that may be used to train additional machine learning models. The models trained at blocks 644 may be combined into a data pipeline pattern engine at block 646"; where Shi, in para 0013, discloses a validation matrix of ML pipeline vs. dataset).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of combining data pipelines of Durvasula into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi to produce the expected result of having the machine learning models trained with the relevant dataset. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the machine learning system by keeping model training and operation separated (Durvasula, para 0062).
But Shi and Durvasula do not explicitly teach factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors.

However, in the same field of endeavor of machine learning model evaluation, Heeseung teaches factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors (Heeseung, para 0104, discloses that a matrix can be factored or decomposed into its constituent latent factors: "the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements"; by applying this disclosed technique, the matrix of accuracy scores comprising ML pipelines and datasets taught by Shi can be decomposed into its latent factors); augmenting the matrix of accuracy scores, the augmenting comprising: augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores (Heeseung, para 0135, discloses augmenting a matrix to improve the accuracy/recommendation value: "the correlation matrix may be dimensionally augmented, such as by SVD, prior to matrix factorization. Once the augmented correlation matrix is factorized into a user matrix and a destination matrix, a recommendation model is trained based on them"); factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors (Heeseung, para 0104, discloses that any matrix can be factored or decomposed into its constituent latent factors: "the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements"; by applying this disclosed technique, the augmented matrix of accuracy scores taught by Shi can be decomposed into refined pipeline latent factors and refined dataset latent factors).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of augmenting and factoring the performance matrix of a machine learning model of Heeseung into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi and Durvasula to produce the expected result of reducing learning model errors. The modification would be obvious because one of ordinary skill in the art would be motivated to augment the performance matrix to reduce the root mean square error (RMSE) of the performance evaluation matrix (Heeseung, para 0102).
But Shi, Durvasula, and Heeseung do not explicitly teach selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines.

However, in the same field of endeavor of machine learning model evaluation, Nazir teaches selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines (Nazir, col 23:1~9, discloses selecting learning models and running test data to obtain model accuracies: "the best n performing models on the whole test dataset are selected. If an autotuner is used, the best performing models generated by the autotuner are selected. …. The best performing models may be selected based on criteria including, e.g., RMSE (root mean square error), MSE (mean square error), MAE (mean absolute error), accuracy, precision, recall, etc., between test data and values predicted by the model for the test data").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of selecting a set of learning models and running test data on them of Nazir into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, and Heeseung to produce the expected result of increasing the model accuracy. The modification would be obvious because one of ordinary skill in the art would be motivated to increase the model's accuracy by periodically retraining the model when new data becomes available (Nazir, col 22:26~28).

Regarding claim 11 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 10, and Shi further teaches wherein the factoring the augmented matrix of accuracy scores comprises using an objective function that includes a loss function and regularization penalty (Shi, para 0038, further discloses optimizing performance accuracy via a loss function and regularization: "The fusion loss function module adopts regularization of the parameter matrix and the total loss function composed of four loss functions, repeatedly trains and iterates to calculate and update the performance matrix, optimizes the pipeline recommendation unit in the automatic machine learning system, and obtains the machine learning pipeline to be recommended according to the performance matrix……").

Regarding claim 13 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 10, and Shi further teaches wherein the method further comprises building and outputting a machine learning model using a selected machine learning pipeline of the at least one machine learning pipeline (Shi, para 0004, further discloses outputting/recommending the most efficient ML pipeline for a new dataset based on latent features of the ML pipeline and dataset: "a novel bidirectional stacked autoencoder is utilized to simultaneously learn the latent feature representations of the dataset and the machine learning pipeline, which can effectively learn the synergy of the dataset and the machine learning pipeline, and can recommend a suitable machine learning pipeline for a new dataset……").
Regarding claim 14 (Original), Shi, Durvasula, Heeseung, and Nazir teach all the limitations of claim 13, and Shi further teaches wherein the method further comprises selecting the selected machine learning pipeline based on a time and compute budget (Shi, para 0004, discloses selecting the ML pipeline based on running-time performance: "the system is implemented to select the base pipeline from candidate machine learning pipelines based on running time, classification performance……").

Regarding Claim 16 (Currently Amended), Shi teaches A computer program product comprising: a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit to: performing cross-validation runs for a plurality of dataset-pipeline combinations combined from a plurality of datasets and a plurality of machine learning pipelines, and building from the cross-validation runs a matrix of accuracy scores, the accuracy scores being first accuracy scores, including a respective accuracy score for each dataset-pipeline combination of the plurality of dataset-pipeline combinations (Shi, in para 0013, discloses a validation matrix of ML pipeline vs. dataset that represents the accuracy performance in each corresponding cell: "The performance matrix A represents the classification accuracy of each machine learning pipeline on each data set: the row vector of the performance matrix A represents the data set, and the column vector represents each machine learning pipeline"); identifying, based on the refined pipeline latent factors and the refined dataset latent factors, at least one machine learning pipeline, of the plurality of machine learning pipelines, as most optimal for model building based on the new dataset (Shi, para 0004, further discloses identifying/recommending a ML pipeline for a new dataset based on latent features of the ML pipeline and dataset: "a novel bidirectional stacked autoencoder is utilized to simultaneously learn the latent feature representations of the dataset and the machine learning pipeline, which can effectively learn the synergy of the dataset and the machine learning pipeline, and can recommend a suitable machine learning pipeline for a new dataset. Finally, an efficient parallel selective ensemble system is embedded in a parallel automatic machine learning system based on bidirectional autoencoders, and the system is implemented to select the base pipeline from candidate machine learning pipelines based on running time, classification performance").

But Shi does not explicitly teach the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine learning model using a respective dataset and a respective pipeline of the dataset-pipeline combination; factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; and augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors.

However, in the same field of endeavor of machine learning model training for data pipelines, Durvasula teaches the cross-validation runs including, for each dataset-pipeline combination of the plurality of dataset-pipeline combinations, generating and training a respective machine
learning model using a respective dataset and a respective pipeline of the dataset-pipeline combination (Durvasula, Fig. 6E and para 0117 discloses generating and training a machine learning model data pipeline and further combining data pipelines (elements 644a and 644b of Fig. 6E) “The ML model may learn to organize data 642 into a data pipeline resembling the training examples. Block 644b is an examples of meta-machine learning, wherein machine learning techniques are used to build other machine learning models. …… learning models to train a machine learning model at block 644b to generate data pipelines, based on the data 642, that may be used to train additional machine learning models. The models trained at blocks 644 may be combined into a data pipeline pattern engine at block 646”; where Shi, in para 0013, discloses a validation matrix of ML pipelines versus datasets)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of combining data pipelines of Durvasula into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi to produce an expected result of having the machine learning models trained with a relevant dataset. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the machine learning system by keeping model training and operation separated (Durvasula, para 0062). 
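For illustration only, the claimed step of performing cross-validation runs over dataset-pipeline combinations and building a matrix of first accuracy scores can be sketched as follows. This is a minimal numpy sketch using synthetic data; the nearest-centroid classifier, the two preprocessing "pipelines", and all names are illustrative assumptions, not the applicant's or Shi's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    # Fit per-class centroids on the training fold, classify test points
    # by the nearest centroid, and return the accuracy.
    classes = np.unique(y_tr)
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[np.argmin(d, axis=1)]
    return float((pred == y_te).mean())

def cross_val_accuracy(X, y, pipeline, k=5):
    # k-fold cross-validation: average accuracy over held-out folds.
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        # Apply the (illustrative) preprocessing pipeline per fold.
        Xt, Xv = pipeline(X[tr]), pipeline(X[te])
        scores.append(nearest_centroid_accuracy(Xt, y[tr], Xv, y[te]))
    return float(np.mean(scores))

# Illustrative "pipelines": identity vs. standardization preprocessing.
pipelines = {"raw": lambda X: X,
             "standardized": lambda X: (X - X.mean(0)) / (X.std(0) + 1e-9)}

# Two synthetic two-class datasets of differing difficulty.
def make_dataset(shift):
    X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(shift, 1, (40, 3))])
    y = np.array([0] * 40 + [1] * 40)
    return X, y

datasets = {"easy": make_dataset(3.0), "hard": make_dataset(0.5)}

# The matrix A of first accuracy scores: rows = datasets, columns = pipelines,
# one accuracy score per dataset-pipeline combination.
A = np.array([[cross_val_accuracy(X, y, p) for p in pipelines.values()]
              for X, y in datasets.values()])
print(A.shape)  # (2, 2)
```

As in Shi's performance matrix, each cell of `A` holds the cross-validated accuracy of one pipeline on one dataset.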
But Shi and Durvasula do not explicitly teach factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors; augmenting the matrix of accuracy scores, the augmenting comprising: selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; and augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores; factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors; However, in the same field of endeavor of machine learning model evaluation, Heeseung teaches factoring the matrix of accuracy scores into pipeline latent factors and dataset latent factors (Heeseung, para 0104 discloses that a matrix can be factored or decomposed into its constituent latent factors “the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. 
In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements”; by applying this disclosed technique, the matrix of accuracy scores over ML pipelines and datasets taught by Shi can be decomposed into its latent factors); augmenting the matrix of accuracy scores, the augmenting comprising: and augmenting the matrix of accuracy scores with the second accuracy scores reflected for the new dataset to produce an augmented matrix of accuracy scores (Heeseung, para 0135 discloses augmenting a matrix for improving the accuracy/recommendation value “the correlation matrix may be dimensionally augmented, such as by SVD, prior to matrix factorization. Once the augmented correlation matrix is factorized into a user matrix and a destination matrix, a recommendation model is trained based on them”); factoring the augmented matrix of accuracy scores into refined pipeline latent factors and refined dataset latent factors (Heeseung, para 0104 discloses that any matrix can be factored or decomposed into its constituent latent factors “the user matrix and the tourist attraction matrix may each be produced by matrix factorization processing the correlation matrix. In the above user matrix, some elements (e.g., rows) represent tourists and other elements (e.g., columns) represent latent factors. 
In the above tourist destination matrix, some elements (e.g., columns) represent tourist destinations and other elements (e.g., rows) represent the latent elements”; by applying this disclosed technique, the augmented matrix of accuracy scores over ML pipelines and datasets taught by Shi can be decomposed into refined latent factors); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of augmenting and factoring the performance matrix of a machine learning model of Heeseung into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi and Durvasula to produce an expected result of reducing learning model errors. The modification would be obvious because one of ordinary skill in the art would be motivated to augment the performance matrix to reduce the root mean square error (RMSE) of the performance evaluation matrix (Heeseung, para 0102). 
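The claimed factoring of the accuracy matrix into dataset and pipeline latent factors, and the re-factoring after augmenting it with a new dataset's second accuracy scores, can be illustrated with a minimal numpy sketch. The use of truncated SVD here is an assumption for illustration; the claims do not specify a particular factorization method.

```python
import numpy as np

rng = np.random.default_rng(1)

def factor(A, k):
    # Factor an accuracy matrix A (datasets x pipelines) into
    # dataset latent factors D and pipeline latent factors P via
    # truncated SVD, so that A is approximated by D @ P.T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    D = U[:, :k] * np.sqrt(s[:k])   # dataset latent factors
    P = Vt[:k].T * np.sqrt(s[:k])   # pipeline latent factors
    return D, P

# A: first accuracy scores for 6 datasets x 4 pipelines
# (synthetic, exactly rank 2, scaled into [0, 1]).
A = rng.uniform(0.5, 1.0, (6, 2)) @ rng.uniform(0, 1, (2, 4)) / 2
D, P = factor(A, k=2)

# Augment: append second accuracy scores obtained by running pipelines
# on a new dataset, then re-factor for refined latent factors.
new_row = rng.uniform(0.4, 0.9, (1, 4))
A_aug = np.vstack([A, new_row])
D_ref, P_ref = factor(A_aug, k=2)
print(D_ref.shape, P_ref.shape)  # (7, 2) (4, 2)
```

The refined dataset latent factors `D_ref` then include a row for the new dataset, which is the basis for identifying a suitable pipeline for it.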
But Shi, Durvasula and Heeseung don’t explicitly teach selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines; However, in the same field of endeavor of machine learning model evaluation, Nazir teaches selecting a subset of machine learning pipelines of the plurality of machine learning pipelines based on the factoring the matrix of accuracy scores; for a new dataset, running the subset of machine learning pipelines with the new dataset to build and test respective machine learning models, and obtaining second accuracy scores for combinations of the new dataset and the subset of machine learning pipelines (Nazir, col 23:1~9 discloses selecting learning models and running test data to obtain model accuracies “the best n performing models on the whole test dataset are selected. If an autotuner is used, the best performing models generated by the autotuner are selected. …. The best performing models may be selected based on criteria including, e.g., RMSE (root mean square error), MSE (mean square error), MAE (mean absolute error), accuracy, precision, recall, etc., between test data and values predicted by the model for the test data”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of selecting a set of learning models and running test data on them of Nazir into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula and Heeseung to produce an expected result of increasing the model accuracy. 
The modification would be obvious because one of ordinary skill in the art would be motivated to increase the model’s accuracy by periodically retraining the model when new data becomes available (Nazir, col 22:26~28). Regarding claim 17 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 16 and Shi further teaches wherein the factoring the augmented matrix of accuracy scores comprises using an objective function that includes a loss function and regularization penalty (Shi, para 0038 further discloses optimizing performance accuracy with a loss function and regularization “The fusion loss function module adopts regularization of the parameter matrix and the total loss function composed of four loss functions, repeatedly trains and iterates to calculate and update the performance matrix, optimizes the pipeline recommendation unit in the automatic machine learning system, and obtains the machine learning pipeline to be recommended according to the performance matrix……”). Regarding claim 19 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 16 and Shi further teaches wherein the method further comprises building and outputting a machine learning model using a selected machine learning pipeline of the at least one machine learning pipeline (Shi, para 0004 further discloses outputting/recommending the most efficient ML pipeline for a new dataset based on latent features of the ML pipeline and dataset “a novel bidirectional stacked autoencoder is utilized to simultaneously learn the latent feature representations of the dataset and the machine learning pipeline, which can effectively learn the synergy of the dataset and the machine learning pipeline, and can recommend a suitable machine learning pipeline for a new dataset……”). 
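The objective function recited in claim 17 — a loss function plus a regularization penalty used when factoring the accuracy matrix — can be sketched as follows. This is an illustrative sketch only: the squared-error loss, L2 penalty, and plain gradient descent are assumptions, not Shi's four-loss fusion module.

```python
import numpy as np

rng = np.random.default_rng(2)

# A: augmented matrix of accuracy scores (datasets x pipelines), synthetic.
A = rng.uniform(0, 1, (5, 4))
k, lam, lr = 2, 0.01, 0.05
D = rng.normal(0, 0.1, (5, k))   # dataset latent factors
P = rng.normal(0, 0.1, (4, k))   # pipeline latent factors

def objective(A, D, P, lam):
    # Squared-error loss plus an L2 regularization penalty on both factors.
    loss = np.sum((A - D @ P.T) ** 2)
    penalty = lam * (np.sum(D ** 2) + np.sum(P ** 2))
    return loss + penalty

before = objective(A, D, P, lam)
for _ in range(200):
    E = A - D @ P.T               # residual of the factorization
    D += lr * (E @ P - lam * D)   # gradient step on dataset factors
    P += lr * (E.T @ D - lam * P) # gradient step on pipeline factors
after = objective(A, D, P, lam)
print(after < before)  # True: the objective decreases over the iterations
```

The regularization penalty keeps the latent factors small, which is the standard role of such a term in matrix-factorization objectives.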
Regarding claim 20 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 19 and Shi further teaches wherein the method further comprises selecting the selected machine learning pipeline based on a time and compute budget (Shi, para 0004 discloses selecting the ML pipeline based on running-time performance “the system is implemented to select the base pipeline from candidate machine learning pipelines based on running time, classification performance……”). Claims 4, 6, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shi, Hai-bo et al (Chinese Patent document No. CN 116128065 A), hereafter referred to as “Shi”, in view of Durvasula, Vsm et al (PGPUB Document No. 20230117893), hereafter referred to as “Durvasula”, in view of Heeseung, Choi et al (Korean Patent Document No. KR 20230138605), hereafter referred to as “Heeseung”, in view of Nazir, Mubbashir et al (US Patent No. 11798090), hereafter referred to as “Nazir”, in further view of Franke, Vedran et al (PGPUB Document No. 20240087677), hereafter referred to as “Franke”. Regarding claim 4 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 3 and Shi further teaches wherein the regularization penalty comprises a similarity term, the similarity term being a function of (i) dataset similarity between datasets (Shi, para 0115 discloses measurement of similarity among datasets “represents the meta-features of the new dataset, represents the performance vector of the dataset (first, by comparing the similarity between the meta-features with the 550 training datasets, the first 50 neighbor datasets of the data are calculated”). But Shi, Durvasula, Heeseung and Nazir don’t explicitly teach and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance. 
However, in the same field of endeavor of machine learning and latent feature analysis, Franke teaches and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance (Franke, para 0046 discloses measuring similarity distances between latent factors (a closer distance indicates higher similarity and vice versa) “comparing each latent factor in each of the latent spaces with each latent factor from each of the other latent spaces using a similarity or distance metric, wherein a latent factor from a latent space is said to be present in another latent space if the similarity metric is above a defined threshold or the distance metric below a defined threshold, for some latent factor in that other latent space”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of measuring similarity distance between latent factors of Franke into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung and Nazir to produce an expected result of improving the prediction model. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the label prediction of a learning model by repeated determination of latent spaces using a plurality of neural networks (Franke, para 0020). 
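The claimed similarity term — a regularization penalty coupling dataset similarity to the Euclidean distance between dataset latent factors, so that greater similarity pushes the factors closer together — can be sketched as a minimal numpy illustration. The cosine-based similarity over meta-features and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def dataset_similarity(M):
    # Illustrative similarity between dataset meta-feature vectors:
    # cosine similarity mapped into [0, 1].
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    return (Mn @ Mn.T + 1) / 2

def similarity_penalty(D, S):
    # Sum over dataset pairs of similarity-weighted squared Euclidean
    # distances between dataset latent factors: the more similar two
    # datasets are, the more heavily distant latent factors are penalized,
    # so minimizing this term drives similar datasets toward nearby factors.
    n = len(D)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += S[i, j] * np.sum((D[i] - D[j]) ** 2)
    return total

M = rng.normal(size=(4, 6))   # meta-features for 4 datasets
S = dataset_similarity(M)
D = rng.normal(size=(4, 2))   # refined dataset latent factors
print(similarity_penalty(D, S) >= 0)  # True: the penalty is non-negative
```

Added to the factorization objective, this term realizes the claimed relationship: higher similarity yields a lower latent-factor distance at the optimum.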
Regarding claim 6 (Original), Shi, Durvasula, Heeseung, Nazir and Franke teach all the limitations of claim 4 and Franke further teaches wherein the distance between latent factors comprises a Euclidean distance (Franke, para 0041 discloses that the distance between latent factors can be a Euclidean distance “latent spaces and selecting the latent factors which are present in at least a defined number N (N=integer number) of latent spaces within a given similarity range; Similarity can be for example, an Euclidean distance, cosine similarity, or any distance measure ……”). Regarding claim 12 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 11 and Shi further teaches wherein the regularization penalty comprises a similarity term, the similarity term being a function of (i) dataset similarity between datasets (Shi, para 0115 discloses measurement of similarity among datasets “represents the meta-features of the new dataset, represents the performance vector of the dataset (first, by comparing the similarity between the meta-features with the 550 training datasets, the first 50 neighbor datasets of the data are calculated”). But Shi, Durvasula, Heeseung and Nazir don’t explicitly teach and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance. 
However, in the same field of endeavor of machine learning and latent feature analysis, Franke teaches and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance (Franke, para 0046 discloses measuring similarity distances between latent factors (a closer distance indicates higher similarity and vice versa) “comparing each latent factor in each of the latent spaces with each latent factor from each of the other latent spaces using a similarity or distance metric, wherein a latent factor from a latent space is said to be present in another latent space if the similarity metric is above a defined threshold or the distance metric below a defined threshold, for some latent factor in that other latent space”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of measuring similarity distance between latent factors of Franke into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung and Nazir to produce an expected result of improving the prediction model. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the label prediction of a learning model by repeated determination of latent spaces using a plurality of neural networks (Franke, para 0020). 
Regarding claim 18 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 17 and Shi further teaches wherein the regularization penalty comprises a similarity term, the similarity term being a function of (i) dataset similarity between datasets (Shi, para 0115 discloses measurement of similarity among datasets “represents the meta-features of the new dataset, represents the performance vector of the dataset (first, by comparing the similarity between the meta-features with the 550 training datasets, the first 50 neighbor datasets of the data are calculated”). But Shi, Durvasula, Heeseung and Nazir don’t explicitly teach and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance. However, in the same field of endeavor of machine learning and latent feature analysis, Franke teaches and (ii) distance between latent factors of the refined dataset latent factors, wherein greater similarity results in a lower latent factor distance and lesser similarity results in a higher latent factor distance (Franke, para 0046 discloses measuring similarity distances between latent factors (a closer distance indicates higher similarity and vice versa) “comparing each latent factor in each of the latent spaces with each latent factor from each of the other latent spaces using a similarity or distance metric, wherein a latent factor from a latent space is said to be present in another latent space if the similarity metric is above a defined threshold or the distance metric below a defined threshold, for some latent factor in that other latent space”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of measuring similarity distance between latent factors of Franke into the cross-validation matrix from 
machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung and Nazir to produce an expected result of improving the prediction model. The modification would be obvious because one of ordinary skill in the art would be motivated to improve the label prediction of a learning model by repeated determination of latent spaces using a plurality of neural networks (Franke, para 0020). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Shi, Hai-bo et al (Chinese Patent document No. CN 116128065 A), hereafter referred to as “Shi”, in view of Durvasula, Vsm et al (PGPUB Document No. 20230117893), hereafter referred to as “Durvasula”, in view of Heeseung, Choi et al (Korean Patent Document No. KR 20230138605), hereafter referred to as “Heeseung”, in view of Nazir, Mubbashir et al (US Patent No. 11798090), hereafter referred to as “Nazir”, in view of Franke, Vedran et al (PGPUB Document No. 20240087677), hereafter referred to as “Franke”, in further view of Verdejo, Dominique et al (PGPUB Document No. 20180150750), hereafter referred to as “Verdejo”. Regarding claim 5 (Original), Shi, Durvasula, Heeseung, Nazir and Franke teach all the limitations of claim 4 but don’t explicitly teach wherein the dataset similarity is determined using canonical correlation analysis. 
However, in the same field of endeavor of similarity determination, Verdejo teaches wherein the dataset similarity is determined using canonical correlation analysis (Verdejo, para 0082 discloses measuring similarity using the canonical correlation technique “analytics system 205 may use the canonical correlation technique using semantically similar terms that identify similar objects shown in images associated with the current event and the historical event”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of measuring similarity by the canonical correlation technique of Verdejo into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung, Nazir and Franke to produce an expected result of finding contextual similarity. The modification would be obvious because one of ordinary skill in the art would be motivated to improve similarity identification by using canonical correlation (Verdejo, para 0082). Claims 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Shi, Hai-bo et al (Chinese Patent document No. CN 116128065 A), hereafter referred to as “Shi”, in view of Durvasula, Vsm et al (PGPUB Document No. 20230117893), hereafter referred to as “Durvasula”, in view of Heeseung, Choi et al (Korean Patent Document No. KR 20230138605), hereafter referred to as “Heeseung”, in view of Nazir, Mubbashir et al (US Patent No. 11798090), hereafter referred to as “Nazir”, in further view of Gao, Lei et al (PGPUB Document No. 20240184567), hereafter referred to as “Gao”. 
Regarding claim 9 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 1 and Shi further teaches wherein the identifying the at least one machine learning pipeline comprises validating results for the at least one machine learning pipeline, the validating the results comprising at least one selected from the group consisting of (Shi, para 0055 discloses selection of a candidate ML pipeline “an efficient parallel selective ensemble system is embedded, which enables the system to select the base pipeline from the candidate machine learning pipelines according to the running time, classification performance and diversity of the validation set, which enhances the stability of the system for most data sets”): Nazir teaches and clustering the plurality of datasets and new dataset into dataset clusters and verifying similarity of clustered datasets (Nazir, col 4:67~63 discloses clustering datasets “The data clustering system 180 is adapted to perform cluster analysis on the input variables of the historical dataset to determine how the data is clustered. After the historical dataset has been clustered, separate models may be trained for each cluster according to the methods disclosed herein”). 
But Shi, Durvasula, Heeseung and Nazir don’t explicitly teach clustering the plurality of machine learning pipelines into pipeline clusters and verifying similarity of clustered pipelines; However, in the same field of endeavor of clustering ML pipelines, Gao teaches clustering the plurality of machine learning pipelines into pipeline clusters and verifying similarity of clustered pipelines (Gao, para 0011 discloses clustering of ML pipelines and verification of computed similarity scores “automatically extract a series of key features from detected versions of the target machine learning pipeline, automatically cluster the detected versions of the target machine learning model pipeline by the extracted series of key features, automatically identify a highest-quality version within each of a series of generated clusters, and automatically compute similarity scores for subsets of the detected versions within each of the series of generated cluster”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of clustering ML pipelines of Gao into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung and Nazir to produce an expected result of improving cluster quality. The modification would be obvious because one of ordinary skill in the art would be motivated to save time during machine learning pipeline development by utilizing automated saving of versions based on measurable quality improvement (Gao, para 0011). 
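The clustering of machine learning pipelines by extracted key features, as in Gao, can be sketched with a minimal k-means loop in numpy. The feature vectors, the choice of k-means, and all names below are illustrative assumptions rather than Gao's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=20):
    # Minimal k-means: assign each point to its nearest centroid,
    # then recompute each centroid as the mean of its members.
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids

# Hypothetical key-feature vectors extracted from 8 pipeline versions,
# drawn from two well-separated groups.
features = np.vstack([rng.normal(0, 0.3, (4, 5)), rng.normal(3, 0.3, (4, 5))])
labels, centroids = kmeans(features, k=2)

# Verify similarity within each cluster: distance of each member
# to its own cluster centroid.
within = np.array([np.linalg.norm(features[i] - centroids[labels[i]])
                   for i in range(len(features))])
print(labels)
```

The within-cluster distances (or pairwise similarity scores, as in Gao) then serve to verify that clustered pipelines are in fact similar.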
Regarding claim 15 (Original), Shi, Durvasula, Heeseung and Nazir teach all the limitations of claim 10 and Shi further teaches wherein the identifying the at least one machine learning pipeline comprises validating results for the at least one machine learning pipeline, the validating the results comprising at least one selected from the group consisting of (Shi, para 0055 discloses selection of a candidate ML pipeline “an efficient parallel selective ensemble system is embedded, which enables the system to select the base pipeline from the candidate machine learning pipelines according to the running time, classification performance and diversity of the validation set, which enhances the stability of the system for most data sets”): Nazir teaches and clustering the plurality of datasets and new dataset into dataset clusters and verifying similarity of clustered datasets (Nazir, col 4:67~63 discloses clustering datasets “The data clustering system 180 is adapted to perform cluster analysis on the input variables of the historical dataset to determine how the data is clustered. After the historical dataset has been clustered, separate models may be trained for each cluster according to the methods disclosed herein”). 
But Shi, Durvasula, Heeseung and Nazir don’t explicitly teach clustering the plurality of machine learning pipelines into pipeline clusters and verifying similarity of clustered pipelines; However, in the same field of endeavor of clustering ML pipelines, Gao teaches clustering the plurality of machine learning pipelines into pipeline clusters and verifying similarity of clustered pipelines (Gao, para 0011 discloses clustering of ML pipelines and verification of computed similarity scores “automatically extract a series of key features from detected versions of the target machine learning pipeline, automatically cluster the detected versions of the target machine learning model pipeline by the extracted series of key features, automatically identify a highest-quality version within each of a series of generated clusters, and automatically compute similarity scores for subsets of the detected versions within each of the series of generated cluster”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of clustering ML pipelines of Gao into the cross-validation matrix from machine learning pipelines with respect to different datasets of Shi, Durvasula, Heeseung and Nazir to produce an expected result of improving cluster quality. The modification would be obvious because one of ordinary skill in the art would be motivated to save time during machine learning pipeline development by utilizing automated saving of versions based on measurable quality improvement (Gao, para 0011). Response to Arguments I. 35 U.S.C §101 The 35 U.S.C §101 abstract-idea rejection of claims 1-20 has been withdrawn in light of the claim amendments and consideration of applicant's arguments. II. 
35 U.S.C §103 Applicant’s arguments filed on 10/17/2025 have been fully considered but are moot because independent claims 1, 10 and 16 have been amended with newly added features, to which applicant’s arguments are directed. Since the claims have been amended with new features, a new ground of rejection is presented. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDULLAH A DAUD whose telephone number is (469)295-9283. The examiner can normally be reached M~F: 9:30 am~6:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at 571-270-1698. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDULLAH A DAUD/Examiner, Art Unit 2164 /AMY NG/Supervisory Patent Examiner, Art Unit 2164

Prosecution Timeline

Jun 26, 2023
Application Filed
Jul 12, 2025
Non-Final Rejection — §103
Oct 17, 2025
Response Filed
Oct 17, 2025
Applicant Interview (Telephonic)
Oct 17, 2025
Examiner Interview Summary
Feb 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602292
TENANT COPY USING INCREMENTAL DATABASE RECOVERY
2y 5m to grant Granted Apr 14, 2026
Patent 12566809
GRAPH LEARNING AND AUTOMATED BEHAVIOR COORDINATION PLATFORM
2y 5m to grant Granted Mar 03, 2026
Patent 12487887
FILESET PARTITIONING FOR DATA STORAGE AND MANAGEMENT
2y 5m to grant Granted Dec 02, 2025
Patent 12299037
GRAPH-BASED FEATURE ENGINEERING FOR MACHINE LEARNING MODELS
2y 5m to grant Granted May 13, 2025
Patent 12293262
ADAPTIVE MACHINE LEARNING TRAINING VIA IN-FLIGHT FEATURE MODIFICATION
2y 5m to grant Granted May 06, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
88%
With Interview (+33.6%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 167 resolved cases by this examiner. Grant probability derived from career allow rate.
