Prosecution Insights
Last updated: April 19, 2026
Application No. 18/324,115

EXPLORATORY OFFLINE GENERATIVE ONLINE MACHINE LEARNING

Non-Final OA: §101, §103
Filed: May 25, 2023
Examiner: MAHARAJ, DEVIKA S
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)

Grant Probability: 55% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 5y 0m
Grant Probability With Interview: 63%

Examiner Intelligence

Career Allow Rate: 55% (43 granted / 78 resolved; at TC average)
Interview Lift: +7.7% (moderate; 55% without interview vs. 63% with, among resolved cases with an interview)
Avg Prosecution: 5y 0m (typical timeline)
Currently Pending: 28 applications
Total Applications: 106 (career history, across all art units)

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 78 resolved cases.
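As a sanity check, the panel figures above are mutually consistent and can be recomputed from the transcribed values. A short sketch (the 0.551/0.628 precise rates are back-solved from the rounded 55%/63% panel figures, so they are assumptions, not source data):

```python
# Recompute the examiner-panel figures from the counts shown above.

granted, resolved = 43, 78
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 55.1%, displayed as 55%

# Interview lift: displayed 55% without vs 63% with, lift +7.7%.
# Precise rates consistent with those rounded figures (assumed):
without, with_interview = 0.551, 0.628
print(f"Interview lift: {with_interview - without:+.1%}")  # +7.7%

# Each statute's delta vs the Tech Center average implies the same
# TC average estimate of about 40%.
rates  = {"§101": 0.274, "§103": 0.428, "§102": 0.101, "§112": 0.166}
deltas = {"§101": -0.126, "§103": 0.028, "§102": -0.299, "§112": -0.234}
for statute, rate in rates.items():
    print(f"{statute}: implied TC average {rate - deltas[statute]:.1%}")
```

All four statute deltas back out to the same ~40% Tech Center average, which is consistent with the single "average estimate" line the chart displayed.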

Office Action

Rejections: §101, §103
DETAILED ACTION

1. This communication is in response to Application No. 18/324,115, filed on May 25, 2023, in which Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statement submitted on 05/25/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: Claim 1 is a method-type claim. Therefore, Claims 1-14 are directed to either a process, machine, manufacture, or composition of matter.

Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.
predicting […] performance of a plurality of candidate ML pipelines for performing the tasks on the candidate tabular dataset (mental process – other than reciting “using the prediction model”, predicting performance of a plurality of candidate ML pipelines for performing tasks on a candidate tabular dataset may be performed manually by a user observing/analyzing the plurality of candidate ML pipelines and the candidate tabular dataset and accordingly using judgement/evaluation to cast a prediction regarding performance of the plurality of candidate ML pipelines based on said analysis. For example, the user may observe/analyze features of the plurality of pipelines (such as accuracy, mean absolute error, etc., as supported by Applicant’s specification, Par. [0049]) and accordingly cast a prediction on the performance of each ML pipeline)

selecting a threshold number of top-performing candidates of the plurality of candidate ML pipelines as predicted by the prediction model for training to perform the task (mental process – selecting a threshold number of top-performing candidates may be performed manually by a user observing/analyzing the predicted performance of each ML pipeline and accordingly using judgement/evaluation to rank the ML pipelines based on performance and then selecting a threshold number (such as the top three performing models, based on observed accuracy) of top-performing candidates)

identifying a top-performing ML pipeline based on performance of the trained top-performing candidates (mental process – identifying a top-performing ML pipeline may be performed manually by a user observing/analyzing the predicted performance of each of the selected top-performing candidates (for example, three top-performing candidates) and accordingly using judgement/evaluation to identify the top-performing ML pipeline (for example, the pipeline with the highest accuracy/lowest mean absolute error, etc.) based on said analysis)

Step 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

obtaining a set of preliminary tabular datasets and tasks to be performed by preliminary machine-learning (ML) pipelines (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

training a prediction model that predicts performance of ML pipelines in performing the tasks using the preliminary ML pipelines, the preliminary ML pipelines synthesized as different approaches for performing the tasks (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model without significantly more. Furthermore, the preliminary ML pipelines are merely stated as being “synthesized as different approaches for performing the tasks” without significantly more)

obtaining a candidate tabular dataset (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

[…] using the prediction model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of using a trained machine learning model/prediction model without significantly more)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:

obtaining a set of preliminary tabular datasets and tasks to be performed by preliminary machine-learning (ML) pipelines (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

training a prediction model that predicts performance of ML pipelines in performing the tasks using the preliminary ML pipelines, the preliminary ML pipelines synthesized as different approaches for performing the tasks (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model without significantly more. Furthermore, the preliminary ML pipelines are merely stated as being “synthesized as different approaches for performing the tasks” without significantly more. This cannot provide an inventive concept)

obtaining a candidate tabular dataset (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

[…] using the prediction model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of using a trained machine learning model/prediction model without significantly more. This cannot provide an inventive concept)

For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-14. The additional limitations of the dependent claims are addressed below.

Regarding Claim 2:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 2 depends.

splitting the set of preliminary tabular datasets into a training subset and a validation subset (mental process – splitting the set of preliminary tabular datasets may be performed manually by a user observing/analyzing the preliminary tabular datasets and accordingly using judgement/evaluation to split the dataset into a training subset and a validation subset, with the aid of pen and paper)

recording the performance of the preliminary ML pipelines (mental process – recording the performance of the preliminary ML pipelines may be performed manually by a user observing/analyzing the performance of the preliminary ML pipelines and accordingly using judgement/evaluation to record said performance of the preliminary ML pipelines, with the aid of pen and paper)

Step 2A Prong 2 & Step 2B:

training each of the preliminary ML pipelines with the training subset (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)

confirming performance of the preliminary ML pipelines with the validation subset (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of confirming/validating performance of the preliminary ML pipelines with a validation subset without significantly more. This cannot provide an inventive concept)

training the prediction model using the performance of the preliminary ML pipelines and dataset meta-features of the set of preliminary tabular datasets and pipeline meta-features of the preliminary ML pipelines (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 3:

Step 2A Prong 1: See the rejection of Claim 2 above, on which Claim 3 depends.
extracting the dataset meta-features from the set of preliminary tabular datasets, the dataset meta-features including characteristics of a given tabular dataset of the preliminary tabular datasets (mental process – extracting dataset meta-features from the set of preliminary tabular datasets may be performed manually by a user observing/analyzing the tabular datasets and accordingly using judgement/evaluation to extract meta-features such as characteristics of the tabular datasets (i.e., number of rows, presence of missing values, presence of a number, etc.))

extracting the pipeline meta-features from the preliminary ML pipelines, the pipeline meta-features including characteristics of a given ML pipeline of the preliminary ML pipelines (mental process – extracting pipeline meta-features from the set of preliminary ML pipelines may be performed manually by a user observing/analyzing the set of preliminary ML pipelines and accordingly using judgement/evaluation to extract pipeline meta-features, such as characteristics of the ML pipelines (i.e., presence of preprocessing components, number of ML models included in the pipeline, etc.))

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 4:

Step 2A Prong 1: See the rejection of Claim 3 above, on which Claim 4 depends.
Step 2A Prong 2 & Step 2B:

wherein the characteristics of the given tabular dataset of the preliminary tabular datasets include one or more of a number of rows, a number of features, a presence of a number, a presence of missing values, a presence of a number category, a presence of a string category, a presence of text, a median, a mean, a mode, a distribution, a maximum value, a minimum value, and a label for categories of information (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying the characteristics of the tabular dataset does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 5:

Step 2A Prong 1: See the rejection of Claim 3 above, on which Claim 5 depends.
Step 2A Prong 2 & Step 2B:

wherein the characteristics of the given ML pipeline of the preliminary ML pipelines include a set of preprocessing components present in the given ML pipeline, one or more ML models included in the preliminary ML pipelines, or both (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying the characteristics of an ML pipeline does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 6:

Step 2A Prong 1: See the rejection of Claim 2 above, on which Claim 6 depends.

Step 2A Prong 2 & Step 2B:

wherein the recorded performance of each of the preliminary ML pipelines comprise one or more scores and an execution time (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the recorded performance comprises one or more scores and an execution time does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
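For orientation only: the flow recited in Claims 1-6 (extract dataset and pipeline meta-features, score candidates with a prediction model, keep the top-k for actual training) can be sketched as below. All names, the meta-features chosen, and the stand-in predictor are hypothetical illustrations, not the Applicant's implementation.

```python
# Hypothetical sketch of the Claim 1-6 flow: extract dataset and pipeline
# meta-features, score each candidate pipeline with a prediction model, and
# keep the top-k candidates for actual training. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class Pipeline:
    name: str
    meta: list[float]  # pipeline meta-features, e.g. preprocessing-component flags


def dataset_meta(rows: list[list[float]]) -> list[float]:
    """Claim-4-style dataset meta-features: row count, feature count, mean."""
    n_rows = len(rows)
    n_features = len(rows[0]) if rows else 0
    values = [v for row in rows for v in row]
    mean = sum(values) / len(values) if values else 0.0
    return [float(n_rows), float(n_features), mean]


def select_top_k(pipelines, rows, predict, k=3):
    """Rank candidates by predicted performance and keep the top k (Claim 1)."""
    d_meta = dataset_meta(rows)
    scored = [(predict(d_meta + p.meta), p) for p in pipelines]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:k]]
```

In a real system, `predict` would be a model trained on the recorded (meta-features, performance) pairs from the preliminary pipelines, per Claim 2; any regressor stands in here.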
Regarding Claim 7:

Step 2A Prong 1: See the rejection of Claim 3 above, on which Claim 7 depends.

Step 2A Prong 2 & Step 2B:

wherein inputs to the prediction model include the dataset meta-features and the pipeline meta-features (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying the inputs to the prediction model does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 8:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 8 depends.
extracting second dataset meta-features from the candidate tabular dataset (mental process – extracting second dataset meta-features from the candidate tabular dataset may be performed manually by a user observing/analyzing the tabular datasets and accordingly using judgement/evaluation to extract meta-features such as characteristics of the tabular datasets (i.e., number of rows, presence of missing values, presence of a number, etc.))

extracting second pipeline meta-features from each of the plurality of candidate ML pipelines (mental process – extracting second pipeline meta-features from the plurality of candidate ML pipelines may be performed manually by a user observing/analyzing the plurality of candidate ML pipelines and accordingly using judgement/evaluation to extract pipeline meta-features, such as characteristics of the ML pipelines (i.e., presence of preprocessing components, number of ML models included in the pipeline, etc.))

combining the second pipeline meta-features and the second dataset meta-features (mental process – combining the second pipeline meta-features and the second dataset meta-features may be performed manually by a user observing/analyzing the second pipeline meta-features and the second dataset meta-features and using judgement/evaluation to combine/concatenate the set of meta-features, with the aid of pen and paper)

Step 2A Prong 2 & Step 2B:

obtaining the plurality of candidate ML pipelines (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 9:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 9 depends.

Step 2A Prong 2 & Step 2B:

wherein the top-performing candidates include the threshold number of the plurality of candidate ML pipelines selected based on an execution time of each of the plurality of candidate ML pipelines (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the top-performing candidates include a threshold number selected based on execution time does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 10:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 10 depends.
Step 2A Prong 2 & Step 2B:

wherein the top-performing candidates include the threshold number of the plurality of candidate ML pipelines selected based on an execution time and performance score of each of the plurality of candidate ML pipelines (Field of use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the top-performing candidates include a threshold number selected based on execution time and performance score does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 11:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 11 depends.

Step 2A Prong 2 & Step 2B:

wherein the prediction model is configured to adapt to variable sizes of the set of preliminary tabular datasets and the candidate tabular dataset (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of applying an already configured/trained machine learning model without significantly more)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.
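For orientation only: the selection criteria of Claims 9-10 (a threshold number of candidates chosen on predicted score and execution time) and the failure filter of Claim 14 (dropping candidates whose predicted failure probability exceeds a threshold) can be combined in one hypothetical sketch; field names and threshold values are illustrative, not from the application.

```python
# Hypothetical sketch of the Claim 9/10 selection criteria combined with the
# Claim 14 failure filter. Field names and defaults are illustrative only.

def filter_and_select(candidates, k=3, max_fail_prob=0.5):
    """candidates: dicts with 'score', 'exec_time', and 'fail_prob' keys."""
    # Claim 14: drop candidates predicted likely to fail or time out.
    viable = [c for c in candidates if c["fail_prob"] <= max_fail_prob]
    # Claims 9/10: higher predicted score first; among ties, faster first.
    viable.sort(key=lambda c: (-c["score"], c["exec_time"]))
    return viable[:k]
```

Only the surviving top-k candidates would then be trained for real, with the best of them identified as the top-performing pipeline per Claim 1.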
Regarding Claim 12:

Step 2A Prong 1: See the rejection of Claim 11 above, on which Claim 12 depends.

generating dataset-level meta-features of the preliminary tabular datasets (mental process – generating dataset-level meta-features of the preliminary tabular datasets may be performed manually by a user observing/analyzing the tabular datasets and accordingly using judgement/evaluation to generate dataset-level meta-features such as characteristics of the tabular datasets (i.e., number of rows, presence of missing values, presence of a number, etc.))

generating column-level meta-features of the preliminary tabular datasets (mental process – generating column-level meta-features of the preliminary tabular datasets may be performed manually by a user observing/analyzing the tabular datasets and accordingly using judgement/evaluation to generate column-level meta-features such as characteristics of the columns of the tabular datasets (i.e., number of columns, presence of missing values, presence of a number, etc.))

combining the dataset-level meta-features and the column-level meta-features of the preliminary tabular datasets (mental process – combining the dataset-level meta-features and the column-level meta-features may be performed manually by a user observing/analyzing the meta-features and using judgement/evaluation to combine/concatenate the set of meta-features, with the aid of pen and paper)

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 13:

Step 2A Prong 1: See the rejection of Claim 8 above, on which Claim 13 depends.
removing one or more of the options without related second dataset meta-features from the list (mental process – removing one or more of the options without related second dataset meta-features from the list may be performed manually by a user observing/analyzing the second dataset meta-features and accordingly using judgement/evaluation to remove one or more options without related second dataset meta-features from the list, with the aid of pen and paper)

Step 2A Prong 2 & Step 2B:

obtaining a list of options for preprocessing components (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

generating the plurality of candidate ML pipelines based on the list of options for preprocessing components (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of generating machine learning pipelines with previously determined data without significantly more. This cannot provide an inventive concept)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 14:

Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 14 depends.
predicting […] probability of failure of the plurality of candidate ML pipelines in performing the tasks on the candidate tabular dataset (mental process – predicting probability of failure of the plurality of candidate ML pipelines in performing the tasks on the candidate tabular dataset may be performed manually by a user observing/analyzing the candidate ML pipelines and any associated performance metrics and accordingly using judgement/evaluation to cast a prediction on the probability of a failure/timeout based on said analysis)

removing a number of the plurality of candidate ML pipelines with the probability of failure above a failure probability threshold (mental process – removing a number of the plurality of candidate ML pipelines with the probability of failure above a threshold may be performed manually by a user observing/analyzing the plurality of candidate ML pipelines and their associated failure probabilities and accordingly using judgement/evaluation to remove a number of the plurality of candidate ML pipelines (with the aid of pen and paper – candidate ML models may be removed from a list of options) with failure probabilities above a predetermined threshold)

Step 2A Prong 2 & Step 2B:

training a failure model that predicts probability of ML pipelines of failing to perform the tasks using the preliminary ML pipelines (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)

[…] using the failure model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of applying a machine learning model with previously determined data without significantly more. This cannot provide an inventive concept)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Independent Claim 15 recites substantially the same limitations as Claim 1, in the form of a system, including generic computer components. The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale. For the reasons above, Claim 15 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 16-20. The additional limitations of the dependent claims are addressed below.

Claim 16 recites substantially the same limitations as Claim 2, in the form of a system, including generic computer components. The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale.

Claim 17 recites substantially the same limitations as Claim 3, in the form of a system, including generic computer components. The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale.

Claim 18 recites substantially the same limitations as Claim 4, in the form of a system, including generic computer components.
The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale.

Claim 19 recites substantially the same limitations as Claim 5, in the form of a system, including generic computer components. The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale.

Claim 20 recites substantially the same limitations as Claim 7, in the form of a system, including generic computer components. The claim is likewise directed to performing mental processes without significantly more, and is therefore rejected under the same rationale.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1-12 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (hereinafter Chen) (US PG-PUB 20220036246), in view of Cai et al. (hereinafter Cai) (“ARM-Net: Adaptive Relation Modeling Network for Structured Data”).

Regarding Claim 1, Chen teaches a method comprising:

obtaining a set of preliminary tabular datasets (While Chen briefly discloses the use of tabular data in Par. [0070] and primarily focuses on evaluating time series data, which is also structured and may be stored chronologically in tabular form, Chen does not explicitly disclose obtaining a set of preliminary tabular datasets (plural) – see the introduction of the Cai reference below for an explicit teaching of obtaining a set of preliminary tabular datasets) and tasks to be performed by preliminary machine-learning (ML) pipelines (Chen, Par. [0040], “For example, the one or more input devices 106 can be employed to enter into the system 100, for instance, but not limited to: time series data, machine learning pipelines, knowledge databases, machine learning tasks […] For instance, the one or more input devices 106 can be employed to describe a machine learning task to be completed by the automated machine learning process executed by the time series analysis component 108. Further, the one or more input devices 106 can be employed to set one or more runtime thresholds (e.g., describing the maximum amount of time that can be allotted to the automated machine learning process), pipeline thresholds (e.g., describing one or more limitations on which machine learning pipelines can be selected to facilitate the machine learning task), and/or knowledge thresholds (e.g., describing one or more limitations on which knowledge databases can be utilized to facilitate the machine learning task).”; therefore, a set of preliminary structured data (time series data) and tasks to be performed by machine-learning pipelines are obtained);

training a prediction model that predicts performance of ML pipelines in performing the tasks using the preliminary ML pipelines (Chen, Par. [0046], “The learner component 112 can employ one or more meta transfer learning techniques to identify the one or more machine learning pipelines of interest. For example, previous executions of the candidate machine learning pipelines can result in one or more observations regarding the machine learning pipelines' performance. These observations can be captured as meta-data associated with the machine learning pipelines. The meta-data can regard how well the machine learning pipeline accomplished a given machine learning task with respect to one or more evaluation metrics (e.g., accuracy of predictions and/or classifications).”; thus, the learner component (prediction model) may be trained, utilizing meta transfer learning, to help identify/predict performance of ML pipelines in performing a given machine learning task. For explicit recitation of the learner component/prediction model “predicting” performance, see Chen, Par. [0049]), the preliminary ML pipelines synthesized as different approaches for performing the tasks (Chen, Par. [0026], “For example, the machine learning task can be executed across the ensemble of machine learning pipelines, wherein respective machine learning pipelines can be assigned weight values to delineate the weight of the outputs. Based on the output of the machine learning pipeline ensemble, one or more embodiments can further provide one or more explanations associated with the results of the machine learning task”; thus, the pipelines may be synthesized as an ensemble of different approaches for performing the tasks (e.g., forecasting, anomaly detection, clustering, regression, classification, prediction, a combination thereof, etc., as supported by Chen, Par. [0040]));

obtaining a candidate tabular dataset (Chen, Par. [0070], “For instance, the feature component 602 can select one or more knowledge databases from the knowledge library 122 based on, but not limited to: a domain of the time series data, tabular data, a combination thereof, and/or the like. […] For example, the semantic relationships and/or rules can facilitate the feature component 602 in transforming one or more data points of the time series data into one or more formats compatible with identified machine learning pipelines.”; therefore, a candidate tabular dataset may be obtained);

predicting, using the prediction model, performance of a plurality of candidate ML pipelines for performing the tasks on the candidate tabular dataset (Chen, Par. [0049], “The learner component 112 can compare the meta-data of the machine learning pipelines with the characteristics of the time series data to predict how the respective machine learning pipelines will perform on the time series data. By employing the meta-data in the comparison, the learner component 112 can leverage insights previously developed by the machine learning pipelines to predict one or more performance metrics with regards to the time series data.”; thus, a prediction model (learner component) predicts performance of a plurality of machine learning candidate pipelines in performing different tasks with regard to the time series data/candidate tabular data, as disclosed by Chen, Par. [0070]. This is further supported by Chen, Par. [0050], which mentions that the comparison of the similarities of meta-data and characteristics of time series data may be an indication that a machine learning pipeline can accurately perform the desired machine learning task on the time series data/candidate tabular data);

selecting a threshold number of top-performing candidates of the plurality of candidate ML pipelines as predicted by the prediction model for training to perform the task (Chen, Par.
[0049], “In various embodiments, the learner component 112 can identify a machine learning pipeline for further development by the time series analysis component 108, wherein the predicted performance of the machine learning pipeline with respect to the time series data based on the meta-data is greater than a defined threshold with regards to a defined performance metric (e.g., an accuracy metric). In one or more embodiments, the learner component 112 can narrow the field of machine learning pipeline candidates based on the predicted runtime (e.g., determined from the meta-data) associated with the machine learning pipelines in order to meet one or more runtime thresholds defined by the one or more input devices 106.”, therefore, a threshold number (according to a performance metric, such as accuracy or runtime) of top-performing machine learning candidate pipelines of a plurality may be identified and selected for further training/optimization, based on the performance metrics predicted by the prediction model/learner component. Further, it is stated in Par. [0045] that the number of machine learning pipelines identified and/or ranked by the learner component may be defined by the one or more input devices); and identifying a top-performing ML pipeline based on performance of the trained top-performing candidates (Chen, Par. [0062], “For example, the joint optimization component 302 can allocate the next additional data subset to the highest priority machine learning pipeline, as determined by the ranking.”, therefore, based on the performance ranking of top-performing candidates, a top-performing ML pipeline (highest priority ML pipeline) may be identified). Chen does not explicitly disclose obtaining a set of preliminary tabular datasets […] However, Cai teaches obtaining a set of preliminary tabular datasets […] (Cai, Pg. 
1, “Formally, structured data can be viewed as a logical table of 𝑛 rows (tuples/samples) and 𝑚 columns (attributes/features) [11, 32], which is extracted from relational databases via core relational operations such as select, project and join.”, therefore, a set of preliminary tabular datasets (structured data organized as a logical table of n rows and m columns) is obtained from relational databases – this is better depicted by Figure 1 on Pg. 2) It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of claim 1, as disclosed by Chen, to include obtaining a set of preliminary tabular datasets, as disclosed by Cai. One of ordinary skill in the art would have been motivated to make this modification to enable the use of preliminary tabular datasets, which may be obtained and evaluated by the machine learning pipelines for data-driven decision making, identifying risks and opportunities, and/or extracting useful data insights (Cai, Pg. 1, “Relational databases are the de facto standard for storing and querying structured data that are critical to the operation of most businesses [27, 32, 40, 45]. They capture a huge wealth of information that can be used for data-driven decision making, and for identifying risks and opportunities. Extracting insights from data for decision making requires advanced analytics. In particular, deep learning, which is much more complex than statistical aggregation, has recently shown great promise.”). Regarding Claim 2, Chen in view of Cai teaches the method of claim 1, wherein training the prediction model comprises: splitting the set of preliminary tabular datasets into a training subset and a validation subset (Chen, Par. [0059], “For example, the data subset can be further split into training and testing sets.”, therefore, the set of datasets may be split into a training subset and a validation subset. Note: While Chen briefly discloses the use of tabular data in Par.
[0070], Chen does not explicitly disclose obtaining a set of preliminary tabular datasets (plural) – See introduction of Cai reference below for explicit teaching of obtaining a set of preliminary tabular datasets); training each of the preliminary ML pipelines with the training subset; confirming performance of the preliminary ML pipelines with the validation subset; recording the performance of the preliminary ML pipelines (Chen, Par. [0059], “The joint optimization component 302 can then train the identified machine learning pipelines on the training set and score the machine learning pipelines on the testing set, wherein the resulting scores can be recorded (e.g., stored in the one or more memories 114).”, thus, each of the ML pipelines may be trained with the training subset, the performance of the ML pipelines may be confirmed with the validation/testing subset, and the performance of the ML pipelines is recorded); and training the prediction model using the performance of the preliminary ML pipelines and dataset meta-features of the set of preliminary tabular datasets and pipeline meta-features of the preliminary ML pipelines (Chen, Par. [0048], “One or more meta transfer learning algorithms executed by the learner component 112 can compare the meta-data of the candidate machine learning pipelines with one or more characteristics of the time series data subject to analysis. To facilitate a comparison of the time series data and the meta-data of the machine learning pipeline candidates, the learner component 112 can execute one or more data pre-processing algorithms for data: cleaning, resampling, balancing, label encoding, missing value imputation, smoothing, filtering, normalizing, detrending, one hot encoding, a combination thereof, and/or the like. Additionally, the learner component 112 can execute one or more feature extraction algorithms to facilitate the comparison”, therefore, the prediction model/learner component may be trained (using meta transfer learning) by using the performance of the ML pipelines (See subsequent Par.
[0049]) and dataset meta-features (characteristics of the time series data) and pipeline meta-features (meta-data of the ML pipelines)). Regarding Claim 3, Chen in view of Cai teaches the method of claim 2, further comprising: extracting the dataset meta-features from the set of preliminary tabular datasets, the dataset meta-features including characteristics of a given tabular dataset of the preliminary tabular datasets (Chen, Par. [0048], “One or more meta transfer learning algorithms executed by the learner component 112 can compare the meta-data of the candidate machine learning pipelines with one or more characteristics of the time series data subject to analysis. Example characteristics of the time series data that can be determined and/or extracted by the learner component 112 can include, but are not limited to: variations in the data, data skewness, kurtosis, data trends, seasonality in the data, a Hurst parameter associated with the data, a combination thereof, and/or the like.”, thus, dataset meta-features are extracted, including characteristics of a given structured time series dataset. Note: While Chen briefly discloses the use of tabular data in Par. [0070], Chen does not explicitly disclose obtaining a set of preliminary tabular datasets (plural) – See introduction of Cai reference below for explicit teaching of obtaining a set of preliminary tabular datasets); and extracting the pipeline meta-features from the preliminary ML pipelines, the pipeline meta-features including characteristics of a given ML pipeline of the preliminary ML pipelines (Chen, Par. [0046], “These observations can be captured as meta-data associated with the machine learning pipelines. The meta-data can regard how well the machine learning pipeline accomplished a given machine learning task with respect to one or more evaluation metrics (e.g., accuracy of predictions and/or classifications).
In various embodiments, the meta-data can characterize features of the machine learning pipeline, datasets analyzed by the machine learning pipeline, and/or interdependencies between the machine learning pipelines and the datasets.”, therefore, a plurality of pipeline meta-features are extracted from the ML pipelines, including characteristics of a given ML pipeline). Regarding Claim 4, Chen in view of Cai teaches the method of claim 3, wherein the characteristics of the given tabular dataset of the preliminary tabular datasets include one or more of a number of rows, a number of features, a presence of a number, a presence of missing values, a presence of a number category, a presence of a string category, a presence of text, a median, a mean, a mode, a distribution, a maximum value, a minimum value, and a label for categories of information (Cai, Pg. 1, “Formally, structured data can be viewed as a logical table of 𝑛 rows (tuples/samples) and 𝑚 columns (attributes/features) [11, 32], which is extracted from relational databases via core relational operations such as select, project and join.”, therefore, the characteristics of the preliminary tabular datasets include one or more of a number of rows or a number of features, etc.) The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 5, Chen in view of Cai teaches the method of claim 3, wherein the characteristics of the given ML pipeline of the preliminary ML pipelines include a set of preprocessing components present in the given ML pipeline, one or more ML models included in the preliminary ML pipelines, or both (Chen, Par. [0047], “In a further instance, the meta-data can characterize a topology of the one or more machine learning pipelines (e.g., via a string of identifiers sequenced based on the flow and/or stages of the machine learning pipelines). 
For example, machine learning pipeline topology can comprise a collection of algorithms and/or an execution sequence of the algorithms, wherein the topology can be represented by a sequence of words. […] Thereby the meta-data can describe the structure of the machine learning pipelines, […]”, thus, the characteristics of a given ML pipeline may include the structure/topology of the pipeline, including the ML models/algorithms included within the pipeline). Regarding Claim 6, Chen in view of Cai teaches the method of claim 2, wherein the recorded performance of each of the preliminary ML pipelines comprise one or more scores (Chen, Par. [0059], “The joint optimization component 302 can then train the identified machine learning pipelines on the training set and score the machine learning pipelines on the testing set, wherein the resulting scores can be recorded (e.g., stored in the one or more memories 114).”, thus, the recorded performance of each of the ML pipelines may comprise one or more scores) and an execution time (Chen, Par. [0049], “In one or more embodiments, the learner component 112 can narrow the field of machine learning pipeline candidates based on the predicted runtime (e.g., determined from the meta-data) associated with the machine learning pipelines in order to meet one or more runtime thresholds defined by the one or more input devices 106.”, thus, the performance metrics of each of the ML pipelines may also include a runtime). Regarding Claim 7, Chen in view of Cai teaches the method of claim 3, wherein inputs to the prediction model include the dataset meta-features and the pipeline meta-features (Chen, Par. [0049], “The learner component 112 can compare the meta-data of the machine learning pipelines with the characteristics of the time series data to predict how the respective machine learning pipelines will perform on the time series data. 
By employing the meta-data in the comparison, the learner component 112 can leverage insights previously developed by the machine learning pipelines to predict one or more performance metrics with regards to the time series data.”, thus, inputs to the prediction model/learner component include the dataset meta-features (characteristics of the time series data) and the pipeline meta-features (meta-data of the machine learning pipelines)). Regarding Claim 8, Chen in view of Cai teaches the method of claim 1, further comprising: after obtaining the candidate tabular dataset: extracting second dataset meta-features from the candidate tabular dataset (Chen, Par. [0070], “For instance, the feature component 602 can select one or more knowledge databases from the knowledge library 122 based on, but not limited to: a domain of the time series data, tabular data, a combination thereof, and/or the like. […] For example, the semantic relationships and/or rules can facilitate the feature component 602 in transforming one or more data points of the time series data into one or more formats compatible with identified machine learning pipelines.”, therefore, one or more second dataset meta-features may be extracted from the candidate tabular dataset); obtaining the plurality of candidate ML pipelines (Chen, Par. [0046], “The learner component 112 can employ one or more meta transfer learning techniques to identify the one or more machine learning pipelines of interest.”, thus, the plurality of candidate ML pipelines are identified and obtained); extracting second pipeline meta-features from each of the plurality of candidate ML pipelines (Chen, Par.
[0047], “For instance, the meta-data can describe one or more features of the previously analyzed datasets, such as, but not limited to: a domain of the dataset, number of datapoints in the dataset, number of attributes, percentage of missing values, the scope of the algorithms to be considered, the candidate algorithm's hyperparameter values, the selection of variables, a combination thereof, and/or the like.”, thus, one or more second pipeline meta-features/meta-data may be extracted from each of the candidate ML pipelines. The remainder of Chen Par. [0047] further outlines different types of meta-data/meta-features which may also be extracted); and combining the second pipeline meta-features and the second dataset meta-features (Chen, Par. [0070], “For example, the feature component 602 can perform one or more of the following data transformation techniques based on one or more rules defined within the or more knowledge databases, including, but not limited to: imputation (e.g., categorical and/or numerical), outlier processing (e.g., outlier detection and/or capping), binning, logarithm transformations, one-hot encoding, grouping operations, feature splitting, feature scaling (e.g., normalization and/or standardization), deletion of data, a combination thereof, and/or the like.”, thus, in order to leverage semantic relationship insights (See Par. 
[0043] which explicitly defines this as relationships between data points and/or machine learning model features) between the characteristics of time series data (second dataset meta-features) and the pipeline meta-data (second pipeline meta-features), the feature component may transform these features using feature grouping/combining operations). Regarding Claim 9, Chen in view of Cai teaches the method of claim 1, wherein the top-performing candidates include the threshold number of the plurality of candidate ML pipelines selected based on an execution time of each of the plurality of candidate ML pipelines (Chen, Par. [0045], “In one or more embodiments, a machine learning pipeline candidate pool can be the entirety of the pipeline library 120, a subset of the pipeline library 120 (e.g., defined by one or more runtime or pipeline thresholds defined by the one or more input devices 106), and/or a defined set of machine learning pipelines (e.g., wherein the one or more input devices 106 can be employed to target specific machine learning pipelines from the pipeline library 120 for consideration by the learner component 112).”, therefore, the top-performing candidates may include a threshold number of candidate ML pipelines based on an execution time/runtime of each of the candidates). Regarding Claim 10, Chen in view of Cai teaches the method of claim 1, wherein the top-performing candidates include the threshold number of the plurality of candidate ML pipelines selected based on an execution time (Chen, Par.
[0045], “In one or more embodiments, a machine learning pipeline candidate pool can be the entirety of the pipeline library 120, a subset of the pipeline library 120 (e.g., defined by one or more runtime or pipeline thresholds defined by the one or more input devices 106), and/or a defined set of machine learning pipelines (e.g., wherein the one or more input devices 106 can be employed to target specific machine learning pipelines from the pipeline library 120 for consideration by the learner component 112).”, therefore, the top-performing candidates may include a threshold number of candidate ML pipelines based on an execution time/runtime of each of the candidates) and performance score of each of the plurality of candidate ML pipelines (Chen, Par. [0060], “Further, the joint optimization component 302 can rank the identified machine learning pipelines based on the predicted performance scores at the target sample size. In one or more embodiments, as the predicted score decreases, the predicted accuracy of the machine learning pipeline at the target sample size can increase. The ranking can facilitate the joint optimization component 302 in selecting those machine learning pipelines anticipated to receive the most benefit from further optimization”, thus, the top-performing candidates may be further narrowed down based on the performance score of each of the plurality of candidate ML pipelines). Regarding Claim 11, Chen in view of Cai teaches the method of claim 1, wherein the prediction model is configured to adapt to variable sizes of the set of preliminary tabular datasets and the candidate tabular dataset (Cai, Pg. 9, “We further evaluate the training and inference efficiency of ARM-Net on the adopted benchmark datasets of different attribute fields size (𝑚) in Table 3, which shows the training/inference throughput (the number of training/inference tuples per second). 
We consistently adopt 𝐾=4, 𝑜=64 and 𝑛𝑒=10 for the benchmark ARM-Net, and train the model on one CPU or GPU.”, therefore, the model may be configured to adapt to variable sizes of the set of the tabular datasets – this is further supported by Table 3 on Pg. 9 which shows the different datasets that are processed/evaluated by ARM-Net, all of which have different attribute field sizes). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of claim 1, as disclosed by Chen in view of Cai to include wherein the prediction model is configured to adapt to variable sizes of the set of preliminary tabular datasets and the candidate tabular dataset, as disclosed by Cai. One of ordinary skill in the art would have been motivated to make this modification to enable the prediction model to adapt to variable sizes of tabular datasets, which may improve efficiency and reduce computational complexity, when handling heterogeneous data (Cai, Pg. 9, “From Table 3, we can observe that ARM-Net is rather efficient in both training and inference, whose throughputs are high and decrease linearly with the size of attribute fields 𝑚. This is in line with our analysis in Section 3.4 that the computational complexity of ARM-Net scales linearly to 𝑚. Further, we find that GPU can considerably speed up both training and inference, with a ratio from 23.92x to 38.11x for the benchmark ARM-Net.”). Regarding Claim 12, Chen in view of Cai teaches the method of claim 11, wherein the adaptation to variable sizes includes: generating dataset-level meta-features of the preliminary tabular datasets (Cai, Pg. 6, “Meanwhile, the proposed gated attention mechanism also encourages local interpretability, which supports feature attribution on a per-input basis. 
Note that each exponential neuron specifies a sparse set of attribute fields that are being used dynamically via the attention alignment.”, therefore, dataset-level (per-input basis) meta-features may be generated from the tabular/structured datasets); generating column-level meta-features of the preliminary tabular datasets (Cai, Pg. 6, “We can thus aggregate the absolute values of all the value vectors of exponential neurons for global interpretability, which indicates the general focus of ARM-Net on each attribute field in the data, namely the feature importance of attribute fields.”, thus, column-level (attribute field level) meta-features may be generated from the tabular/structured datasets); and combining the dataset-level meta-features and the column-level meta-features of the preliminary tabular datasets (Cai, Pg. 6, “Therefore, we can identify the cross features captured dynamically and similarly, obtain relative feature importance by aggregating the interaction weights of all exponential neurons for each instance. The cross feature terms captured can also be analyzed globally/locally for understanding the internal modeling process”, thus, the dataset-level meta-features (based on input data) and column-level meta-features (based on the attribute fields) may be combined through identifying cross features, which may combine two or more existing features to capture interactions/non-linear relationships within the tabular dataset). The reasons of obviousness have been noted in the rejection of Claims 1 and 11 above and applicable herein. 
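The variable-size adaptation discussed for Claims 11 and 12 can be illustrated with a short sketch. This is an illustrative approximation only, not code from Chen or Cai; the function names and the particular statistics chosen are assumptions. The point it demonstrates is the claimed structure: dataset-level and column-level meta-features are generated separately, and the column-level statistics are aggregated so that tables of any width map to a fixed-length combined vector.

```python
import statistics

def dataset_level_meta_features(table):
    """Dataset-level meta-features: row count, column count, missing rate."""
    n_rows = len(table)
    n_cols = len(table[0]) if table else 0
    n_cells = n_rows * n_cols
    n_missing = sum(1 for row in table for v in row if v is None)
    return [n_rows, n_cols, n_missing / n_cells if n_cells else 0.0]

def _agg(values):
    # Aggregating across columns keeps the output length fixed
    # regardless of how many columns the table has.
    return statistics.fmean(values) if values else 0.0

def column_level_meta_features(table):
    """Per-column statistics, aggregated to a fixed-length summary."""
    means, stds = [], []
    for col in zip(*table):
        vals = [v for v in col if isinstance(v, (int, float))]
        if vals:
            means.append(statistics.fmean(vals))
            stds.append(statistics.pstdev(vals))
    return [_agg(means), _agg(stds)]

def combined_meta_features(table):
    """Combine dataset-level and column-level meta-features."""
    return dataset_level_meta_features(table) + column_level_meta_features(table)
```

In this sketch, a 2×2 table and a 3×3 table both produce a five-element meta-feature vector, which is the sense in which the representation adapts to variable dataset sizes.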
Regarding Claim 15, Chen in view of Cai teaches a system comprising: one or more processors; and one or more non-transitory computer-readable storage media configured to store instructions that, in response to being executed, cause a system to perform operations (Chen, Claim 1, “A system, comprising: a memory that stores computer executable components; and a processor, operably coupled to the memory, and that executes the computer executable components stored in the memory, wherein the computer executable components comprise: […]”, therefore, a system comprising one or more processors and one or more non-transitory computer-readable storage media (See Chen Par. [0128]) is disclosed), the operations comprising: […] The rest of the claim language in Claim 15 recites substantially the same limitations as Claim 1, in the form of a system, therefore it is rejected under the same rationale. The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Claim 16 recites substantially the same limitations as Claim 2 in the form of a system, therefore it is rejected under the same rationale. Claim 17 recites substantially the same limitations as Claim 3 in the form of a system, therefore it is rejected under the same rationale. Claim 18 recites substantially the same limitations as Claim 4 in the form of a system, therefore it is rejected under the same rationale. Claim 19 recites substantially the same limitations as Claim 5 in the form of a system, therefore it is rejected under the same rationale. Claim 20 recites substantially the same limitations as Claim 7 in the form of a system, therefore it is rejected under the same rationale. 8. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (hereinafter Chen) (US PG-PUB 20220036246), in view of Cai et al. (hereinafter Cai) (“ARM-Net: Adaptive Relation Modeling Network for Structured Data”), further in view of Sharma et al. 
(hereinafter Sharma) (US PG-PUB 20200089650). Regarding Claim 13, Chen in view of Cai teaches the method of claim 8. Chen in view of Cai do not explicitly disclose after extracting the second dataset meta-features from the candidate tabular dataset: obtaining a list of options for preprocessing components; removing one or more of the options without related second dataset meta-features from the list; and generating the plurality of candidate ML pipelines based on the list of options for preprocessing components. However, Sharma teaches after extracting the second dataset meta-features from the candidate tabular dataset: obtaining a list of options for preprocessing components (Sharma, Par. [0008], “There are several methods for each of the preprocessing data cleansing operations listed above that can be chosen from and applied to the data. Different approaches are better suited to different kinds of data. As is known, each preprocessing operation can greatly influence the results of the machine learning algorithms, and even the selection of a given type of each of the preprocessing operations can greatly influence the results of the machine learning algorithms.”, thus, a list of options for preprocessing components may be obtained); removing one or more of the options without related second dataset meta-features from the list (Sharma, Par. [0042], “The FIG. 3 approach is able to achieve better predictions and improve the choice of preprocessing, automatically. As with FIG. 2, the FIG. 3 approach of certain example embodiments involves reading the data and identifying the data types for the different data records in step S302, and filling in missing values via imputation in step S304. However, in step S306, numerical variables are passed through a program (described in greater detail below) to identify whether they can be treated like categorical variables. If so, the variables are flagged and treated as categorical variable. 
If not, they are treated as numerical variables. In step S308, the decision of which preprocessing operations are to be applied will be predicted by a trained machine learning algorithm.”, therefore, the second dataset meta-features (which may include characteristics of the data, including analysis of missing values and/or categorical vs. numerical variables) may be evaluated to determine which preprocessing options should be included and/or removed. This is better depicted by Sharma Figure 3); and generating the plurality of candidate ML pipelines based on the list of options for preprocessing components (Sharma, Claim 1, “transforming the data in the dataset by selectively applying to the data the one or more missing value imputation operations and the one or more other preprocessing data cleansing-related operations, in accordance with the independent variables associated with the data; building the machine learning model based on the transformed data”, thus, the machine learning models may be built/generated based on the selected list of preprocessing options). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of claim 8, as disclosed by Chen in view of Cai to include after extracting the second dataset meta-features from the candidate tabular dataset: obtaining a list of options for preprocessing components; removing one or more of the options without related second dataset meta-features from the list; and generating the plurality of candidate ML pipelines based on the list of options for preprocessing components, as disclosed by Sharma. 
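The option-pruning behavior attributed to Sharma above can be sketched in a few lines. All names and the option-to-meta-feature mapping here are hypothetical, introduced only for illustration and not taken from Sharma's disclosure; the sketch shows the claimed sequence of obtaining a list of preprocessing options, removing options without related dataset meta-features, and generating candidate pipelines from the remainder.

```python
# Hypothetical mapping from each preprocessing option to the dataset
# meta-feature that makes the option relevant.
PREPROCESSING_OPTIONS = {
    "impute_missing": "has_missing_values",
    "one_hot_encode": "has_categorical",
    "text_vectorize": "has_text",
    "scale_numeric": "has_numeric",
}

def prune_options(meta_features):
    """Remove options whose related meta-feature is absent, e.g. drop
    the imputer when the candidate dataset has no missing values."""
    return [opt for opt, feat in PREPROCESSING_OPTIONS.items()
            if meta_features.get(feat, False)]

def generate_candidate_pipelines(meta_features, models):
    """Each candidate pipeline = retained preprocessing steps + one model."""
    steps = tuple(prune_options(meta_features))
    return [steps + (model,) for model in models]
```

With a dataset exhibiting only missing values and numeric columns, only the imputation and scaling options survive pruning, and each candidate pipeline pairs those steps with one of the available models.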
One of ordinary skill in the art would have been motivated to make this modification to enable the selection of optimal preprocessing component options based on evaluating features of the tabular dataset and determining which approaches are better suited for cleansing the dataset, hence improving accuracy and efficiency of the plurality of machine learning pipelines (Sharma, Par. [0008], “There are several methods for each of the preprocessing data cleansing operations listed above that can be chosen from and applied to the data. Different approaches are better suited to different kinds of data. As is known, each preprocessing operation can greatly influence the results of the machine learning algorithms, and even the selection of a given type of each of the preprocessing operations can greatly influence the results of the machine learning algorithms.”). 9. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (hereinafter Chen) (US PG-PUB 20220036246), in view of Cai et al. (hereinafter Cai) (“ARM-Net: Adaptive Relation Modeling Network for Structured Data”), further in view of Mohr et al. (hereinafter Mohr) (“Predicting Machine Learning Pipeline Runtimes in the Context of Automated Machine Learning”). Regarding Claim 14, Chen in view of Cai teaches the method of claim 1. Chen in view of Cai do not explicitly disclose: training a failure model that predicts probability of ML pipelines of failing to perform the tasks using the preliminary ML pipelines; predicting, using the failure model, probability of failure of the plurality of candidate ML pipelines in performing the tasks on the candidate tabular dataset; and removing a number of the plurality of candidate ML pipelines with the probability of failure above a failure probability threshold. However, Mohr teaches: training a failure model that predicts probability of ML pipelines of failing to perform the tasks using the preliminary ML pipelines (Mohr, Pg.
6, “We now answer the question of how accurate the (regression-based) classifier (cf. Section 2) predicts timeouts for pipelines that only consist of an atomic algorithm. We check for each t ∈ {0.5, 1, 5, 10, 15, 20, 30, 60} (minutes) and each atomic algorithm whether the rejection rule will correctly predict a timeout. For each timeout, we use a different reference size of datasets to check against (because the timeouts are typically adjusted to the dataset size as well).”, thus, a regression-based classifier (failure model) is trained to predict probability of ML pipeline timeouts. According to Applicant’s specification Par. [00105], “In some embodiments, the failure model may be configured to predict candidate pipelines that are likely to fail due to an error or a timeout.” – hence, Examiner asserts that predicting a likelihood of a timeout is analogous to predicting probability of ML pipelines ‘failing to perform the tasks using the preliminary ML pipelines’, as supported by Applicant’s specification. Examiner also notes that Pg. 2 of Mohr similarly describes a pipeline execution as either returning results or returning with a failure (timeout)); predicting, using the failure model, probability of failure of the plurality of candidate ML pipelines in performing the tasks on the candidate (Chen in view of Cai are relied upon for teaching of the dataset comprising a tabular dataset – See rejection of Claim 1 above) dataset (Mohr, Pg. 2, “To predict whether the evaluation of such a pipeline will timeout, we predict its runtime via regression, and then pass this prediction together with the time-bound to some decision rule ℝ × ℝ → {0, 1}. The simplest rule is to multiply the predicted runtime with the number of executions required by the cross-validation, and check whether it exceeds the time bound.”, therefore, the regression-based classifier (failure model) predicts probability of failure (timeout, as supported by Applicant’s specification Par.
[00105]) of the plurality of candidate ML pipelines. This is better depicted by Mohr Figure 1 on Pg. 2); and removing a number of the plurality of candidate ML pipelines with the probability of failure above a failure probability threshold (Mohr, Pg. 6, “The top figure indicates the number of decisions for rejecting an execution in order to put the relative plots below into context. First of all, we can see that most preprocessors do never timeout, and the predictor decides almost always correctly to allow their execution. A similar situation occurs with some classifiers, in particular decision stumps, Naive Bayes, or the random trees and forests. Among the cases in which the guard decides to allow execution, there are very few cases in which a timeout occurs.”, therefore, pipelines which are predicted to have a timeout above a predetermined period of time (See preceding paragraph on Pg. 6 which specifies times at which the models are checked for timeouts) may not be chosen for execution (i.e., they are removed from the candidate pipelines for execution). This is also better illustrated by Mohr Figure 4 on Pg. 6).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of claim 1, as disclosed by Chen in view of Cai, to include training a failure model that predicts probability of ML pipelines of failing to perform the tasks using the preliminary ML pipelines; predicting, using the failure model, probability of failure of the plurality of candidate ML pipelines in performing the tasks on the candidate tabular dataset; and removing a number of the plurality of candidate ML pipelines with the probability of failure above a failure probability threshold, as disclosed by Mohr.
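As a hedged illustration (hypothetical names and runtime values, not Mohr's actual implementation), the "simplest rule" quoted above — multiply the regression-predicted runtime by the number of cross-validation executions and check whether it exceeds the time bound — and the resulting removal of high-risk candidates could be sketched as:

```python
def predict_timeout(predicted_runtime_s, n_cv_executions, time_bound_s):
    """Guard rule in the style of Mohr's decision rule: returns True
    (reject) when the estimated total evaluation time exceeds the bound."""
    return predicted_runtime_s * n_cv_executions > time_bound_s

def filter_candidates(candidates, time_bound_s, n_cv_executions=10):
    """Keep only the candidate pipelines the guard allows to execute.

    `candidates` maps a pipeline name to its regression-predicted
    single-run runtime in seconds (names and values are hypothetical).
    """
    return {
        name: runtime
        for name, runtime in candidates.items()
        if not predict_timeout(runtime, n_cv_executions, time_bound_s)
    }

# Hypothetical runtime predictions from the regression model (seconds).
preds = {"naive_bayes": 4.0, "random_forest": 30.0, "svm_rbf": 400.0}

# With a 10-minute bound and 10 CV executions, svm_rbf (400 s x 10
# executions = 4000 s) is predicted to time out and is removed.
survivors = filter_candidates(preds, time_bound_s=600.0)
print(sorted(survivors))  # ['naive_bayes', 'random_forest']
```

A probability-based variant would replace the hard rule with a threshold on the classifier's predicted failure probability, as in the claim language.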
One of ordinary skill in the art would have been motivated to make this modification to improve execution efficiency of the ML pipelines, by avoiding pipelines that have an increased probability of failing and/or timing out (Mohr, Pg. 10, “We stress again that our primary goal is not to improve the performance of the AutoML tool but to improve execution efficiency. Since our approach does not improve any particular model, we cannot generally expect that the performance of the AutoML tool improves. It occasionally may improve results if avoided timeouts yield to the execution of models that otherwise would not have been executed. However, we consider this rather a desirable side effect. The main goal is to reduce wasted CPU time as much as possible, because large scale wasted CPU time is not only a severe ethical concern with the ambience (energy, CO2) but also leaves the AutoML user with the “what if?” question: […]”).

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj whose telephone number is (571)272-0829. The examiner can normally be reached Monday - Thursday 8:30am - 5:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVIKA S MAHARAJ/
Examiner, Art Unit 2123

Prosecution Timeline

May 25, 2023
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585948
NEURAL PROCESSING DEVICE AND METHOD FOR PRUNING THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579426
Training a Neural Network having Sparsely-Activated Sub-Networks using Regularization
2y 5m to grant Granted Mar 17, 2026
Patent 12572795
ANSWER SPAN CORRECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12561577
AUTOMATIC FILTER SELECTION IN DECISION TREE FOR MACHINE LEARNING CORE
2y 5m to grant Granted Feb 24, 2026
Patent 12554969
METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF WHITE MATTER HYPERINTENSITIES IN MAGNETIC RESONANCE BRAIN IMAGES
2y 5m to grant Granted Feb 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
55%
Grant Probability
63%
With Interview (+7.7%)
5y 0m
Median Time to Grant
Low
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
