Prosecution Insights
Last updated: April 19, 2026
Application No. 17/450,670

ENHANCED MACHINE LEARNING PIPELINES WITH MULTIPLE OBJECTIVES AND TRADEOFFS

Status: Final Rejection §103
Filed: Oct 12, 2021
Examiner: TRIEU, EM N
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 10m
Grant Probability With Interview: 53%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 30 granted / 63 resolved; -7.4% vs TC avg)
Interview Lift: +5.0% (minimal lift for resolved cases with interview)
Avg Prosecution (typical timeline): 3y 10m
Career History: 92 total applications across all art units; 29 currently pending

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 63 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the claims filed on 01/22/2026. Claims 1-19 are presented for examination.

Response to Arguments

In reference to applicant's arguments regarding the rejections under 35 U.S.C. § 103: Applicant's argument regarding the § 103 rejection is based on the claim amendment filed on 05/30/2025 and includes the newly amended limitations. It has been fully considered but is moot in view of the new grounds of rejection presented below, necessitated by the amendment.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 5-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wee et al. (Pub. No. US 2020/0249637, hereinafter Wee) in view of Fang et al. (Pub. No. US 2022/0067752, hereinafter Fang) and further in view of Wulff et al. (Pub. No. US 2020/0342346, hereinafter Wulff).

Regarding claim 1, Wee teaches a method for generating machine learning pipelines in a computing environment by one or more processors comprising: (Wee, [Par.0017], "A ensemble control program according to the present invention is an ensemble control program mounted on a computer which combines different types of plant control, the program causing the computer to perform: an optimizing process of optimizing an objective function that is a cost function to be minimized for calculating actions and outputting a control action; a predicting process of predicting based on machine learning models and outputting a predicted action, and a combining or switching process of combining or switching actions to maximize prediction or control performance as best control action based on the output actions." Examiner's note: the system generates the plurality of subcontrollers based on the machine learning technique to select the best action, which is considered to be the machine learning pipeline.);

receiving volumes of data to train a machine learning model to provide automated evaluation (Wee, [Par.0060, 0065], "[0060] The proposed solution is to build different types of controllers which are suitable to different kinds of tasks by employing all known public and private information from the car manufacturers. The collected data from the controlled car and other cars can also be used for training and updating the learning-based and model-based controllers." and "[0065] A main controller 108 is considered as part of the control ensemble system 100 for additional guarantees based on the plant dynamics and constraints. The control action computed by the classifier/combiner 105 can be used as input to the main controller 108, which can be a model-based predictive controller. Compared to a possible use of model predictive controller as subcontroller 103, the main difference in using a model predictive controller as the main controller 108 is to assure that the final control actions satisfy all constraints and at the same time be close to the output of the classifier/combiner 105." Examiner's note: the collected data relating to the controlled car and other cars can be used to train and update the learning model; therefore, the collected data relating to the controlled/particular car is considered to be the volume of data.),

wherein said data related to in said computing environment having one or more components, modules, functions and services used to provide a process flow in a computer system (Wee, [Par.0065-0068], "[0065] A main controller 108 is considered as part of the control ensemble system 100 for additional guarantees based on the plant dynamics and constraints. The control action computed by the classifier/combiner 105 can be used as input to the main controller 108, which can be a model-based predictive controller. Compared to a possible use of model predictive controller as subcontroller 103, the main difference in using a model predictive controller as the main controller 108 is to assure that the final control actions satisfy all constraints and at the same time be close to the output of the classifier/combiner 105… [0068] Next, an overview of the present invention will be described. FIG. 6 depicts a block diagram illustrating an overview of an ensemble control system of the present invention. An ensemble control system 80 (for example ensemble control system 100) according to the present invention is an ensemble control system which combines different types of plant control, the ensemble control system comprising: a plurality of subcontrollers 81 (for example learned subcontrollers 102, model predictive subcontroller(s) 103, alternative subcontroller(s) 104) each of which outputs action (for example control action, predicted action) for the plant control based on a prediction result by a predictor (for example predictors 101); and a combiner or switch 82 (for example classifier/combiner 105) which combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller 81, wherein subcontrollers 81 include at least two types of subcontrollers, a first type subcontroller is an optimization-based subcontroller (for example model predictive subcontroller 103) which optimizes an objective function that is a cost function to be minimized for calculating actions and outputs a control action, and a second type subcontroller (for example learned subcontroller 102) is a prediction-subcontroller which predicts based on machine learning models and outputs a predicted action." Examiner's note: the data is trained by the machine learning model to predict the best action based on the predicted actions of the subcontrollers (process flows).);

receiving data relating to completion of said process flow by a machine learning optimizer that optimizes multiple objectives relating to said process flow (Wee, [Par.0055], "[0055] In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller. Furthermore, subcontrollers include at least two types of subcontrollers, the model predictive subcontroller 103 and the learned subcontroller 102 (hereinafter first type subcontroller and second type subcontroller). The first type subcontroller is an optimization-based subcontroller which optimizes an objective function that is a cost function to be minimized for calculating actions and outputs a control action. The second type subcontroller is a prediction-subcontroller which predicts based on machine learning models and outputs a predicted action."),

wherein said objectives are in conflict with one another (Wee, [Par.0017], "A ensemble control program according to the present invention is an ensemble control program mounted on a computer which combines different types of plant control, the program causing the computer to perform: an optimizing process of optimizing an objective function that is a cost function to be minimized for calculating actions and outputting a control action; a predicting process of predicting based on machine learning models and outputting a predicted action, and a combining or switching process of combining or switching actions to maximize prediction or control performance as best control action based on the output actions." Examiner's note: the action is switched to select the best action corresponding to the objective that conflicts with the others, as can be seen at [Par.0061]: "The classifier/combiner 105 can then be chosen so as to maximize the predictive and/or control performance of the subcontrollers based on different performance criteria such as obstacle avoidance, fuel consumption and comfort level. The final control action can be obtained using ensemble methods or using weights based on the relative importance that can be related to past performance."),

analyzing by said machine learning optimizer a plurality of tradeoffs and said plurality of objectives (Wee, [Par.0055-0061], "In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller. Furthermore, subcontrollers include at least two types of subcontrollers, the model predictive subcontroller 103 and the learned subcontroller 102 (hereinafter first type subcontroller and second type subcontroller). The first type subcontroller is an optimization-based subcontroller which optimizes an objective function that is a cost function to be minimized for calculating actions and outputs a control action. The second type subcontroller is a prediction-subcontroller which predicts based on machine learning models and outputs a predicted action… The classifier/combiner 105 can then be chosen so as to maximize the predictive and/or control performance of the subcontrollers based on different performance criteria such as obstacle avoidance, fuel consumption and comfort level. The final control action can be obtained using ensemble methods or using weights based on the relative importance that can be related to past performance." Examiner's note: a plurality of machine learning models (subcontrollers) predict the actions, and the final action is to switch to the best action.),

and their associated weight that are associated with completion of said process flow and one or more machine learning models (Wee, [Par.0043-0044], "The classifier/combiner 105 may decide the best control operation to actuate by comparing the values of certain performance measures on which the input actions returned by the subcontrollers are evaluated, such as distance to surrounding objects, comfort level, safety and energy consumption, and choosing the action that minimizes a weighted sum of the said performance measures. [0044] Also, similar to ensemble methods in machine learning, depending on the scenario and the nature of the control actions, the classifier/combiner 105 may also decide the best control operation from the outputs of the subcontrollers by voting, if for categorical actions, or by averaging, for numerical actions. The quality of resulting new actions from such approaches can also be evaluated using the performance measures described above and can be compared to the individual outputs of the subcontrollers if desired." Examiner's note: the weight or performance measurement is based on the outputs of the subcontrollers.),

and generating one or more instantiated final machine learning pipelines based on the plurality of tradeoffs and objectives (Wee, [Par.0061-0065], "A main controller 108 is considered as part of the control ensemble system 100 for additional guarantees based on the plant dynamics and constraints. The control action computed by the classifier/combiner 105 can be used as input to the main controller 108, which can be a model-based predictive controller. Compared to a possible use of model predictive controller as subcontroller 103, the main difference in using a model predictive controller as the main controller 108 is to assure that the final control actions satisfy all constraints and at the same time be close to the output of the classifier/combiner 105. Note that for computational purposes it is possible to consider only input tracking terms in which the control actions from the classifier/combiner 105 will be used. The main controller 108 then controls the plant 106 by using the control input with the minimum distance from the output of the classifier/combiner 105. [0066] Specifically, the output of the classifier/combiner 105 that is sent to the main controller 108 are the control actions required by the actuator to perform the task, e.g., steering angle and acceleration in autonomous driving. The main controller 108 calculates the final control actions to be actuated by performing optimization with respect to plant dynamics and constraints. In the autonomous driving example, the main controller 108 may be a model predictive controller which solves an optimization problem for finding the steering angle and acceleration closest to the values sent by the classifier/combiner 105, subject to vehicle dynamics and constraints. The (steering and acceleration) values computed by the main controller 108 are the actual control actions that will be actuated in the plant 106.").

However, while Wee teaches generating the machine learning model pipeline to optimize the objective function, it does not teach generating a plurality of initial machine learning pipelines simultaneously to optimize a plurality of objectives; assigning a plurality of tradeoffs with a pair of vectors having a first vector representing a first objective and a second vector representing a second objective, wherein the first objective is a preferred objective compared to the second objective and each objective is analyzed by a plurality of user defined preferences and an associated weight assigned to each of said user assigned preferences; and said plurality of objectives in conflict with one another using said pair of vectors.
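The weighted-sum action selection the examiner cites from Wee [0043] can be sketched in a few lines. This is a hypothetical illustration only: the criteria, weights, and candidate actions below are made-up values, not taken from the reference or the claims.

```python
# Hypothetical sketch of a classifier/combiner as described in Wee [0043]:
# each subcontroller proposes an action, each action is scored on several
# performance measures, and the combiner picks the action minimizing a
# weighted sum of those measures. All names and numbers are illustrative.

def combine(actions, measures, weights):
    """Return the action whose weighted sum of measure costs is smallest.

    actions  -- candidate control actions, one per subcontroller
    measures -- dict: action index -> list of per-criterion costs
    weights  -- per-criterion weights reflecting relative importance
    """
    def cost(i):
        return sum(w * m for w, m in zip(weights, measures[i]))
    return actions[min(range(len(actions)), key=cost)]

# Example: two subcontrollers propose steering actions; the criteria are
# (obstacle-distance penalty, discomfort, energy use).
actions = ["steer_left", "steer_right"]
measures = {0: [0.2, 0.5, 0.1], 1: [0.5, 0.1, 0.1]}
weights = [0.6, 0.3, 0.1]
print(combine(actions, measures, weights))  # -> steer_left
```

Voting (for categorical actions) or averaging (for numerical actions), which Wee [0044] also describes, would replace the `min` selection with a majority or mean over the subcontroller outputs.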
On the other hand, Fang teaches generating a plurality of initial machine learning pipelines simultaneously to optimize a plurality of objectives (Fang, [Par.0061-0062], "In some configurations, the AutoML model may be an offline training pipeline or an online prediction pipeline in the cloud or a decentralized blockchain node. The offline training pipeline includes feature extraction and transformation, parallel model training, model metric evaluation, and model selection. The online prediction pipeline includes feature extraction and transformation, model prediction, and result formatting. [0062] Automated machine learning (AutoML), is a system and methodology that automates various stages of the machine learning process, such as model selection, hyperparameter optimization, etc. The AutoML system takes the labeled data as input, runs a parallel competition to select the best machine learning model that meets the success criteria, and eventually emits a serialized machine learning model that can be deployed in the prediction pipeline.").

Wee and Fang are analogous arts because they have the same field of endeavor of generating a plurality of machine learning models. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the machine learning model pipeline, as taught by Wee, to include generating a plurality of initial machine learning pipelines simultaneously to optimize a plurality of objectives, as taught by Fang. The modification would have been obvious because one of ordinary skill in the art would be motivated to create high-quality machine learning models quickly (Fang, [Par.0063-0064], "The advantage of using AutoML is mostly about efficiency. AutoML helps creates high quality ML models quickly, while using minimal data science labors. [0064] AutoML model can be modified to operate similarly to a traditional machine learning model, by limiting the compute resources. For example, AutoML can be configured to use only 1 CPU and train 1 model (say Gradient Boosting Tree) at a time, which is actually a traditional machine learning model.").

However, neither Wee nor Fang teaches assigning a plurality of tradeoffs with a pair of vectors having a first vector representing a first objective and a second vector representing a second objective, wherein the first objective is a preferred objective compared to the second objective and each objective is analyzed by a plurality of user defined preferences and an associated weight assigned to each of said user assigned preferences; and said plurality of objectives in conflict with one another using said pair of vectors.

On the other hand, Wulff teaches assigning a plurality of tradeoffs with a pair of vectors having a first vector representing a first objective and a second vector representing a second objective (Wulff, [Par.0074], "According to various embodiments, performance evaluation module 604 may aggregate data from any or all tunnels T.sub.i in the network, build a feature vector X.sub.t,i, and update a precision-recall curve for T.sub.i. The precision-recall curve is a way to evaluate the precision-recall tradeoff of the classifier C governed by its decision threshold. At every timestamp t, performance evaluation module 604 may perform an inference step of the classifier C model(s) 602 and compare the actual label L.sub.t with the predicted label L{tilde over ( )}.sub.t for different values of the decision threshold of the classifier. Said differently, a key function of performance evaluation module 604 is verify, using a lookback period, whether a tunnel failure predicted by the classifier using one of the decision thresholds actually occurred." Examiner's note: the precision-recall curve represents the precision-recall tradeoff.);

wherein the first objective is a preferred objective compared to the second objective (Wulff, [Par.0076], "Decision threshold adjuster 606 may employ a number of different strategies, to optimize the precision-recall tradeoff of the prediction model(s) 602. In the simplest embodiment, performance evaluation module 604 may aggregate the mappings between decision thresholds and model performance to build a complete precision-recall curve. In turn, decision threshold adjuster 606 may set the decision threshold at a value that optimizes the precision-recall curve for that tunnel. To do so, decision threshold adjuster 606 may first set a minimum acceptable precision such that the precision >P.sub.Min (usually close to 1, i.e., 100% precision) and then identify the decision threshold that gives the maximum recall that satisfies the precision constraint.");

and each objective is analyzed by a plurality of user defined preferences and an associated weight assigned to each of said user assigned preferences (Wulff, [Par.0064], "In many supervised classification tasks, such as predicting tunnel failures using a trained classifier (e.g., the model of MLLF module 304), the classifier may generate a probability distribution over the space of labels, rather than a single label or class. For example, in a binary classification task, the prediction might be of the form [0.25, 0.75], meaning that the classifier asserts that there is a 75% chance that the test sample belongs to the second class/label (e.g., label '1') and only a 25% that the sample belongs to the first class (e.g., label '0'). In various embodiments, this probabilistic output can be transformed into a hard class assignment by applying a decision threshold to the probabilities. For example, if the classifier has a decision threshold of 60% and the prediction is of the form [0.25, 0.75], the classifier may assign the sample to class/label as the probability of the sample belonging to this class exceeds the threshold (i.e., 75%>60%). Conversely, if the decision threshold is set to 88%, the sample may be assigned to class/label 0.");

and said plurality of objectives in conflict with one another using said pair of vectors (Wulff, [Par.0104], "In this case, the MFFP metric represents that maximum recall computed for the tunnel. Said differently, the maximum recall represents the maximum percentage of failures that can be forecasted with high precision, where high precision is at least equal to a given value. The MFFP may be expressed as (R, P) where R is the maximum recall for precision P>P.sub.Min. For example, if MFFP=(0.3, 0.9) this means that the classifier for the tunnel is capable of forecasting 30% of tunnel failures with at least 90% precision.").

Wee, Fang and Wulff are analogous arts because they have the same field of endeavor of generating a plurality of machine learning models. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the machine learning model pipeline, as taught by Wee, to include assigning a plurality of tradeoffs with a pair of vectors having a first vector representing a first objective and a second vector representing a second objective, wherein the first objective is a preferred objective compared to the second objective and each objective is analyzed by a plurality of user defined preferences and an associated weight assigned to each of said user assigned preferences, and said plurality of objectives in conflict with one another using said pair of vectors, as taught by Wulff. The modification would have been obvious because one of ordinary skill in the art would be motivated to optimize the precision-recall tradeoff (Wulff, [Par.0075], "In various embodiments, decision threshold adjuster 606 of MLFF module 304 may take as input the mapping of threshold values and label comparisons produced by performance evaluation module 604 (e.g., the precision recall curve information) and dynamically sets the decision threshold of the classifier in machine learning model(s) 602, to optimize its precision-recall tradeoff. More formally, decision threshold adjuster 606 may dynamically adapt the decision threshold D.sub.C,i of classifier C in model(s) 602 for tunnel T.sub.i, based on its performance metrics computed by performance evaluation module 604.").
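The threshold-selection strategy cited from Wulff [0076] — sweep the decision threshold, then take the one giving maximum recall subject to a minimum precision — can be sketched as below. The scores, labels, and the `p_min` value are illustrative assumptions, not data from the reference.

```python
# Hypothetical sketch of Wulff's [0076] strategy: build the
# precision-recall mapping over candidate decision thresholds, then
# pick the threshold with maximum recall whose precision >= p_min.
# All sample scores and labels here are made up for illustration.

def pr_at(threshold, scores, labels):
    """Precision and recall of 'score >= threshold' against 0/1 labels."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def pick_threshold(scores, labels, p_min):
    """Maximum-recall threshold satisfying precision >= p_min."""
    best_t, best_r = None, -1.0
    for t in sorted(set(scores)):
        p, r = pr_at(t, scores, labels)
        if p >= p_min and r > best_r:
            best_t, best_r = t, r
    return best_t, best_r

scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3]  # classifier probabilities
labels = [1,    1,   1,   0,   1,   0,   0]     # did a failure occur?
t, r = pick_threshold(scores, labels, p_min=0.75)
```

This mirrors how the claimed "preferred objective" works in Wulff's framing: precision is constrained first, and recall is maximized only within that constraint.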
Regarding claim 2, Wee teaches the method of claim 1, further including generating a plurality of additional tradeoffs in relation to the objectives (Wee, [Par.0055-0061], "In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller. Furthermore, subcontrollers include at least two types of subcontrollers, the model predictive subcontroller 103 and the learned subcontroller 102 (hereinafter first type subcontroller and second type subcontroller). The first type subcontroller is an optimization-based subcontroller which optimizes an objective function that is a cost function to be minimized for calculating actions and outputs a control action. The second type subcontroller is a prediction-subcontroller which predicts based on machine learning models and outputs a predicted action… The classifier/combiner 105 can then be chosen so as to maximize the predictive and/or control performance of the subcontrollers based on different performance criteria such as obstacle avoidance, fuel consumption and comfort level. The final control action can be obtained using ensemble methods or using weights based on the relative importance that can be related to past performance.").
Regarding claim 3, Wee teaches the method of claim 1, further including defining the objectives, the plurality of tradeoffs, and the one or more machine learning models (Wee, [Par.0055-0061], "In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller. Furthermore, subcontrollers include at least two types of subcontrollers, the model predictive subcontroller 103 and the learned subcontroller 102 (hereinafter first type subcontroller and second type subcontroller). The first type subcontroller is an optimization-based subcontroller which optimizes an objective function that is a cost function to be minimized for calculating actions and outputs a control action. The second type subcontroller is a prediction-subcontroller which predicts based on machine learning models and outputs a predicted action… The classifier/combiner 105 can then be chosen so as to maximize the predictive and/or control performance of the subcontrollers based on different performance criteria such as obstacle avoidance, fuel consumption and comfort level. The final control action can be obtained using ensemble methods or using weights based on the relative importance that can be related to past performance."), wherein an objective includes one or more performance objectives of the one or more machine learning models (Wee, [Par.0038], "Specifically, the model predictive subcontroller 103 optimizes an objective function that is a cost function to be minimized for calculating control actions. That is, the objective function to be optimized refers to the cost function that is minimized for calculating control actions in the model predictive subcontrollers 103. The objective function may be a weighted sum of terms that represent different performance measures, such as distance to target state or change in input. In the autonomous driving example, this is the sum of terms relating to distance to target location, change in acceleration and steering, comfort, or energy consumption."), and a tradeoff includes one or more objectives that are replaced with one or more alternative objectives (Wee, [Par.0016-0017], "An ensemble control method according to the present invention is an ensemble control method which combines different types of plant control, the ensemble control method includes: optimizing an objective function that is a cost function to be minimized for calculating actions and outputting a control action; predicting based on machine learning models and outputting a predicted action; and combining or switching actions to maximize prediction or control performance as best control action based on the output actions.").

Regarding claim 5, Wee teaches the method of claim 1, further including assigning weighted values to each of the objectives (Wee, [Par.0038], "Specifically, the model predictive subcontroller 103 optimizes an objective function that is a cost function to be minimized for calculating control actions. That is, the objective function to be optimized refers to the cost function that is minimized for calculating control actions in the model predictive subcontrollers 103. The objective function may be a weighted sum of terms that represent different performance measures, such as distance to target state or change in input. In the autonomous driving example, this is the sum of terms relating to distance to target location, change in acceleration and steering, comfort, or energy consumption.").
Regarding claim 6, Wee teaches the method of claim 1, further including switching one or more of the objectives with one or more alternative objectives based on one or more of the plurality of tradeoffs for generating the one or more instantiated machine learning pipelines (Wee, [Par.0038], "Specifically, the model predictive subcontroller 103 optimizes an objective function that is a cost function to be minimized for calculating control actions. That is, the objective function to be optimized refers to the cost function that is minimized for calculating control actions in the model predictive subcontrollers 103. The objective function may be a weighted sum of terms that represent different performance measures, such as distance to target state or change in input. In the autonomous driving example, this is the sum of terms relating to distance to target location, change in acceleration and steering, comfort, or energy consumption." and [Par.0055], "In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller.").

Regarding claim 7, Wee teaches the method of claim 1, further including determining a first instantiated machine learning pipeline is preferred compared to a second instantiated machine learning pipeline based on the plurality of tradeoffs and objectives (Wee, [Par.0043-0044], "The classifier/combiner 105 may decide the best control operation to actuate by comparing the values of certain performance measures on which the input actions returned by the subcontrollers are evaluated, such as distance to surrounding objects, comfort level, safety and energy consumption, and choosing the action that minimizes a weighted sum of the said performance measures. [0044] Also, similar to ensemble methods in machine learning, depending on the scenario and the nature of the control actions, the classifier/combiner 105 may also decide the best control operation from the outputs of the subcontrollers by voting, if for categorical actions, or by averaging, for numerical actions. The quality of resulting new actions from such approaches can also be evaluated using the performance measures described above and can be compared to the individual outputs of the subcontrollers if desired." and [Par.0055], "In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller." Examiner's note: the output actions of the subcontrollers are compared and the best action is chosen.).

Claims 8-10 and 12-14 are rejected for the same reasons as claims 1-3 and 5-7, since these claims recite the same limitations. Additionally, Wee further teaches the additional limitations of these claims, a system for generating machine learning pipelines in a computing environment, comprising one or more computers with executable instructions that when executed cause the system to (Wee, [Par.0017], "A ensemble control program according to the present invention is an ensemble control program mounted on a computer which combines different types of plant control, the program causing the computer to perform:").
Regarding claim 11, Wee teaches the method of claim 8, further including assigning a tradeoff with a pair of vectors having a first vector representing a first objective and a second vector representing a second objective, wherein the first objective is a preferred objective compared to the second objective (Wee, [Par.0043-0045], “The classifier/combiner 105 may decide the best control operation to actuate by comparing the values of certain performance measures on which the input actions returned by the subcontrollers are evaluated, such as distance to surrounding objects, comfort level, safety and energy consumption, and choosing the action that minimizes a weighted sum of the said performance measures. [0044] Also, similar to ensemble methods in machine learning, depending on the scenario and the nature of the control actions, the classifier/combiner 105 may also decide the best control operation from the outputs of the subcontrollers by voting, if for categorical actions, or by averaging, for numerical actions. The quality of resulting new actions from such approaches can also be evaluated using the performance measures described above and can be compared to the individual outputs of the subcontrollers if desired. [0045] Moreover, the classifier/combiner 105 may keep the historical performance of each subcontroller in different kinds of control scenarios (such as driving maneuvers) assuming that the control actions obtained by each have been realized. This allows establishing confidence levels regarding the use of input actions from specific subcontrollers, and helps identification of poorly performing subcontrollers which might be removed or retrained.” Examiner’s note: each sub-controller’s performance measure (predicted action) is associated with a particular value, and the system compares the values of each predicted action to choose the best action.).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wee et al. (Pub. No.
US 2020/0249637, hereinafter Wee) in view of Fang et al. (Pub. No. US 20220067752, hereinafter Fang), further in view of Wulff et al. (Pub. No. US 20200342346, hereinafter Wulff), and further in view of Oles et al. (Pat. No. US 6571225, hereinafter Oles).

Regarding claim 4, Wee teaches the method of claim 1, but it does not teach wherein said pair of tradeoffs vectors are further defined as u and v and vector u is preferred to vector v, together with a weight vector w defining a preference relation over utility k vector in R. On the other hand, Oles teaches wherein said pair of tradeoffs vectors are further defined as u and v and vector u is preferred to vector v, together with a weight vector w defining a preference relation over utility k vector in R (Oles, [Col. 1-25], “With the measures of precision and recall in mind, and having obtained a weight vector w and a corresponding threshold t=0 by the procedure described above, one may optionally consider next the effect of varying t while holding the previously obtained w fixed. One should keep in mind the geometric intuition: w determines only the slope of the classifying hyperplane, and then t determines its precise position, effectively selecting exactly one hyperplane out of the set of parallel hyperplanes that are orthogonal to w. By varying t in this way, in many cases one can trade precision for recall, and vice versa. More precisely, on the training set one can evaluate the precision and recall of the categorization rule w.sup.T x.gtoreq.t for a variety of values of t, choosing the one that gives the best balance between precision and recall as determined by the practical problem at hand.”). Wee, Fang, Wulff, and Oles are analogous art because they share the same field of endeavor of generating a plurality of machine learning models.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the machine learning model pipeline, as taught by Wee, to include wherein said pair of tradeoffs vectors are further defined as u and v and vector u is preferred to vector v, together with a weight vector w defining a preference relation over utility k vector in R, as taught by Oles. The modification would have been obvious because one of ordinary skill in the art would be motivated to choose the one that gives the best balance between precision and recall as determined by the practical problem at hand (Oles, [Col. 1-25], “With the measures of precision and recall in mind, and having obtained a weight vector w and a corresponding threshold t=0 by the procedure described above, one may optionally consider next the effect of varying t while holding the previously obtained w fixed. One should keep in mind the geometric intuition: w determines only the slope of the classifying hyperplane, and then t determines its precise position, effectively selecting exactly one hyperplane out of the set of parallel hyperplanes that are orthogonal to w. By varying t in this way, in many cases one can trade precision for recall, and vice versa. More precisely, on the training set one can evaluate the precision and recall of the categorization rule w.sup.T x.gtoreq.t for a variety of values of t, choosing the one that gives the best balance between precision and recall as determined by the practical problem at hand.”).

Claims 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wee et al. (Pub. No. US 2020/0249637, hereinafter Wee) in view of Wulff et al. (Pub. No. US 20200342346, hereinafter Wulff), further in view of Achin et al. (Pat. No. US 10496927, hereinafter Achin), and further in view of Eberhardt et al. (Pub. No. US 20110082712, hereinafter Eberhardt).
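The Oles passage describes fixing the slope w of the rule w·x ≥ t and sweeping the threshold t to trade precision for recall; the claimed weight-vector preference relation compares a pair of utility vectors u and v through w. Both ideas can be sketched as follows. This is a hypothetical illustration: F1 is used here as one possible "balance" criterion, which the reference does not prescribe, and all names are illustrative.

```python
# Sketch of an Oles-style precision/recall tradeoff plus a
# weight-vector preference relation over utility vectors.

def preferred(u, v, w):
    """u is preferred to v under weight vector w when its weighted
    utility is at least as large (one simple reading of the claim)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(w, u) >= dot(w, v)

def best_threshold(scores, labels, thresholds):
    """With the classifier scores (w.x) fixed, sweep the threshold t of
    the rule score >= t and return (t, precision, recall) maximizing F1
    on the training data."""
    best = None
    for t in thresholds:
        preds = [s >= t for s in scores]
        tp = sum(1 for p, y in zip(preds, labels) if p and y)
        fp = sum(1 for p, y in zip(preds, labels) if p and not y)
        fn = sum(1 for p, y in zip(preds, labels) if not p and y)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        if best is None or f1 > best[0]:
            best = (f1, t, prec, rec)
    return best[1:]
```

On toy scores `[0.9, 0.8, 0.4, 0.2]` with labels `[1, 1, 0, 0]`, the sweep over thresholds `[0.1, 0.5]` picks t = 0.5, where precision and recall are both 1.0.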
Regarding claim 15, Wee teaches a computer program product for generating machine learning pipelines in a computing environment, the computer program product comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising (Wee, [Par.0047], “The predictors 101, the subcontrollers 120 (more specifically, learned subcontrollers 102, model predictive subcontroller(s) 103, alternative subcontroller(s) 104), and the classifier/combiner 105 are each implemented by a CPU of a computer that operates in accordance with a program (ensemble control program). For example, the program may be stored in a storage unit (not shown) included in the ensemble control system, and the CPU may read the program and operate as the predictors 101, the subcontrollers 120 (more specifically, learned subcontrollers 102, model predictive subcontroller(s) 103, alternative subcontroller(s) 104), and the classifier/combiner 105 in accordance with the program.”); and program instructions to determine a first instantiated machine learning pipeline is preferred compared to a second instantiated machine learning pipeline based on the plurality of tradeoffs and objectives (Wee, [Par.0043-0044], “the classifier/combiner 105 may decide the best control operation to actuate by comparing the values of certain performance measures on which the input actions returned by the subcontrollers are evaluated, such as distance to surrounding objects, comfort level, safety and energy consumption, and choosing the action that minimizes a weighted sum of the said performance measures. [0044] Also, similar to ensemble methods in machine learning, depending on the scenario and the nature of the control actions, the classifier/combiner 105 may also decide the best control operation from the outputs of the subcontrollers by voting, if for categorical actions, or by averaging, for numerical actions.
The quality of resulting new actions from such approaches can also be evaluated using the performance measures described above and can be compared to the individual outputs of the subcontrollers if desired.” and [Par.0055], “In this manner, in the present exemplary embodiment, each of the subcontrollers outputs action for the plant control based on a prediction result by predictors 101; and the classifier/combiner 105 combines or switches actions to maximize prediction or control performance as best control action based on the actions output by each subcontroller.” Examiner’s note: the output actions of the sub-controllers are compared and the best action is chosen.); wherein the objectives are in conflict with one another (Wee, [Par.0017], “A ensemble control program according to the present invention is an ensemble control program mounted on a computer which combines different types of plant control, the program causing the computer to perform: an optimizing process of optimizing an objective function that is a cost function to be minimized for calculating actions and outputting a control action; a predicting process of predicting based on machine learning models and outputting a predicted action, and a combining or switching process of combining or switching actions to maximize prediction or control performance as best control action based on the output actions.” Examiner’s note: the action is switched to select the best action where the objectives conflict with one another, as can be seen at [Par.0061], “The classifier/combiner 105 can then be chosen so as to maximize the predictive and/or control performance of the subcontrollers based on different performance criteria such as obstacle avoidance, fuel consumption and comfort level. The final control action can be obtained using ensemble methods or using weights based on the relative importance that can be related to past performance.”).
However, Wee does not teach program instructions to incrementally allocate time series data from a time series data set for testing by one or more candidate machine learning pipelines based on seasonality or a degree of temporal dependence of the time series data; program instructions to provide intermediate evaluation scores by each of the one or more candidate machine learning pipelines following each time series data allocation; or program instructions to automatically select one or more machine learning pipelines from a ranked list of the one or more candidate machine learning pipelines based on a projected learning curve generated from the intermediate evaluation scores. On the other hand, Achin teaches program instructions to incrementally allocate time series data from a time series data set for testing by one or more candidate machine learning pipelines based on seasonality or a degree of temporal dependence of the time series data (Achin, [Col. 3, lines 50-67 and Col. 4, lines 1-20], “In general, one innovative aspect of the subject matter described in this specification can be embodied in a predictive modeling method including performing a predictive modeling procedure, including: (a) obtaining time-series data including one or more data sets, wherein each data set includes a plurality of observations, wherein each observation includes (1) an indication of a time associated with the observation and (2) respective values of one or more variables; (b) determining a time interval of the time-series data; (c) identifying one or more of the variables as targets, and identifying zero or more other variables as features; (d) determining a forecast range and a skip range associated with a prediction problem represented by the time-series data, wherein the forecast range indicates a duration of a period for which values of the targets are to be predicted, and wherein the skip range indicates a temporal lag between a time associated with an earliest prediction in the
forecast range and a time associated with a latest observation upon which predictions in the forecast range are to be based; (e) generating training data from the time-series data, wherein the training data include a first subset of the observations of at least one of the data sets, wherein the first subset of the observations includes training-input and training-output collections of the observations, wherein the times associated with the observations in the training-input and training-output collections correspond, respectively, to a training-input time range and a training-output time range, wherein the skip range separates an end of the training-input time range from a beginning of the training-output time range, and wherein a duration of the training-output time range is at least as long as the forecast range; (f) generating testing data from the time-series data, wherein the testing data include a second subset of the observations of at least one of the data sets, wherein the second subset of the observations includes testing-input and testing-validation collections of the observations, wherein the times associated with the observations in the testing-input and testing-validation collections correspond,”); program instructions to provide intermediate evaluation scores by each of the one or more candidate machine learning pipelines following each time series data allocation (Achin, page 20, column 11, lines 18-37, “a method including: (a) performing a plurality of predictive modeling procedures, wherein each of the predictive modeling procedures is associated with a predictive model, and wherein performing each modeling procedure includes fitting the associated predictive model to an initial dataset representing an initial prediction problem; (b) determining a first respective accuracy score of each of the fitted predictive models, wherein the first accuracy score of each fitted model represents an accuracy with which the fitted model predicts one or
more outcomes of the initial prediction problem; (c) … generating a modified dataset …; (d) determining a second respective accuracy score of each of the fitted predictive models, wherein the second accuracy score of each fitted model represents an accuracy with which the fitted model predicts one or more outcomes of the modified prediction problem”). Wee and Achin are analogous art because they share the same field of endeavor of generating machine learning models.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the machine learning model pipeline, as taught by Wee, to include the program instructions to incrementally allocate time series data from a time series data set for testing by one or more candidate machine learning pipelines based on seasonality or a degree of temporal dependence of the time series data, and the program instructions to provide intermediate evaluation scores by each of the one or more candidate machine learning pipelines following each time series data allocation, as taught by Achin. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the performance of the predictive model (Achin, [Col. 32, lines 66-67 and Col. 33, lines 1-10], “in some embodiments, a user may select one or more modeling procedures to be executed. The user-selected procedures may be executed in addition to or in lieu of one or more modeling procedures selected by exploration engine 110. Allowing the users to select modeling procedures for execution may improve the performance of predictive modeling system 100, particularly in scenarios where a data analyst's intuition and experience indicate that the modeling system 100 has not accurately estimated a modeling procedure's suitability for a prediction problem.”).
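The claimed incremental allocation (feed each candidate pipeline growing slices of a time series, score it after each allocation, and rank candidates by a projected learning curve) can be sketched as follows. The slicing scheme and the simple linear projection are assumptions for illustration only; neither the claims nor the cited Achin passages prescribe these particular choices.

```python
# Hypothetical sketch: incremental time-series allocation with
# intermediate scores and a naive learning-curve projection.

def intermediate_scores(evaluate, series, n_allocations):
    """Call evaluate() on progressively larger prefixes of the series,
    returning one intermediate score per allocation."""
    step = len(series) // n_allocations
    return [evaluate(series[: step * (i + 1)]) for i in range(n_allocations)]

def projected_score(scores):
    """Naive learning-curve projection: extend the trend of the last
    two intermediate scores one allocation further."""
    if len(scores) < 2:
        return scores[-1]
    return scores[-1] + (scores[-1] - scores[-2])

def rank_pipelines(score_lists):
    """Rank candidate pipeline indices by projected score, best first."""
    return sorted(range(len(score_lists)),
                  key=lambda i: projected_score(score_lists[i]),
                  reverse=True)
```

A real system would fit a proper learning-curve model rather than extrapolating linearly from two points, but the shape of the computation (allocate, score, project, rank) is the same.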
However, neither Wee nor Achin teaches program instructions to automatically select one or more machine learning pipelines from a ranked list of the one or more candidate machine learning pipelines based on a projected learning curve generated from the intermediate evaluation scores. On the other hand, Eberhardt teaches program instructions to automatically select one or more machine learning pipelines from a ranked list of the one or more candidate machine learning pipelines based on a projected learning curve generated from the intermediate evaluation scores (Eberhardt, [Par.0037, 0041-0042], “[0037] Machine learning algorithms allow computer to learn dynamically from source data that can reside in a file, a database, or a data warehouse. The machine learning algorithm automatically detects, evaluates, and promotes significant relationships between variables without the need for human interaction using a scoring algorithm intended to optimize the network for robustness, thus learning information structure natively from data without prior specification by the operator. This allows for the processing of vast amounts of complex data quickly and easily into a tractable BBN.” and “[0041] FIG. 4 is a flow diagram illustrating a method for creating a BBN model according to an alternative embodiment of the invention. For example, method 400 may be performed as part of operations involved in blocks 302-304 of FIG. 3. Referring to FIG. 4, at block 401, distributions of discrete states are calculated for categorical and continuous variables to be incorporated in the machine-learned BBN network. At block 402, preliminary modeling is performed on the variables to identify appropriate machine learning parameters and data quality issues, optionally generating a first BBN model. At block 403, global modeling is performed to set appropriate machine learning parameters, prune attributes, and observe global data structures, optionally generating a second BBN model.
At block 404, naive modeling is performed to observe contribution of individual variables, optionally generating a third BBN model. At block 405, focused modeling is performed on subsets of variables identified above, generating a fourth BBN model. Throughout the process, k-fold cross-validation is used to assist in feature selection. At block 406, all BBN models are scored and the one with the highest score is selected as the final candidate.” Examiner’s note: the BBN models are scored, which corresponds to the ranked list of machine learning models, and the machine learning model with the highest score is selected.). Wee, Achin, and Eberhardt are analogous art because they share the same field of endeavor of generating machine learning models.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the machine learning model pipeline, as taught by Wee, to include the program instructions to automatically select one or more machine learning pipelines from a ranked list of the one or more candidate machine learning pipelines based on a projected learning curve generated from the intermediate evaluation scores, as taught by Eberhardt. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve performance (Eberhardt, [Par.0063], “In one embodiment, machine learning is used to calculate prior probabilities and identify the structure of the BBN. Prior probabilities are derived from the data to be modeled by calculating distributions of discrete states for categorical variables or using binning to convert continuous variables into categorical variables. A heuristic search method is used to generate hypothetical models with different conditional independence assumptions in order to identify the best model structure.
The heuristic search method used in this study benefits from at least two proprietary advances, one that uses a more efficient caching and query system that allows individuals to consider an order of magnitude more data, the second being a very efficient search architecture that provides additional flexibility in searching for the optimal model structure. These improvements have been shown to perform 1%-5% better than a standard heuristic algorithm in terms of model quality score.”).

Claims 16-19 are rejected for the same reasons as claims 1-6, since these claims recite the same limitations.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EM N TRIEU, whose telephone number is (571) 272-5747. The examiner can normally be reached Mon-Fri from 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.T./
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Oct 12, 2021: Application Filed
Nov 07, 2024: Non-Final Rejection — §103
Feb 17, 2025: Response Filed
Mar 17, 2025: Final Rejection — §103
May 17, 2025: Interview Requested
May 28, 2025: Applicant Interview (Telephonic)
May 28, 2025: Examiner Interview Summary
May 30, 2025: Response after Non-Final Action
Jun 30, 2025: Request for Continued Examination
Jul 03, 2025: Response after Non-Final Action
Oct 09, 2025: Non-Final Rejection — §103
Jan 22, 2026: Response Filed
Feb 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572779: INTERFACE NEURAL NETWORK (granted Mar 10, 2026; 2y 5m to grant)
Patent 12541705: SYSTEM AND METHOD FOR FACILITATING A MACHINE LEARNING MODEL REBUILD (granted Feb 03, 2026; 2y 5m to grant)
Patent 12511531: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM (granted Dec 30, 2025; 2y 5m to grant)
Patent 12493804: METHOD OF BUILDING AND OPERATING DECODING STATUS AND PREDICTION SYSTEM (granted Dec 09, 2025; 2y 5m to grant)
Patent 12493774: NEURAL NETWORK OPERATION MODULE AND METHOD (granted Dec 09, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 48%
With Interview: 53% (+5.0%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 63 resolved cases by this examiner. Grant probability derived from career allow rate.
