DETAILED ACTION
This action is in response to the claims filed 07 January 2026 for application 17/848,728, filed 24 June 2022. Currently, claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07 January 2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (Autostacker: A Compositional Evolutionary Learning System) in view of Marsden et al. (US 20210209099 A1).
Regarding claim 1, Chen discloses: A non-transitory computer-readable storage medium for storing instructions that when executed by a processor cause the processor to:
generate a generation of stacked machine learning model ensemble pipeline architectures, wherein each of generated stacked machine learning model ensemble pipeline architectures specifies how many layers of machine learning models there are in the architecture, what machine learning models are on each of the layers and what hyperparameter values are specified for the machine learning models (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.” Fig. 1; “• type of each primitive • each model hyperparameter within each primitive • number of layers in each pipeline • number of nodes in each layer” p. 405, Eq. 4, types of hyperparameters);
apply the generation of stacked machine learning model ensemble pipeline architectures to a data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4);
score how well the stacked machine learning model ensemble pipeline architectures in the generation process the data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4); and
repeat at least once:
(1) based on the scores of the stacked machine learning model ensemble pipeline architectures in a most recent generation, select a subset of the stacked machine learning ensemble model pipeline architectures in the previous generation and mutating the stacked machine learning model ensemble pipeline architectures in the previous generation as part of generating a next generation of stacked machine learning model ensemble pipeline architectures (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4), and
(2) score the next generation of stacked machine learning model ensemble pipeline architectures process the data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4),
(3) based on the scores for the next generation of stacked machine learning model ensemble pipeline architectures, determine whether to:
repeat steps (1)-(3) with the next generation being the most recent generation (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4),
select one of stacked machine learning model ensemble pipeline architectures in the next generation that meets an evaluation metric (“Once all pipelines are trained and validated, the first ten pipelines with the highest validation accuracies are selected as the final Autostacker output. This choice is based on the premise that these ten pipelines can provide better baselines for human experts when aiming to solve a specific problem.” P406 §3.4 ¶2).
Chen does not explicitly disclose: and in response to one of the stacked machine learning model ensemble pipeline architectures being selected, send a notification to a user device, the notification identifying the layers of the selected stacked machine learning model ensemble pipeline architecture and the machine learning models in each layer.
However, Marsden teaches: and in response to one of the stacked machine learning model ensemble pipeline architectures being selected, send a notification to a user device, the notification identifying the layers of the selected stacked machine learning model ensemble pipeline architecture and the machine learning models in each layer (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Chen and Marsden are in the same field of endeavor, machine learning pipelines, and are therefore analogous art. Chen discloses a stacked machine learning model ensemble pipeline. Marsden teaches the selection of pipelines and the display of the models within a pipeline. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the stacked ensemble pipeline architecture disclosed by Chen with the notification and model selection capability taught by Marsden, yielding the predictable result of allowing a user to interact with the models in an intuitive manner.
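Examiner's note, for illustration only and not part of the grounds of rejection: the evolutionary search Chen describes at p. 406 (generate N random pipelines, derive N children by one-step mutation and cross-over, evaluate the 2N candidates by cross-validation, and keep the best N as seeds for the next iteration) can be sketched as below. All names are hypothetical, and the placeholder fitness function merely stands in for Chen's cross-validated accuracy.

```python
import random

# Illustrative sketch of the evolutionary search described in Chen, p. 406.
MODELS = ["svm", "mlp", "random_forest", "logistic_regression"]

def random_pipeline(max_layers=3, max_nodes=2):
    # A pipeline is a list of layers; each layer is a list of primitive models.
    n_layers = random.randint(1, max_layers)
    return [[random.choice(MODELS) for _ in range(random.randint(1, max_nodes))]
            for _ in range(n_layers)]

def mutate(pipeline):
    # One-step mutation: randomly replace one primitive in one layer.
    new = [layer[:] for layer in pipeline]
    layer = random.choice(new)
    layer[random.randrange(len(layer))] = random.choice(MODELS)
    return new

def crossover(a, b):
    # Swap layer prefixes/suffixes between two seed pipelines.
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:cut_a] + b[cut_b:] or random_pipeline()

def fitness(pipeline):
    # Placeholder standing in for cross-validated accuracy of the pipeline.
    return sum(len(layer) for layer in pipeline) + random.random()

def autostacker_sketch(n=8, iterations=5):
    seeds = [random_pipeline() for _ in range(n)]
    for _ in range(iterations):
        children = [mutate(p) for p in seeds[: n // 2]]
        children += [crossover(*random.sample(seeds, 2)) for _ in range(n // 2)]
        pool = seeds + children                  # 2N candidate pipelines
        pool.sort(key=fitness, reverse=True)     # evaluate and rank
        seeds = pool[:n]                         # best N become next seeds
    return seeds[:10]                            # top pipelines as final output
```

This sketch corresponds to the cited p. 406 ¶4 procedure; Chen's actual implementation trains and cross-validates each candidate rather than using a placeholder score.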
Regarding claim 2, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the selected one of the stacked machine learning model ensemble pipeline architectures is a best scoring one of the stacked machine learning model ensemble architectures that were scored (“Once all pipelines are trained and validated, the first ten pipelines with the highest validation accuracies are selected as the final Autostacker output. This choice is based on the premise that these ten pipelines can provide better baselines for human experts when aiming to solve a specific problem.” P406 §3.4 ¶2).
Chen does not explicitly disclose: and wherein the processor is further caused to provide a link, file, or message to the user device that provides access to the selected stacked machine learning model ensemble pipeline architecture.
However, Marsden teaches: and wherein the processor is further caused to provide a link, file, or message to the user device that provides access to the selected stacked machine learning model ensemble pipeline architecture (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Regarding claim 3, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein genetic programming is used in the mutating of the stacked machine learning model ensemble pipeline architectures in the previous generation to generate the next generation of stacked machine learning model ensemble pipeline architectures (Algorithm 1 uses an evolutionary algorithm, i.e., genetic programming).
Regarding claim 4, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the instructions when executed further cause the processor to provide access to the selected one of the stacked machine learning model ensemble pipeline architectures in the next generation for processing another data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4).
Regarding claim 5, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the mutating the stacked machine learning model ensemble pipeline architectures in the previous generation to generate a next generation of stacked machine learning model ensemble pipeline architectures comprises modifying a subset of the stacked machine learning model ensemble pipeline architectures in the previous generation (“As can be seen from the flowchart, N completed pipelines are initially generated by randomly selecting the hyperparameters. Next, a one-step mutation is run on the upper half of these pipelines to obtain additional N/2 pipelines, whereby the candidates for mutation are chosen randomly. In the following step, further N/2 pipelines are selected to run the cross-over, resulting in N new pipelines in total.” P406 ¶2, “The one-step mutation results in a random change in one of the hyperparameters in H as in set (4). This change could pertain to, for example, the number of estimators in a Random Forest Classifier, or result in replacing an SVM classifier with a logistic regression classifier.” P406 ¶3).
Regarding claim 6, Chen discloses: The non-transitory computer-readable storage medium of claim 5, wherein the subset comprises stacked machine learning model ensemble pipeline architectures in the previous generation having scores that exceed a threshold (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over.” P406 ¶4).
Regarding claim 7, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the mutating of the stacked machine learning model ensemble pipeline architectures in the previous generation to generate the next generation of stacked machine learning model ensemble pipeline architectures comprises changing what machine learning models are in a layer of at least one of the stacked machine learning model ensemble pipeline architectures in the previous generation (“The one-step mutation results in a random change in one of the hyperparameters in H as in set (4). This change could pertain to, for example, the number of estimators in a Random Forest Classifier, or result in replacing an SVM classifier with a logistic regression classifier.” P406 ¶3).
Regarding claim 8, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the mutating of the stacked machine learning model ensemble pipeline architectures in the previous generation to generate the next generation of stacked machine learning model ensemble pipeline architectures comprises changing how many layers are in at least one of the stacked machine learning model ensemble pipeline architectures in the previous generation (“number of layers in each pipeline” p405 EQ4, note: number of layers is a hyperparameter that varies during mutation).
Regarding claim 9, Chen discloses: The non-transitory computer-readable storage medium of claim 1, wherein the mutating of the stacked machine learning model ensemble pipeline architectures in the previous generation to generate the next generation of stacked machine learning model ensemble pipeline architectures comprises changing at least one hyperparameter for a machine learning model in at least one of the stacked machine learning model ensemble pipeline architectures in the previous generation (“The search algorithm for finding the appropriate hyperparameters is described in the next section.” P405 right column ¶2, Algorithm 1).
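Examiner's note, for illustration only and not part of the grounds of rejection: the one-step mutation Chen describes at p. 406 ¶3 — a single random change to one element of the hyperparameter set H of Eq. 4, i.e., a primitive's type, one of its model hyperparameters, the number of layers, or the number of nodes in a layer — can be sketched as below. The primitive names and value ranges are hypothetical.

```python
import random

# Illustrative sketch of Chen's one-step mutation over the hyperparameter
# set H (Eq. 4). Hypothetical search space of primitives and value ranges.
PRIMITIVES = {"random_forest": {"n_estimators": [10, 50, 100]},
              "svm": {"C": [0.1, 1.0, 10.0]},
              "logistic_regression": {"C": [0.01, 0.1, 1.0]}}

def _random_node():
    name = random.choice(list(PRIMITIVES))
    return (name, {k: random.choice(v) for k, v in PRIMITIVES[name].items()})

def one_step_mutation(pipeline):
    """pipeline: list of layers, each a list of (model_name, params) nodes."""
    new = [[(m, dict(p)) for m, p in layer] for layer in pipeline]
    kind = random.choice(["primitive", "hyperparameter", "layers", "nodes"])
    if kind == "primitive":
        # e.g., replace an SVM classifier with a logistic regression classifier
        layer = random.choice(new)
        layer[random.randrange(len(layer))] = _random_node()
    elif kind == "hyperparameter":
        # e.g., change the number of estimators in a Random Forest Classifier
        layer = random.choice(new)
        name, params = random.choice(layer)
        key = random.choice(list(params))
        params[key] = random.choice(PRIMITIVES[name][key])
    elif kind == "layers" and len(new) > 1:
        new.pop(random.randrange(len(new)))      # vary the number of layers
    else:
        random.choice(new).append(_random_node())  # vary nodes in a layer
    return new
```

Each call changes exactly one aspect of the pipeline, consistent with the cited "random change in one of the hyperparameters in H."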
Regarding claim 10, Chen discloses: A non-transitory computer-readable storage medium for storing instructions that when executed by a processor cause the processor to:
receive as input an indication of what machine learning models may be used in a stacked machine learning model ensemble pipeline architecture (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.” Fig. 1; “• type of each primitive • each model hyperparameter within each primitive • number of layers in each pipeline • number of nodes in each layer” p. 405, Eq. 4, types of hyperparameters);
receive as input an identification of hyperparameters for the machine learning models that may be used in a stacked machine learning model ensemble pipeline architecture (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.” Fig. 1; “• type of each primitive • each model hyperparameter within each primitive • number of layers in each pipeline • number of nodes in each layer” p. 405, Eq. 4, types of hyperparameters);
based on the inputs, generate stacked machine learning model pipeline architectures which contain at least two layers, with each layer including multiple ones of the machine learning models that may be used (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.” Fig. 1; “• type of each primitive • each model hyperparameter within each primitive • number of layers in each pipeline • number of nodes in each layer” p. 405, Eq. 4, types of hyperparameters);
generate possible hyperparameter values for the generated stacked machine learning model pipeline architectures (Algorithm 1, Eq4);
score the generated stacked machine learning model pipeline architectures based on a performance with the generated possible hyperparameter values in processing a data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4);
select one of the generated stacked machine learning model pipeline architectures and a set of generated possible hyperparameter values based on a score associated with each of the generated stacked machine learning model pipeline architectures (“Once all pipelines are trained and validated, the first ten pipelines with the highest validation accuracies are selected as the final Autostacker output. This choice is based on the premise that these ten pipelines can provide better baselines for human experts when aiming to solve a specific problem.” P406 §3.4 ¶2, “Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4).
Chen does not explicitly disclose: and send a notification to a user device, the notification identifying the layers of the selected stacked machine learning model pipeline architecture and the machine learning models in each layer.
Marsden teaches: and send a notification to a user device, the notification identifying the layers of the selected stacked machine learning model pipeline architecture and the machine learning models in each layer (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Regarding claim 11, Chen discloses: The non-transitory computer-readable storage medium of claim 10, wherein the instructions include instructions that when executed by a processor cause the processor to receive as input value ranges for the hyperparameters (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.”, “I and J: These parameters respectively denote the maximum number of layers and the maximum number of nodes corresponding to each layer. • The primitive types: In this work, a dictionary of primitives that serves solely as a search space is provided for brevity. However, additional primitives can be added by the user as needed.” p. 405 ¶3).
Chen does not explicitly disclose: and wherein the processor is further caused to provide a link, file, or message to the user device that provides access to the selected stacked machine learning model pipeline architecture.
Marsden teaches: and wherein the processor is further caused to provide a link, file, or message to the user device that provides access to the selected stacked machine learning model pipeline architecture (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Regarding claim 12, Chen discloses: The non-transitory computer-readable storage medium of claim 10, wherein the generating of the stacked machine learning model pipeline architectures which contain at least two layers comprises generating an object instance for each generated stacked machine learning model pipeline architecture (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4).
Regarding claim 13, Chen discloses: The non-transitory computer-readable storage medium of claim 12, wherein each object instance for each generated stacked machine learning model pipeline architecture includes methods for the machine learning models in each of the generated stacked machine learning model pipeline architectures (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4).
Regarding claim 14, Chen discloses: The non-transitory computer-readable storage medium of claim 13, wherein each object instance for each generated stacked machine learning model pipeline architecture includes generated hyperparameter values for the machine learning models in each of the generated stacked machine learning model pipeline architectures (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” P406 ¶4, Algorithm 1 includes hyperparameter optimization).
Regarding claim 15, Chen discloses: The non-transitory computer-readable storage medium of claim 10, wherein the generating of the stacked machine learning model pipeline architectures comprises using genetic programming to generate generations of the stacked machine learning model pipeline architectures (Algorithm 1).
Regarding claim 16, Chen discloses: The non-transitory computer-readable storage medium of claim 10, wherein the selecting of the one of the generated stacked machine learning model pipeline architectures and a set of generated possible hyperparameter values as best performing comprises selecting an optimal generated stacked machine learning model pipeline architecture with an optimal set of hyperparameter values (“Once all pipelines are trained and validated, the first ten pipelines with the highest validation accuracies are selected as the final Autostacker output. This choice is based on the premise that these ten pipelines can provide better baselines for human experts when aiming to solve a specific problem.” P406 §3.4 ¶2).
Regarding claim 17, Chen discloses: A method performed by a processor of a computing device, comprising, via the processor:
generating stacked machine learning model pipeline architectures which contain at least two layers, with each layer including multiple ones of the machine learning models that may be used (Fig 1, note: users can select model types a priori and can select multiple of the same model per layer);
generating possible hyperparameter values for the generated stacked machine learning model pipeline architectures (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.” Fig. 1; “• type of each primitive • each model hyperparameter within each primitive • number of layers in each pipeline • number of nodes in each layer” p. 405, Eq. 4, types of hyperparameters);
scoring the generated stacked machine learning model pipeline architectures based on a performance with the generated possible hyperparameter values in processing a data set (“Once the above steps are completed, the generated 2N pipelines are trained and evaluated through cross-validation. As a result, N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of mutation and cross-over. Once the seed pipelines are completed, they are subjected to a further one-step mutation and cross-over, followed by evaluation and selection. This process is repeated until all the iterations are executed. It should be noted that the number of iterations M can be specified by the user.” p. 406 ¶4, Fig. 1);
selecting one of the generated stacked machine learning model pipeline architectures and a set of generated possible hyperparameter values based on a score associated with each of the generated stacked machine learning model pipeline architectures (“Once all pipelines are trained and validated, the first ten pipelines with the highest validation accuracies are selected as the final Autostacker output. This choice is based on the premise that these ten pipelines can provide better baselines for human experts when aiming to solve a specific problem.” p. 406 §3.4 ¶2).
Chen does not explicitly disclose: and sending a notification to a user device, the notification identifying the layers of the selected stacked machine learning model pipeline architecture and the machine learning models in each layer.
However, Marsden teaches: and sending a notification to a user device, the notification identifying the layers of the selected stacked machine learning model pipeline architecture and the machine learning models in each layer (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Regarding claim 18, Chen discloses: The method of claim 17, wherein the generating of stacked machine learning model pipeline architectures which contain at least two layers is based on configuration input that specifies what machine learning models may be used in the stacked machine learning model ensemble pipeline architecture (Fig. 1).
Chen does not explicitly disclose: and wherein the method further comprises providing a link, file, or message to the user device that provides access to the selected stacked machine learning model pipeline architecture.
However, Marsden teaches: and wherein the method further comprises providing a link, file, or message to the user device that provides access to the selected stacked machine learning model pipeline architecture (“As seen in FIG. 21, once a model is created within any project the topmost menu includes a MODELS selector which may be selected to bring the user to a models list as depicted by models interface 2600 of FIG. 26. This list includes, for each model, the project with which the model is associated, the date and time the model was generated, the parameters and summary statistics of the model, and the model framework. A deploy selector is shown for each model and may be used to deploy the model. In FIG. 26 the user has already selected the DEPLOY selector for the bottommost model, which is the more accurate road signs model, and a popup notification indicates that the model has been sent to the CI system and that the user may select a VIEW PIPELINE selector to see the pipeline or a CANCEL selector to cancel.” [0200]).
Regarding claim 19, Chen discloses: The method of claim 17, wherein the generating of possible hyperparameter values for the generated stacked machine learning model pipeline architectures is based on configuration information that specifies possible value ranges of the hyperparameters (“Figure 1: A typical pipeline generated by Autostacker. Each column represents a layer, whereas each node in a layer represents a machine learning primitive model (e.g., SVM, MLP). The number of layers and nodes per layer can be specified a priori, or treated as a hyperparameter. A raw dataset is used as input for the first layer. In the subsequent layers, the prediction results from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer serves as input to the next layer.”; “I and J: These parameters respectively denote the maximum number of layers and the maximum number of nodes corresponding to each layer. • The primitive types: In this work, a dictionary of primitives that serves solely as a search space is provided for brevity. However, additional primitives can be added by the user as needed.” p. 405 ¶3).
Regarding claim 20, Chen discloses: The method of claim 17, wherein the generating of the stacked machine learning model pipeline architectures which contain at least two layers comprises applying a mutation operation to a previous generation of stacked machine learning model pipeline architectures with at least two layers to generate another generation of stacked machine learning model pipeline architectures with at least two layers (Algorithm 1, which employs an evolutionary algorithm with mutation and cross-over operations).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC NILSSON whose telephone number is (571)272-5246. The examiner can normally be reached M-F: 7-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC NILSSON/Primary Examiner, Art Unit 2151