Prosecution Insights
Last updated: April 19, 2026
Application No. 17/086,277

SCALABLE DISCOVERY OF LEADERS FROM DYNAMIC COMBINATORIAL SEARCH SPACE USING INCREMENTAL PIPELINE GROWTH APPROACH

Non-Final OA: §101, §103, §112

Filed: Oct 30, 2020
Examiner: SMITH, KEVIN LEE
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)

Grant Probability: 37% (At Risk)
OA Rounds: 5-6
To Grant: 4y 8m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 37% (grants only 37% of cases; 49 granted / 134 resolved; -18.4% vs TC avg)
Interview Lift: +18.0% (strong lift, among resolved cases with interview)
Avg Prosecution: 4y 8m (typical timeline; 45 currently pending)
Total Applications: 179 (career history, across all art units)

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 134 resolved cases.
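The figures above can be cross-checked with a little arithmetic. Treating each "vs TC avg" value as a simple difference (examiner rate minus Tech Center average) is an assumption about how the tool computes its deltas, not a documented method:

```python
# Cross-check of the dashboard figures; treating each "vs TC avg" value as a
# simple difference (rate minus TC average) is an assumption about the tool.
granted, resolved = 49, 134

career_allow_rate = granted / resolved * 100
assert round(career_allow_rate, 1) == 36.6  # displayed as 37%

rates  = {"career": 37.0,  "101": 30.7, "103": 36.4, "102": 10.1, "112": 17.3}
deltas = {"career": -18.4, "101": -9.3, "103": -3.6, "102": -29.9, "112": -22.7}
implied_tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}

# the four statute rows all imply the same Tech Center average
assert all(implied_tc_avg[k] == 40.0 for k in ("101", "102", "103", "112"))

# interview lift: with-interview allowance (55%) minus baseline (37%)
assert 55.0 - 37.0 == 18.0
```

Under that assumption, the four statute-specific rows all imply a Tech Center average near 40%, which would be the average line referenced in the note above.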

Office Action

§101 §103 §112
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05 December 2025 [hereinafter Response] has been entered, where:

Claims 1-4 and 6-20 have been amended.
Claim 5 has been cancelled.
New claim 21 is presented for examination.
Claims 1-4 and 6-21 are pending.
Claims 1-4 and 6-21 are rejected.

Information Disclosure Statement

3. An information disclosure statement was submitted on 10 December 2025. The submission complies with the provisions of 37 CFR 1.97. Accordingly, the Examiner considered the information disclosure statement.

Claim Rejections - 35 U.S.C. § 112

4. The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

5. The rejection of claims 14-16 under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention is WITHDRAWN in view of Applicant's amendments to the claims.

Claim Rejections - 35 U.S.C. § 101

6. 35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

7.
Claims 1-4 and 6-21 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites a method, which is a process and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “[(a)] generating a pipeline graph having a plurality of layers including a feature scaling stage, a feature selection stage, and a classification stage,” “[(b)] iteratively operating a plurality of pipelines through the pipeline graph over multiple rounds on a training dataset to determine a respective plurality of results,” “[(c)] comparing the respective plurality of results to known results based on a predetermined metric,” and “[(d)] identifying one or more leader pipelines based on the comparison.”

The plain meaning of a “pipeline graph” is a visual representation of the stages, steps, and flow of a process or workflow, showing how tasks, data, or materials move from one point to another. The activities of “[(a)] generating a pipeline graph,” “[(b)] iteratively operating,” “[(c)] comparing,” and “[(d)] identifying one or more leader pipelines” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, recite a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)).

The claim recites more details or specifics of the abstract idea of “[(a)] generating a pipeline graph,” where “[(a.1)] each layer of the plurality of layers having one or more machine learning components for performing a predictive modeling task,” and accordingly, is merely more specific to the abstract idea.
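The four claim 1 activities the Examiner labels (a) through (d) describe a search loop over candidate ML pipelines. A minimal sketch, assuming toy stand-in components and a placeholder metric (none of these names come from the application itself):

```python
from itertools import product

# Hypothetical stand-ins for claim 1's (a)-(d) activities; the component
# names and the scoring rule are illustrative assumptions only.

# (a) generate a pipeline graph: one layer per stage, each holding components
graph = {
    "feature_scaling":   ["standard", "minmax"],
    "feature_selection": ["top_k", "variance"],
    "classification":    ["logreg", "tree", "knn"],
}

def evaluate(pipeline, training_dataset, known_results):
    """Placeholder for operating the pipeline on the training dataset and
    scoring it against known results (here a deterministic toy score)."""
    return sum(len(component) for component in pipeline) % 7 / 7

def find_leaders(graph, training_dataset, known_results, n_leaders=2):
    # (b) operate every pipeline path through the graph on the training data
    pipelines = list(product(*graph.values()))
    # (c) compare each pipeline's result to known results via the metric
    scores = {p: evaluate(p, training_dataset, known_results) for p in pipelines}
    # (d) identify the leader pipelines based on the comparison
    return sorted(pipelines, key=scores.get, reverse=True)[:n_leaders]

leaders = find_leaders(graph, training_dataset=None, known_results=None)
```

In the claimed method the leaders of one round then seed the next round of the shrinking search, which is where the "incremental pipeline growth" of the title comes in.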
The plain meaning of “one or more machine learning components” is modular, reusable, and self-contained pieces of code that each perform a specific task in a pipeline graph and that simplify the development, testing, and deployment of ML workflows by breaking down complex tasks into smaller, manageable steps. Thus, “components for performing a predictive modeling task” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process. (MPEP § 2106.04(a)(2) sub III).

The claim also recites more details or specifics of the abstract idea of “[(d)] identifying one or more leader pipelines,” “[(d.1)] wherein among the plurality of pipelines, results of the one or more leader pipelines are closest to the known results based on the parameter-level parallelism,” and accordingly, is merely more specific to the abstract idea. Further, in relation to the activity of “[(b)] iteratively operating,” the claim recites “[(b.1)] at least one pipeline is removed from the pipeline graph . . . based on the respective results of the previous round,” and “[(b.2)] a search space for identification of one or more leader pipelines is reduced based on the removal of the at least one pipeline of the plurality of pipelines within the pipeline graph,” and accordingly, is merely more specific to the abstract idea. Thus, claim 1 recites an abstract idea.

Under Step 2A Prong Two, the abstract idea of claim 1 is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “computer-implemented” method, where instructions to apply the abstract idea on generic computer components (i.e., the computer-implemented method) do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim also recites “[(b) iteratively operating . . . wherein:] . . . [(b.4)] an execution of each of the plurality of pipelines is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of a generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. Thus, claim 1 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim include a “computer-implemented” method, where instructions to apply the abstract idea on generic computer components (i.e., the computer-implemented method) do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites “[(b) iteratively operating . . . wherein:] . . . [(b.4)] an execution of each of the plurality of pipelines is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of a generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites more details or specifics of the additional element of “[(b.4)] an execution . . . accelerated by using,” where “[(b.4.1)] the path-level parallelism includes executing the plurality of extended pipeline paths in parallel,” “[(b.4.2)] the parameter-level parallelism includes executing each of the plurality of extended pipeline paths for different hyperparameter combinations in parallel,” “[(b.4.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter search space,” and “[(b.4.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds,” and accordingly, is merely more specific to the additional element. Therefore, claim 1 is subject-matter ineligible.

Claim 14 recites a method, which is a process and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “[(a)] generating a pipeline graph having a plurality of layers including a feature scaling stage, a feature selection stage, and a classification stage,” “[(c)] in a first round of the multiple rounds, selecting a first portion of the one or more machine learning components of the last layer, the first portion being closest to a known result of the predictive modeling task,” “[(e)] selecting a second portion of the first portion, the second portion being closest to the known result when the tuned set of hyperparameters are applied,” “[(g)] removing, after each round, at least one machine learning component of the last layer that is not included in the selected portion for that round,” “[(h)] adding an additional layer of the plurality of layers,” “[(i)] identifying a plurality of extended pipeline paths using each of the one or more machine learning components of the additional one of the plurality of layers and each of the second portion,” and “[(k)] selecting a third portion of the extended pipeline paths, the third portion being closest to the known result.”

The plain meaning of a “pipeline graph” is a visual representation of the stages, steps, and flow of a process or workflow, showing how tasks, data, or materials move from one point to another, and is thus amenable to a mental process. The activities of “[(a)] generating,” “[(b)] iteratively operating,” “[(c), (e), (k)] . . . selecting,” “[(g)] removing,” “[(h)] adding,” and “[(i)] identifying” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, recite a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)).

The claim recites more details or specifics of the abstract idea of “[(a)] generating a pipeline graph,” where “[(a.1)] each layer of the plurality of layers having one or more machine learning components for performing a predictive modeling task,” and accordingly, is merely more specific to the abstract idea. The plain meaning of “one or more machine learning components” is modular, reusable, and self-contained pieces of code that each perform a specific task in a pipeline graph and that simplify the development, testing, and deployment of ML workflows by breaking down complex tasks into smaller, manageable steps. Thus, “components for performing a predictive modeling task” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process. (MPEP § 2106.04(a)(2) sub III).

The claim also recites more details or specifics of the abstract ideas of “[(c)] in a first round of the multiple rounds, selecting,” where “[(c) . . . selecting a first portion] . . . , [(c.1)] the first portion being closest to a known result of the predictive modeling task,” “[(e)] [selecting a second portion] . . . , [(e.1)] the second portion being closest to the known result when the tuned set of hyperparameters are applied, and [(e.2)] model metrics of the second portion closest to the known results optimize a result,” “[(i) identifying a plurality of extended pipeline paths (i.1) using each of the one or more machine learning components of the additional one of the plurality of layers and each of the second portion],” and “[(k) selecting a third portion] . . . , [(k.1)] the third portion being closest to the known result,” and accordingly, are merely more specific to the abstract idea. Also, the claim recites more details or specifics of the abstract idea of “[(g)] removing,” wherein “[(g.1)] a search space for identification of a plurality of extended pipeline paths is reduced based on the removal of the at least one machine learning component of the one or more machine learning components,” and accordingly, is merely more specific to the abstract idea. Thus, claim 14 recites an abstract idea.

Under Step 2A Prong Two, the abstract idea of claim 14 is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “computer-implemented” method, where instructions to apply the abstract idea on generic computer components (i.e., the computer-implemented method) do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)).
Further, the claim recites “[(b)] iteratively operating a plurality of pipelines through the pipeline graph over multiple rounds,” “[(d)] initiating a first hyperparameter tuning on the first portion to determine a first tuned set of hyperparameters for the first portion of the one or more machine learning components,” “[(f)] initiating a second hyperparameter tuning on the second portion to determine a second tuned set of hyperparameters for the second portion of the one or more machine learning components of the last layer;” “[(j)] operating the plurality of extended pipeline paths on the training dataset with the default hyperparameters for each of the one or more machine learning components of the additional one of the plurality of layers,” and “[(l)] initiating a third hyperparameter tuning on the third portion of the extended pipeline paths to determine a second tuned set of hyperparameters for each of the machine learning components of the additional one of the plurality of layers,” which are activities directed to the use of generic computer components (computer-implemented method) to implement the abstract idea, and which do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)).

Also, the claim recites the additional element of “[(l.1)] an execution of each of the plurality of extended pipeline paths is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application.

The claim recites more details or specifics of the additional element of “[(l.1)] an execution . . . is accelerated,” where “[(l.1.1)] the path-level parallelism includes executing the plurality of extended pipeline paths in parallel, [(l.1.2)] the parameter-level parallelism includes executing each of the plurality of extended pipeline paths for different hyperparameter combinations in parallel, [(l.1.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter search space, and [(l.1.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds,” and accordingly, is merely more specific to the additional element. Thus, claim 14 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “computer-implemented” method, where instructions to apply the abstract idea on generic computer components (i.e., the computer-implemented method) do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)).
Further, the claim recites “[(b)] iteratively operating a plurality of pipelines through the pipeline graph over multiple rounds,” “[(d)] initiating a first hyperparameter tuning on the first portion to determine a first tuned set of hyperparameters for the first portion of the one or more machine learning components,” “[(f)] initiating a second hyperparameter tuning on the second portion to determine a second tuned set of hyperparameters for the second portion of the one or more machine learning components of the last layer;” “[(j)] operating the plurality of extended pipeline paths on the training dataset with the default hyperparameters for each of the one or more machine learning components of the additional one of the plurality of layers,” and “[(l)] initiating a third hyperparameter tuning on the third portion of the extended pipeline paths to determine a second tuned set of hyperparameters for each of the machine learning components of the additional one of the plurality of layers,” which are activities directed to the use of generic computer components (computer-implemented method) to implement the abstract idea, and which do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)).

Also, the claim recites the additional element of “[(l.1)] an execution of each of the plurality of extended pipeline paths is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.

The claim recites more details or specifics of the additional element of “[(l.1)] an execution . . . is accelerated,” where “[(l.1.1)] the path-level parallelism includes executing the plurality of extended pipeline paths in parallel, [(l.1.2)] the parameter-level parallelism includes executing each of the plurality of extended pipeline paths for different hyperparameter combinations in parallel, [(l.1.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter search space, and [(l.1.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds,” and accordingly, is merely more specific to the additional element. Therefore, claim 14 is subject-matter ineligible.

Claim 17 recites a non-transitory computer readable storage medium, which is a product and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “[(a)] generating a pipeline graph having a plurality of layers including a feature scaling stage, a feature selection stage, and a classification stage,” “[(b)] iteratively operating a plurality of pipelines through the pipeline graph over multiple rounds on a training dataset to determine a respective plurality of results,” “[(c)] comparing the respective plurality of results to known results based on a predetermined metric,” and “[(d)] identifying one or more leader pipelines based on the comparison.”

The plain meaning of a “pipeline graph” is a visual representation of the stages, steps, and flow of a process or workflow, showing how tasks, data, or materials move from one point to another.
The activities of “[(a)] generating a pipeline graph,” “[(b)] iteratively operating,” “[(c)] comparing,” and “[(d)] identifying one or more leader pipelines” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, recite a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)).

The claim recites more details or specifics of the abstract idea of “[(a)] generating a pipeline graph,” where “[(a.1)] each layer of the plurality of layers having one or more machine learning components for performing a predictive modeling task,” and accordingly, is merely more specific to the abstract idea. The plain meaning of “one or more machine learning components” is modular, reusable, and self-contained pieces of code that each perform a specific task in a pipeline graph and that simplify the development, testing, and deployment of ML workflows by breaking down complex tasks into smaller, manageable steps. Thus, “components for performing a predictive modeling task” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process. (MPEP § 2106.04(a)(2) sub III).

The claim also recites more details or specifics of the abstract idea of “[(d)] identifying one or more leader pipelines,” “[(d.1)] wherein among the plurality of pipelines, results of the one or more leader pipelines are closest to the known results based on the parameter-level parallelism,” and accordingly, is merely more specific to the abstract idea. Further, in relation to the activity of “[(b)] iteratively operating,” the claim recites “[(b.1)] at least one pipeline is removed from the pipeline graph . . . based on the respective results of the previous round,” and “[(b.2)] a search space for identification of one or more leader pipelines is reduced based on the removal of the at least one pipeline of the plurality of pipelines within the pipeline graph,” and accordingly, is merely more specific to the abstract idea. Thus, claim 17 recites an abstract idea.

Under Step 2A Prong Two, the abstract idea of claim 17 is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “non-transitory computer readable storage medium tangibly embodying a computer readable program code having computer readable instructions that, when executed, cause a computer device to carry out a method in providing computing efficiency of a computing device operating a pipeline execution engine,” where instructions to apply the abstract idea on generic computer components (i.e., the non-transitory computer readable storage medium, a computer device, a pipeline execution engine) do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim also recites “[(b) iteratively operating . . . wherein:] . . . [(b.4)] an execution of each of the plurality of pipelines is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of a generic computer component to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application.

The claim recites more details or specifics of the additional element of “[(b.4)] an execution . . . accelerated by using,” where “[(b.4.1)] the path-level parallelism includes executing the plurality of extended pipeline paths in parallel,” “[(b.4.2)] the parameter-level parallelism includes executing each of the plurality of extended pipeline paths for different hyperparameter combinations in parallel,” “[(b.4.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter search space,” and “[(b.4.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds,” and accordingly, is merely more specific to the additional element. Thus, claim 17 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim include a “non-transitory computer readable storage medium tangibly embodying a computer readable program code having computer readable instructions that, when executed, cause a computer device to carry out a method in providing computing efficiency of a computing device operating a pipeline execution engine,” where instructions to apply the abstract idea on generic computer components (i.e., the non-transitory computer readable storage medium, a computer device, a pipeline execution engine) do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites “[(b) iteratively operating . . . wherein:] . . . [(b.4)] an execution of each of the plurality of pipelines is accelerated by using a path-level parallelism or a parameter-level parallelism,” which is the use of a generic computer component to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites more details or specifics of the additional element of “[(b.4)] an execution . . . accelerated by using,” where “[(b.4.1)] the path-level parallelism includes executing the plurality of extended pipeline paths in parallel,” “[(b.4.2)] the parameter-level parallelism includes executing each of the plurality of extended pipeline paths for different hyperparameter combinations in parallel,” “[(b.4.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter search space,” and “[(b.4.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds,” and accordingly, is merely more specific to the additional element. Therefore, claim 17 is subject-matter ineligible.

Claims 2 and 3 each depend from claim 1. The claims recite more details or specifics of the abstract idea of “[(a)] generating a pipeline graph” (claim 2: “[(a.2)] wherein the pipeline graph is generated from one or more default pipeline graphs for the predictive modeling task;” claim 3: “wherein: [(a.2)] the one or more machine learning components include a no-operation component; and [(a.3)] the training dataset passes without operation when the pipeline includes the no-operation component”), and accordingly, are merely more specific to the abstract idea. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a)-(h)), and thus, the claims recite no more than the abstract idea. Therefore, claims 2 and 3 are subject-matter ineligible.

Claim 4 depends from claim 1. Claim 18 depends from claim 17.
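Before the claim 4 and 18 analysis, it may help to see what claim 3's no-operation component amounts to: when a layer's selected component is the no-op, the training dataset passes through unchanged. A minimal sketch, with purely illustrative component names:

```python
# Sketch of claim 3's no-operation component; when a pipeline includes the
# no-op, the training dataset passes without operation. Names are assumptions.

def no_op(dataset):
    return dataset  # dataset passes through unchanged

def scale(dataset):
    peak = max(dataset)
    return [x / peak for x in dataset]  # stand-in for a real scaling component

def run_pipeline(components, dataset):
    for component in components:
        dataset = component(dataset)
    return dataset

assert run_pipeline([no_op, no_op], [4, 2, 1]) == [4, 2, 1]
assert run_pipeline([no_op, scale], [4, 2, 1]) == [1.0, 0.5, 0.25]
```

A no-op component lets every layer stay formally present in the pipeline graph while being skipped in effect, which keeps the search space uniform across paths.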
The claims further recite an additional element of using the one or more machine learning components of the pipeline graph by applying a set of hyperparameters, (claims 4 and 18: “[(e)] applying a set of hyperparameters to one or more of the selected ones of the one or more machine learning components at each of the plurality of layers”), which is the use of the generic computer component (computer-implemented method, non-transitory computer readable storage medium, computer device) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claims 4 and 18 are subject-matter ineligible.

Claim 6 depends directly or indirectly from claim 1. The claim further recites “[(e)] initially operating the one or more machine learning components at a last layer of the pipeline graph on the training dataset using a default hyperparameter for each of the one or more machine learning components of the last layer,” which is the use of generic computer components (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Therefore, claim 6 is subject-matter ineligible.

Claims 7 and 8 depend directly or indirectly from claim 1.
The claims further recite the limitation of “[(f)] selecting a first portion of the one or more machine learning components of the last layer, the first portion a selection of the one or more machine learning components of the last layer providing leading performance of a predictive model.” The activity of “[(f)] selecting a first portion” is a limitation that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)).

Claim 8 recites more details or specifics of the abstract idea of “[(f)] selecting a first portion,” “[(f.1)] wherein the first portion is about one-half of the one or more machine learning components of the last layer,” and accordingly, is merely more specific to the abstract idea. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a)-(h)), and thus, the claims recite no more than the abstract idea. Therefore, claims 7 and 8 are subject-matter ineligible.

Claim 9 depends directly or indirectly from claim 1. The claim further recites the additional element of “[(g)] initiating a first hyperparameter tuning on the first portion to determine a first tuned set of hyperparameters for each of the first portion of the one or more machine learning components,” which is a use of a generic computer component (computer-implemented method) to implement the abstract idea that does not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. (MPEP § 2106.05(f)).
The claim also recites the limitation of “[(h)] selecting a second portion of the first portion, the second portion providing leading performance of the predictive model,” which is a mental process, (MPEP § 2106.04(a)(2) sub III), and is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Therefore, claim 9 is subject-matter ineligible.

Claim 10 depends directly or indirectly from claim 1. The claim recites more details or specifics of the abstract idea of “[(h)] selecting,” “[(h.1)] wherein the second portion is about one-half of the machine learning components of the first portion,” and accordingly, is merely more specific to the abstract idea. The additional elements of the claim do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a)-(h)), and thus, the claim recites no more than the abstract idea. Therefore, claim 10 is subject-matter ineligible.

Claims 11 and 12 depend directly or indirectly from claim 1. The claims further recite “[(i)] initiating a second hyperparameter tuning on the second portion to determine a second tuned set of hyperparameters for each of the second portion of the one or more machine learning components of the last layer of the pipeline graph,” which is the use of generic computer components (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea.
Claim 12 recites more details or specifics of the additional elements of “[(g)] initiating a first hyperparameter tuning” and “[(i)] initiating a second hyperparameter tuning,” “wherein the first hyperparameter tuning and the second hyperparameter tuning both use a random search based hyperparameter tuning,” and accordingly, is merely more specific to the additional elements. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a)-(h)), and thus, the claims recite no more than the abstract idea. Thus, claims 11 and 12 are subject-matter ineligible.

Claim 13 depends directly or indirectly from claim 1. The claim recites further limitations of “[(j)] adding an additional layer of the plurality of layers,” “[(k)] identifying a plurality of extended pipeline paths using each machine learning component of the one or more machine learning components of the additional layer of the plurality of layers and the second portion,” and “[(m)] selecting a first portion of the extended pipeline paths, the first portion of the plurality of extended pipeline paths providing leading performance for the predictive model.” These activities of “[(j)] adding,” “[(k)] identifying,” and “[(m)] selecting” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process. (MPEP § 2106.04(a)(2) sub III).
The claim also recites “[(l)] operating the plurality of extended pipeline paths on the training dataset with the default hyperparameters for each machine learning component of the one or more machine learning components of each of the plurality of extended pipeline paths,” and “[(n)] initiating a third hyperparameter tuning on the first portion of the plurality of extended pipeline paths to determine a third tuned set of hyperparameters for each machine learning component of the one or more machine learning components of the first portion of the plurality of extended pipeline paths,” which is the use of the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amounts to significantly more than the abstract idea. Therefore, claim 13 is subject-matter ineligible.

Claims 15 and 16 depend from claim 14. Claim 15 recites more details or specifics of the additional elements “[(d)] initiating a first hyperparameter tuning,” and “[(f)] initiating a second hyperparameter tuning,” “wherein the first hyperparameter tuning and the second hyperparameter tuning both use a random search based hyperparameter tuning,” and accordingly, is merely more specific to the additional elements. Claim 16 recites more details or specifics of the abstract idea of “[(c)] selecting a first portion,” “[(c.2)] wherein the first portion is about one-half of the one or more machine learning components of the last layer,” and accordingly, is merely more specific to the abstract idea. The abstract idea of the claim is not integrated into a practical application, (see MPEP § 2106.04(d)), nor does the claim amount to significantly more than the abstract idea, (MPEP § 2106.05), because the claim recites no more than the abstract idea. Therefore, claims 15 and 16 are subject-matter ineligible.

Claim 19 depends from claim 17.
The claim further recites “[(f)] selecting a first portion of the one or more machine learning components of the last layer, the first portion being closest to a known result of the predictive modeling task,” and “[(h)] selecting a second portion of the first portion, the second portion being closest to the known result when the tuned set of hyperparameters are applied.” The activities of “[(f), (h)] selecting” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim also recites additional elements of “[(e)] operating each of the one or more machine learning components at a last layer of the pipeline graph on a training dataset using a default hyperparameter for each of the one or more machine learning components of the last layer,” “[(g)] initiating a first hyperparameter tuning on the first portion to determine a tuned set of hyperparameters for each of the first portion of the one or more machine learning components,” and “[(i)] initiating a second hyperparameter tuning on the second portion to determine a second tuned set of hyperparameters for each of the second portion of the one or more machine learning components of the last layer of the pipeline graph.” These limitations use the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), and do not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Thus, claim 19 is subject-matter ineligible.

Claim 20 depends indirectly from claim 17.
The claim recites further limitations of “[(j)] adding an additional layer of the plurality of layers,” “[(k)] identifying a plurality of extended pipeline paths using each machine learning component of the one or more machine learning components of the additional layer of the plurality of layers and the second portion,” and “[(m)] selecting a third portion of the extended pipeline paths, the third portion being closest to the known result.” These activities of “[(j)] adding,” “[(k)] identifying,” and “[(m)] selecting” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim also recites “[(l)] operating the plurality of extended pipeline paths on the training dataset with the default hyperparameters for each of the one or more machine learning components of the additional one of the plurality of layers,” and “[(n)] initiating a third hyperparameter tuning on the third portion of the extended pipeline paths to determine a second tuned set of hyperparameters for each of the machine learning components of the additional one of the plurality of layers.” These limitations use the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), and do not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Thus, claim 20 is subject-matter ineligible.

Claim 21 depends from claim 1. The claim recites more details or specifics of the abstract idea of “[(c)] comparing . . .
based on a predetermined metric,” “[(c.1)] wherein the predetermined metric corresponds to at least one of a time constraint, a memory constraint, an accuracy, or a precision of a predictive model,” and accordingly, is merely more specific to the abstract idea. Therefore, claim 21 is subject-matter ineligible.

Response to Arguments

8. Examiner has fully considered Applicant’s arguments, and responds below accordingly:

Claim Rejections – 35 U.S.C. § 101

9. Under Step 2A Prong One, Applicant submits “that amended independent claim 1 of the instant application recites features that are performed by a machine (e.g., by a processor) and the human mind is not equipped to perform at least the claimed features of * * * [(b)] iteratively operating a plurality of pipelines through the pipeline graph over multiple rounds, on a training dataset to determine a respective plurality of results . . . : * * * [(b.2)] a search space for identification of one or more leader pipelines is reduced based on the removal of the at least one pipeline of the plurality of pipelines within the pipeline graph, * * * [(b.4)] an execution of each pipeline of the plurality of pipelines is accelerated by using a path-level parallelism or a parameter-level parallelism; [(b.4.1)] the path-level parallelism includes executing the plurality of pipelines in parallel, [(b.4.2)] the parameter-level parallelism includes executing each pipeline of the plurality of pipelines for different hyperparameter combinations in parallel, and [(b.4.3)] the different hyperparameter combinations correspond to a different hyperparameter point within a hyperparameter space, and [(b.4.4)] the parameter-level parallelism reduces a size of the hyperparameter search space after each round of the multiple rounds; * * * [(d) identifying . . .
based on the comparison], [(d.1)] wherein among the plurality of pipelines, results of the one or more leader pipelines are closest to the known results based on the parameter-level parallelism.” [(claim 1, lines 8-9, 13-15, 19-28, 31-34 (emphasis added by Applicant))]. “Therefore, the claimed features are inextricably tied to a machine and cannot be considered as Mental Processes. Therefore, the Applicant respectfully submits that the features of the amended independent claim 1 do not describe an abstract concept, or a concept similar to those found by the Courts to be Abstract. At least for the above-mentioned reasons, the Applicant respectfully submits that the amended independent claim 1 meets standards for patent eligibility under prong one of Step 2A of the 2019 Revised Patent Subject Matter Eligibility Guidance. Therefore, the claims are not directed to the alleged abstract idea.” (Response at pp. 15-16).

Examiner’s Response: Examiner respectfully submits that, for Step 2A Prong One, the rejections identify the abstract idea (i.e., judicial exception) by referring to what is recited (i.e., set forth or described) in the claim and explain why the recited subject matter is considered an abstract idea. (MPEP § 2106.07(a)). For example, the activities of “[(a)] generating a pipeline graph,” “[(b)] iteratively operating,” “[(c)] comparing,” and “[(d)] identifying one or more leader pipelines” are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Further, the plain meaning of a “pipeline graph” is a visual representation of the stages, steps, and flow of a process or workflow, showing how tasks, data, or materials move from one point to another.
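On that plain-meaning reading, a pipeline graph can be pictured as a layered structure whose root-to-leaf paths are the candidate pipelines, which is why the search space is combinatorial. A minimal sketch with hypothetical layer and component names (none taken from the application):

```python
from itertools import product

# Hypothetical layers of machine learning components; each layer holds
# interchangeable components, and a pipeline is one choice per layer.
layers = [
    ["impute_mean", "impute_median"],            # preprocessing layer
    ["scale_standard", "scale_minmax", "none"],  # transformation layer
    ["svm", "random_forest"],                    # estimator (last) layer
]

# Every root-to-leaf path through the graph is a candidate pipeline,
# so the space grows multiplicatively with each added layer.
pipelines = [" -> ".join(path) for path in product(*layers)]
print(len(pipelines))  # 2 * 3 * 2 = 12 candidate pipelines
```

Adding a fourth layer with k components would multiply the count by k, which is the "dynamic combinatorial search space" growth the title refers to.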
Also, the plain meaning of “one or more machine learning components” is modular, reusable, self-contained pieces of code that each perform a specific task in a pipeline graph, simplifying the development, testing, and deployment of ML workflows by breaking complex tasks into smaller, manageable steps. Accordingly, these elements are within the grasp of the human mind, and the rejections comply with the Office guidance for Step 2A Prong One, as set out above in detail.

10. Under Step 2A Prong Two, Applicant submits that “[a]s per Ex parte Desjardins, 2024-000567, Appeals Review Panel Decision dated September 26, 2025, “the Appellant cites, identifies improvements in training the machine learning model itself . . . we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the . . . Specification is to ‘effectively learn new tasks in succession whilst protecting knowledge about previous tasks.’ . . . The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to ‘us[e] less of their storage capacity’ and enables ‘reduced system complexity.’ Id.” (Response at pp. 16-17). In view of Desjardins, Applicant submits that “the present application ensures scalable and efficient discovery of pipeline leaders having the best performance of a predetermined metric, by pruning the search space repeatedly and by parallelly executing the pipelines. Further, the present application describes reducing the size of hyperparameter search space by executing parameter-level parallelism. In this manner, the claimed system reduces the substantial computing resources and time to complete the calculations involved in the discovery. Therefore, the reduction in usage of computing resources and time leads to an improvement in the model executing the pipelines parallelly.” (Response at p. 17 (Applicant referring to Specification at ¶¶ [0009], [0015], [0029], [0031], [0050]-[0058], and [0062])).
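For illustration only, the two forms of parallelism the Applicant points to (path-level per (b.4.1), parameter-level per (b.4.2)) can be sketched with standard-library primitives. The pipeline list, hyperparameter grid, and scoring function below are hypothetical placeholders, not the claimed implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_pipeline(pipeline, hp):
    # Hypothetical placeholder for executing one pipeline at one
    # hyperparameter point and returning a performance score.
    return sum(hp.values()) / (1 + len(pipeline))

pipelines = ["pca->svm", "pca->tree", "scaler->svm"]
hp_space = [{"lr": lr, "depth": d}
            for lr, d in product([0.01, 0.1], [2, 4])]

defaults = {"lr": 0.1, "depth": 2}
with ThreadPoolExecutor() as pool:
    # Path-level parallelism: the pipelines themselves run in parallel.
    path_scores = list(pool.map(lambda p: run_pipeline(p, defaults),
                                pipelines))
    # Parameter-level parallelism: one pipeline runs in parallel over
    # different hyperparameter combinations (points in the space).
    param_scores = list(pool.map(lambda hp: run_pipeline(pipelines[0], hp),
                                 hp_space))

# Mirroring limitation (b.4.4): the hyperparameter space is pruned after
# the round, keeping only the top half of the sampled points.
ranked = sorted(zip(param_scores, hp_space), key=lambda t: t[0],
                reverse=True)
hp_space = [hp for _, hp in ranked[: len(ranked) // 2]]
```

The pruning in the last two lines is what makes each subsequent round cheaper, which is the resource-reduction argument the Applicant advances.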
Examiner’s Response: Examiner respectfully submits that, for Step 2A Prong Two, the rejections identify any additional elements by specifically pointing to claim features/limitations/steps recited in the claim beyond the identified judicial exception, and evaluate the integration of the judicial exception into a practical application by explaining that the claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application using the considerations set forth in MPEP §§ 2106.04(d), 2106.05(a)-(c) and (e)-(h). (MPEP § 2106.07(a)). Under Step 2A Prong Two, the analysis considers the claim as a whole. That is, the limitations containing the abstract idea (i.e., judicial exception) as well as the additional elements in the claim besides the abstract idea must be evaluated together to determine whether the claim integrates the abstract idea into a practical application. (MPEP § 2106.04(d) sub III). The additional elements identified in the claim are generic computer components (computer-implemented method) that are used to implement the abstract idea of the claims. For example, “executing,” “operating,” and “tuning” reflect the use of the generic computer component (computer-implemented method) to implement the abstract idea, (MPEP § 2106.05(f)), which does not integrate the abstract idea into a practical application, nor amounts to significantly more than the abstract idea, as described above in detail. Moreover, such activities are directed to pipeline graph search efficiency against a “known result.” (See Specification ¶ [0011]). The plain meaning of a pipeline graph is that of a visual representation of the stages, steps, and flow of a process or workflow, showing how tasks, data, or materials move from one point to another. For example, for model exploration, the Specification recites “machine learning model exploration includes a dataset 200 on which a pipeline graph 202 operates.
The dataset 200 may be a training dataset, where task results are known and each path through the pipeline graph 202 can provide a prediction that can be compared to a known prediction to determine pipeline leaders. Each path through the pipeline graph 202 may be operated on by one or more model metrics 204 to provide a result 206.” (Specification ¶ [0033]). That parallelism, inter alia, can be used to make the exploration process faster does not, however, necessarily render an abstract idea less abstract. That is, no technological advance or improvement to computer functionality is evident here. Accordingly, the claims are subject-matter ineligible, as set out above in detail.

Claim Rejections – 35 U.S.C. § 103

11. Applicant submits that “the combination of Iyengar and Zhang does not teach or suggest the features ‘at least one pipeline of the plurality of pipelines is removed from the pipeline graph after each round of the multiple rounds, based on the respective plurality of results of a previous round of the multiple rounds,’ as recited in the amended independent claim 1. . . . Furthermore, the Applicant submits that the amended independent claim 17 recites, inter alia, features similar to those recited in the amended independent claim 1. Accordingly, the amended independent claim 17 is also not taught, suggested, or rendered obvious over the combination of Iyengar and Zhang at least for the reasons stated above with regard to the amended independent claim 1.” (Response at pp. 21-23).

Examiner’s Response: Examiner has fully considered Applicant’s arguments and amendments, and in view thereof, WITHDRAWS the rejection under Section 103.

Conclusion

12. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: (US Published Application 2010188512 to Simske et al.)
teaches testing image pipelines, which includes transforming an untransformed captured image multiple times using a plurality of pipelines, thereby generating a plurality of transformed images, and comparing a functional set of metrics associated with each of the plurality of transformed images with a functional set of metrics associated with each image in a ground truth set. From the comparison, a determination is made as to which of the plurality of pipelines generates a transformed image that is functionally closest to an image in the ground truth set.

(Akiba et al., “Optuna: A Next-Generation Hyperparameter Optimization Framework,” arXiv (2019)) teaches new design criteria for next-generation hyperparameter optimization software, including (1) a define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementation of both searching and pruning strategies, and (3) an easy-to-setup, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to light-weight experiments conducted via an interactive interface.

(Luo et al., “MLCask: Efficient Management of Component Evolution in Collaborative Data Analytics Pipelines,” arXiv (17 Oct 2020)) teaches MLCask, which focuses on versioning for the iterative development of ML pipelines instead of only finding a best-so-far pipeline for users.

(Zhang et al., “FLASH: Fast Bayesian Optimization for Data Analytic Pipelines,” arXiv (2016)) teaches Fast LineAr SearcH (FLASH), an efficient method for tuning analytic pipelines. FLASH is a two-layer Bayesian optimization framework, which first uses a parametric model to select promising algorithms, then computes a nonparametric model to fine-tune hyperparameters of the promising algorithms. FLASH also includes an effective caching algorithm which can further accelerate the search process.
Extensive experiments on a number of benchmark datasets have demonstrated that FLASH significantly outperforms previous state-of-the-art methods in both search speed and accuracy.

13. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH, whose telephone number is (571) 272-5964. Normally, the Examiner is available Monday-Thursday, 0730-1730. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.L.S./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Oct 30, 2020
Application Filed
Jan 12, 2024
Non-Final Rejection — §101, §103, §112
Apr 22, 2024
Response Filed
Aug 31, 2024
Final Rejection — §101, §103, §112
Nov 06, 2024
Response after Non-Final Action
Dec 05, 2024
Request for Continued Examination
Dec 12, 2024
Response after Non-Final Action
Feb 18, 2025
Non-Final Rejection — §101, §103, §112
May 03, 2025
Response Filed
Jul 29, 2025
Final Rejection — §101, §103, §112
Sep 23, 2025
Interview Requested
Oct 06, 2025
Response after Non-Final Action
Dec 05, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591815
METHOD AND SYSTEM FOR UPDATING MACHINE LEARNING BASED CLASSIFIERS FOR RECONFIGURABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12585917
REINFORCEMENT LEARNING USING ADVANTAGE ESTIMATES
2y 5m to grant Granted Mar 24, 2026
Patent 12547759
PRIVACY PRESERVING MACHINE LEARNING MODEL TRAINING
2y 5m to grant Granted Feb 10, 2026
Patent 12530613
SYSTEMS AND METHODS FOR PERFORMING QUANTUM EVOLUTION IN QUANTUM COMPUTATION
2y 5m to grant Granted Jan 20, 2026
Patent 12518214
DISTRIBUTED MACHINE LEARNING SYSTEMS INCLUDING GENERATION OF SYNTHETIC DATA
2y 5m to grant Granted Jan 06, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
37%
Grant Probability
55%
With Interview (+18.0%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
