Prosecution Insights
Last updated: April 18, 2026
Application No. 18/194,486

SYNTHESIZING ML PIPELINES FOR AUTOMATED PIPELINE RECOMMENDATIONS

Final Rejection §103
Filed
Mar 31, 2023
Examiner
BROPHY, MATTHEW J
Art Unit
2191
Tech Center
2100 — Computer Architecture & Software
Assignee
Fujitsu Limited
OA Round
2 (Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69%, above average (425 granted / 614 resolved; +14.2% vs TC avg)
Interview Lift: +33.5% (grant rate of resolved cases with vs. without interview)
Typical Timeline: 3y 7m average prosecution; 17 applications currently pending
Career History: 631 total applications across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 614 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the amendment filed January 21, 2026. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed January 21, 2026 have been fully considered but they are not persuasive. Applicant's argument regarding the claim rejections under §103 is unpersuasive. Specifically, Applicant's argument regarding the teachings of Saha is unpersuasive: a broad but reasonable interpretation of the claim limitation "extracting, from the set of code files, a plurality of application programming interface (API) methods associated with ML pipeline components" includes isolating APIs within the code snippet to collect all read accesses in their input parameters, as is done in ¶67 of Saha. Such a reading is further supported by Applicant's description of extracting API methods in the disclosure, e.g. Fig. 4, which describes creating an AST used to identify the API methods used to load a dataset and train an ML model. Such an "extracting" method is similar to the method seen in Fig. 8 of Saha, which also constructs an AST to determine the API calls related to the ML model. As such, the Examiner maintains that this limitation is taught or suggested by Saha, and the rejection is maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over "Bansal" (US PG Pub 2021/0097444) in view of "Saha" (US PG Pub 2023/0080439).

Regarding Claim 1, Bansal teaches:

1. A method, executable by a processor, comprising: receiving data that comprises a set of tabular datasets and a set of code files, each of which includes a computer-executable code for a Machine Learning (ML) task; (Bansal e.g. figs. 2, 4, ¶¶19, 26, 30-31, 44 teaches a system for identifying a tabular dataset and a set of algorithms and code for a processing model to be used in an ML pipeline exploration)

generating a task specification corresponding to each tabular dataset of the set of tabular datasets; (Bansal figs. 2-4 and associated text, e.g. ¶¶31-33, teaches creating an ML pipeline exploration corresponding to the submitted tabular dataset for generating an ML pipeline)

determining data type information for features of each tabular dataset of the set of tabular datasets; (Bansal ¶47 teaches generating metadata describing the dataset, including the data type of columns in the dataset)

generating an ML pipeline based on the data type information and the task specification; (Bansal e.g. Fig. 5 and Fig. 9, 910, ¶¶70-78 teaches generating an ML pipeline based on the exploration data and the data type of the identified columns)

obtaining variations of the ML pipeline based on options associated with at least one pipeline component of the ML pipeline; (Bansal e.g. Fig. 5 and Fig. 9, 915-925, ¶¶44-46, 56, 70-78 teaches generating additional variant ML pipelines based on the exploration data and the data type of the identified columns)

generating a database of ML pipelines based on the ML pipeline and the variations for each tabular dataset of the set of tabular datasets; (Bansal e.g. Fig. 5 and Fig. 9, 915-925, ¶¶44-46, 56, 70-78 teaches generating a plurality of pipeline variants and comparing them to determine a recommended ML pipeline)

selecting a set of candidate ML pipelines from the database of ML pipelines based on an optimization approach; (Bansal e.g. Fig. 5 and Fig. 9, 915-925, ¶¶44-46, 56, 70-78 teaches generating a plurality of pipeline variants and comparing them to determine a recommended ML pipeline)

executing the set of candidate ML pipelines to evaluate a performance of each candidate ML pipeline of the set of candidate ML pipelines on test data; (Bansal e.g. Fig. 5 and Fig. 9, 915-925, ¶¶44-46, 56, 70-78 teaches generating a plurality of pipeline variants and comparing them to determine a recommended ML pipeline)

Bansal does not explicitly teach, but Saha teaches:

extracting, from the set of code files, a plurality of application programming interface (API) methods associated with ML pipeline components; (Saha e.g. ¶67 teaches identifying APIs in code snippets associated with ML pipeline components)

and obtaining a training corpus of ML pipelines from the set of evaluated ML pipelines for an ML pipeline recommendation task based on the evaluation. (Saha 308, Fig. 3, ¶62 teaches obtaining a training corpus of ML pipelines and augmenting the database)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 2, Bansal further teaches:

2.
The method according to claim 1, wherein the task specification for each tabular dataset of the set of tabular datasets includes a type of the ML task that is possible to perform using a corresponding tabular dataset and one or more target features of the corresponding tabular dataset that are required for the type of the ML task. (Bansal figs. 2-4 and associated text, e.g. ¶¶31-33; 220 in Fig. 2 and Fig. 4 teach specifying the target feature and available algorithms for the ML pipeline exploration)

Regarding Claim 3, Saha further teaches:

3. The method according to claim 2, wherein the task specification is determined by: selecting one or more code files associated with each tabular dataset of the set of tabular datasets from the set of code files; (Saha e.g. ¶¶24, 93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST, as well as AST analysis for identification of feature program elements)

and performing a static program analysis of the one or more code files to extract the type of the ML task and the one or more target features. (Saha e.g. ¶¶24, 93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST, as well as AST analysis for identification of feature program elements)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 4, Saha further teaches:

4.
The method according to claim 1, wherein the extraction of the plurality of API methods comprises: selecting a code file from the set of code files; (Saha e.g. ¶¶93-94 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)

parsing content of the code file to generate an abstract syntax tree (AST); (Saha e.g. ¶¶93-94 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)

identifying, using the AST, a first API method that is used to load a tabular dataset of the set of tabular datasets and a second API method that is used to train an ML model on the tabular dataset; (Saha e.g. ¶¶93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)

identifying variables used in the second API method; (Saha e.g. ¶¶93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)

collecting an intermediate set of API methods that use at least one of the variables and occur between the first API method and the second API method in the code file; (Saha e.g. ¶¶93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)

and storing parent module names of the first API method, the second API method, and the intermediate set of API methods in a database, wherein the first API method, the second API method, and the intermediate set of API methods are part of the plurality of API methods. (Saha e.g. ¶¶93-95 teaches extraction of APIs from code snippets related to ML pipelines, including forming and parsing an AST)
In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 5, Bansal further teaches:

5. The method according to claim 1, wherein the ML pipeline components include a data pre-processing component, (Bansal 515, Fig. 5) a feature selection component, (510A, Fig. 5, or alternatively 220, Fig. 2) a feature engineering component, (Bansal e.g. ¶44, extracting feature data from the dataset) a model selection component, (Bansal Fig. 4, 405-415) and a model training component. (¶¶40, 44, discussing training in ML pipelines)

Regarding Claim 6, Saha further teaches:

6. The method according to claim 1, further comprising: generating a plurality of templates corresponding to the plurality of API methods; (Saha ¶88, Fig. 7, teaching templates corresponding to the models that correspond to APIs)

selecting a subset of templates from the plurality of templates based on the data type information, the task specification, and content of a corresponding tabular dataset of the set of tabular datasets; (Saha ¶¶88-89, Fig. 7, teaching templates for use in ML models according to parameters, task, and data type information, and generating models for the pipelines)

and generating the ML pipeline based on the subset of templates. (Saha ¶¶88-89, Fig. 7, teaching templates for use in ML models according to parameters, task, and data type information, and generating models for the pipelines)
In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 7, Saha further teaches:

7. The method according to claim 6, wherein each template of the plurality of templates is an API call object with one or more features from the corresponding tabular dataset as an input for the API call object. (See the API call in Saha ¶89)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 8, Saha further teaches:

8. The method according to claim 6, wherein the ML pipeline includes a set of API call objects corresponding to the subset of templates, and the ML pipeline is generated with default options which are different from the options associated with the at least one pipeline component of the ML pipeline. (See Saha's templates annotated with input parameters in ¶89)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)
Regarding Claim 9, Saha further teaches:

9. The method according to claim 1, wherein each of the options corresponds to an optional parameter that is acceptable to an API method of the plurality of API methods, an algorithm that is acceptable to the API method, a choice to skip the API method for the generation of the ML pipeline, or a choice for an ML model for the ML pipeline. (See Saha's templates annotated with input parameters in ¶89)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 10, Saha further teaches:

10. The method according to claim 1, wherein the selection of the set of candidate ML pipelines from the database of ML pipelines is performed iteratively based on an ML metadata model or an optimization search model. (See the iterative search for ML pipeline optimization in Saha ¶¶3, 105, 112)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 16, Saha further teaches:

16. The method according to claim 1, wherein the training corpus of ML pipelines includes at least a subset of the set of evaluated ML pipelines and the training corpus is obtained based on a determination that the performance for each pipeline of the subset is above a threshold performance.
(Saha 308, Fig. 3, ¶62 teaches obtaining a training corpus of ML pipelines and augmenting the database; see further ¶¶58-60, describing comparing the pipeline to a threshold for evaluation.)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 17, Saha further teaches:

17. The method according to claim 1, wherein the database of ML pipelines includes statistical features associated with the set of tabular datasets, learning-based meta features associated with the set of tabular datasets, or hybrid meta-features associated with the set of tabular datasets. (See e.g. the meta-learning model of Saha ¶29)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Saha, as each is directed to ML pipeline development and Saha recognized the need that "current techniques for the automatic generation of ML pipelines may not be able to generate accurate ML pipelines and may require a significant computation time and resources." (¶3)

Regarding Claim 18, Bansal further teaches:

18. The method according to claim 1, further comprising: training a recommendation model for the ML pipeline recommendation task on the training corpus of ML pipelines; (See Bansal generating the ML pipeline model in 905, Fig. 9; ¶¶70-78 teaches generating an ML pipeline, including training on prior pipelines (e.g. ¶48))

receiving, after a deployment of the recommendation model, a new tabular dataset that is different from the set of tabular datasets; (Bansal 920, Fig. 9; ¶¶70-78 teaches generating an additional ML pipeline subsequent to the first pipeline, including training on prior pipelines (e.g. ¶48) and different processing of subsequent pipelines and datasets (e.g. ¶49))

generating an input for the recommendation model based on the new tabular dataset; (Bansal 920, Fig. 9; ¶¶70-78 teaches generating an additional ML pipeline subsequent to the first pipeline, including training on prior pipelines (e.g. ¶48) and different processing of subsequent pipelines and datasets (e.g. ¶49))

feeding the input to the recommendation model; (Bansal 925, Fig. 9; ¶¶70-78 teaches generating an additional ML pipeline subsequent to the first pipeline, including training on prior pipelines (e.g. ¶48) and different processing of subsequent pipelines and datasets (e.g. ¶49))

and generating an ML pipeline recommendation as an output of the recommendation model for the input. (Bansal 930, Fig. 9; ¶¶70-78 teaches generating an additional ML pipeline subsequent to the first pipeline, including training on prior pipelines (e.g. ¶48) and different processing of subsequent pipelines and datasets (e.g. ¶49))

Claims 19 and 20 are rejected on the same basis as claim 1 above.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over "Bansal" (US PG Pub 2021/0097444) in view of "Saha" (US PG Pub 2023/0080439) as applied above, and further in view of "Dolby" (US PG Pub 2023/0059857).

Regarding Claim 11, Bansal et al. teaches the limitations of claim 10 above, but does not further teach, while Dolby teaches:

11. The method according to claim 1, wherein the optimization approach uses a Bayesian Optimization approach.
(See Dolby ¶62, teaching use of a Bayesian optimization approach.)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Dolby, as Dolby provides a system for "detecting and correcting errors in one or more machine learning pipelines." (¶3)

Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over "Bansal" (US PG Pub 2021/0097444) in view of "Saha" (US PG Pub 2023/0080439) as applied above, and further in view of "Ray" (Ray, Susmita. "A quick review of machine learning algorithms." 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). IEEE, 2019.).

Regarding Claim 12, Bansal further teaches:

12. The method according to claim 1, further comprising: training a ...model on records of the database of ML pipelines; (See Bansal generating the ML pipeline model in 905, Fig. 9; ¶¶70-78 teaches generating an ML pipeline, including training on prior pipelines (e.g. ¶48))

and using the optimization approach with the trained ...model to select the set of candidate ML pipelines from a database of ML pipelines. (Bansal 925, Fig. 9; ¶¶70-78 teaches generating an additional ML pipeline subsequent to the first pipeline, including training on prior pipelines (e.g. ¶48) and different processing of subsequent pipelines and datasets (e.g. ¶49); see further the optimizing in ¶¶44-46)

Bansal does not further teach, but Ray teaches:

a posterior distribution model (See the posterior distribution in Ray Sec. VIII, page 37)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Ray, as each is related to machine learning systems and Ray teaches use of a posterior distribution among other techniques in Ray's review of "some of the most widely used machine learning algorithms" (Sec. I, page 35).
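The "posterior distribution model" recited in claim 12, used with an optimization approach to select candidate pipelines from a database of past runs, can be illustrated with a minimal Thompson-sampling sketch. This is not the method of the application or of any cited reference; the pipeline family names, success counts, and the Beta-posterior choice below are illustrative assumptions only.

```python
import random

random.seed(0)  # fixed seed so repeated runs are reproducible

# Hypothetical records: (successes, trials) per pipeline family,
# standing in for evaluated pipelines in the database.
RECORDS = {
    "scaler+logreg": (18, 25),
    "pca+svm":       (9, 25),
    "raw+gbdt":      (20, 25),
}


def select_candidates(records, k=2):
    """Draw one sample from a Beta posterior over each family's
    success rate (uniform prior) and keep the top-k draws."""
    scored = []
    for name, (succ, trials) in records.items():
        # Beta(1 + successes, 1 + failures) posterior.
        sample = random.betavariate(1 + succ, 1 + trials - succ)
        scored.append((sample, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]


print(select_candidates(RECORDS, k=2))
```

Families with better observed performance are sampled near higher rates and are selected more often, while weaker families still get occasional draws, which matches the explore/exploit character of posterior-based candidate selection.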
Regarding Claim 13, Bansal further teaches:

13. The method according to claim 1, further comprising training a plurality of ...models on records of the database of ML pipelines, wherein the set of candidate ML pipelines is selected using the trained plurality of ...models. (Bansal teaches in ¶¶44-46 use of the optimizer model to select pipeline models based on higher performance of certain models and pipelines)

Bansal does not further teach, but Ray teaches:

a posterior distribution model (See the posterior distribution in Ray Sec. VIII, page 37)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Ray, as each is related to machine learning systems and Ray teaches use of a posterior distribution among other techniques in Ray's review of "some of the most widely used machine learning algorithms" (Sec. I, page 35).

Regarding Claim 14, Bansal further teaches:

14. The method according to claim 13, wherein each ...model of the plurality of ...models is trained on features of the records corresponding to a tabular dataset of the set of tabular datasets. (Bansal teaches in ¶¶44-48 use of the optimizer model to select pipeline models based on higher performance of certain models and pipelines, and teaches training of models on user-supplied datasets and identified target features)

Bansal does not further teach, but Ray teaches:

a posterior distribution model (See the posterior distribution in Ray Sec. VIII, page 37)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Ray, as each is related to machine learning systems and Ray teaches use of a posterior distribution among other techniques in Ray's review of "some of the most widely used machine learning algorithms" (Sec. I, page 35).

Regarding Claim 15, Bansal further teaches:

15.
The method according to claim 13, wherein the plurality of ...models are trained to be used for a hierarchical selection of the set of candidate ML pipelines. (Bansal teaches in ¶¶44-46 use of the optimizer model to select pipeline models based on higher performance of certain models and pipelines)

Bansal does not further teach, but Ray teaches:

a posterior distribution model (See the posterior distribution in Ray Sec. VIII, page 37)

In addition, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the application, to combine the teachings of Bansal and Ray, as each is related to machine learning systems and Ray teaches use of a posterior distribution among other techniques in Ray's review of "some of the most widely used machine learning algorithms" (Sec. I, page 35).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art cited in the attached PTO-892 form includes prior art relevant to applicant's disclosure related to ML pipeline optimization techniques.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW J BROPHY, whose telephone number is (571) 270-1642. The examiner can normally be reached Monday-Friday, 9am-4:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Zhen, can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MJB 3/31/2026
/MATTHEW J BROPHY/
Primary Examiner, Art Unit 2191
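The AST-based extraction procedure recited in claim 4 above (parse a code file into an AST, identify the dataset-loading and model-training API calls, then collect the intermediate calls that use a variable also used by the training call) can be sketched in Python. The snippet, the pandas/scikit-learn method names, and the `load_names`/`train_names` parameters are illustrative assumptions, not taken from the application or the cited references.

```python
import ast

# Hypothetical stand-in for one "code file"; API names are illustrative.
SNIPPET = """\
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("train.csv")
X = StandardScaler().fit_transform(df.drop(columns=["y"]))
model = LogisticRegression()
model.fit(X, df["y"])
"""


def extract_api_methods(source, load_names=("read_csv",), train_names=("fit",)):
    """Find the loading call, the training call, and the intermediate
    calls between them that use a variable the training call also uses."""
    tree = ast.parse(source)
    calls = []  # (line, column, method name, variable names in arguments)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "?")
        arg_nodes = list(node.args) + [kw.value for kw in node.keywords]
        variables = {n.id for a in arg_nodes
                     for n in ast.walk(a) if isinstance(n, ast.Name)}
        calls.append((node.lineno, node.col_offset, name, variables))
    calls.sort(key=lambda c: (c[0], c[1]))  # source order
    loader = next(c for c in calls if c[2] in load_names)
    trainer = next(c for c in calls if c[2] in train_names)
    intermediate = [c[2] for c in calls
                    if loader[0] < c[0] < trainer[0] and c[3] & trainer[3]]
    return loader[2], trainer[2], intermediate


print(extract_api_methods(SNIPPET))  # → ('read_csv', 'fit', ['fit_transform'])
```

A fuller version would also resolve and store the parent module of each call (e.g. pandas for `read_csv`), which claim 4 recites; that bookkeeping is omitted here to keep the sketch short.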

Prosecution Timeline

Mar 31, 2023
Application Filed
Oct 17, 2025
Non-Final Rejection — §103
Jan 21, 2026
Response Filed
Mar 31, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585464
APPLICATION MATURITY DATA PROCESSING FOR SOFTWARE DEVELOPMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579257
SECURITY APPLIANCE EXTENSION
2y 5m to grant Granted Mar 17, 2026
Patent 12547516
SYSTEMS AND METHODS FOR DYNAMICALLY CONFIGURING A CLIENT APPLICATION
2y 5m to grant Granted Feb 10, 2026
Patent 12542008
CENTER DEVICE AND IN-VEHICLE ELECTRONIC CONTROL DEVICE
2y 5m to grant Granted Feb 03, 2026
Patent 12487901
ADAPTING DATA COLLECTION IN CLINICAL RESEARCH AND DIGITAL THERAPEUTICS
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+33.5%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
