Prosecution Insights
Last updated: April 19, 2026
Application No. 18/012,387

Platform for Automatic Production of Machine Learning Models and Deployment Pipelines

Final Rejection §103

Filed: Dec 22, 2022
Examiner: VU, TUAN A
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Predicted OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 73% — above average (718 granted / 980 resolved; +18.3% vs TC avg)
Interview Lift: strong, +21.4% (resolved cases with interview)
Typical Timeline: 3y 5m avg prosecution; 31 currently pending
Career History: 1011 total applications, across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 980 resolved cases

Office Action

§103
DETAILED ACTION

This action is responsive to the Applicant's response filed 01/13/26. As indicated in Applicant's response, claims 4 and 15 have been cancelled, and claims 21-22 added. Claims 1-3, 5-14, and 16-22 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-7, 10, 12, 14, 16-18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Geist et al., USPubN 2023/0094742 (herein Geist) in view of Dong et al., CN 112906907A (translation), 06-04-2021, 9 pgs (herein Dong), and Sarferaz, USPubN 2021/0241170 (herein Sarferaz).

As per claim 1, Geist discloses a computing system for automatic production of machine learning models and corresponding deployment pipelines, the computing system comprising: one or more processors; and one or more non-transitory computer-readable media (para 0078) that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:

importing a training dataset (format various types of client data to generate input for the particular models - para 0035; client users' ... demands and data analysis requirements, client data ... received from the client ... thereby generating input for subsequent models, normalize the client data ... according to a data specification - para 0005; client data as input data for a first machine learning model - para 0006-0007; training data includes ... sample user instructions - para 0021; user-selectable ... functions or types of training data - para 0022; models ... most relevant to particular types of inputs or user-selectable functions - para 0023-0024) associated with a user;

executing an origination machine learning pipeline to: perform a model architecture search that selects (identifies adequate, available models ... based on the request submitted ... from the client - para 0032; identifies 202 - Fig. 2A; select the ML models from the catalog of models ... establish the pipeline ... needed to service the user's request - para 0021; automatically select and apply one or more ML models on the training data ... user-selectable function, determine and evaluate which ML models are most relevant to ... user-selectable functions - para 0023; select the comparatively best ML models - para 0024) and trains a machine learning model (for training, the ML model ... generate predicted outputs ... training data may indicate the expected output ... of the pipeline builder - para 0023) for the training dataset; and

generate a deployment machine learning pipeline (ML models proposed for the pipeline ... proposed ML to include in the pipeline - para 0024; orchestration server is developing the pipeline of models - para 0026; models to employ in the pipeline - para 0032) for deployment of the machine learning model (establishing an execution pipeline 215 of ML models 218a-218c ... on behalf of a client user - para 0037; Model Execution Pipeline 215 - Fig. 2B).

A) Geist does not explicitly disclose exporting the machine learning model and the deployment machine learning pipeline for deployment of the machine learning model with the deployment machine learning pipeline.
Geist discloses sending candidate ML models, subsequent to their pre-training and search from a database (para 0015), for selection by end-users, and receiving user confirmation as to which of the ML models to run a pipeline thereof (para 0024); thus transmission of pipeline models to an end-user in order to implement a pipeline execution based on an indication of the user is recognized.

Dong discloses a federated learning approach using a DAG model and construction script via a Docker environment to configure and distribute a machine learning pipeline so that each ML model of the pipeline can implement one layer of the ML pipeline (pg. 2-3), the distribution using the OCI standard coupled with a JSON specification manifest (pg. 6) for configuring each layer of the pipeline (pg. 3), the latter using the API command line of a management client in connection with a Docker tool (pg. 6) supporting management of the learning pipeline in constructing a self-defined mirror image and enabling distribution to different modules, each to execute a corresponding work (pg. 4, pg. 7); hence distribution of an ML pipeline from a central Docker API tool using the OCI standard, config manifest, and management-client command line (pg. 7) to manage and distribute the pipeline model so that each layer can execute an ML portion is recognized.

Further, Sarferaz discloses an intelligent developer platform to select ML scenarios from a use case repository whereby ML scenarios and pipelines can be exported to a transport repository for subsequent assembly destined for in-memory customer deployment (para 0098; Fig. 14) or for publishing from a consumer's in-memory database (para 0097); hence use of an import/export API to export intelligent ML content including ML scenarios and pipelines either to the in-memory store of a customer deployment context or the in-memory database of a user publishing context is recognized.
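Purely for illustration of the export/distribution concept discussed above (a minimal sketch; the file names and manifest fields are hypothetical and are not drawn from Geist, Dong, or Sarferaz), exporting a trained model together with a JSON pipeline manifest might look like:

```python
import json
import pickle
from pathlib import Path

def export_pipeline(model, steps, out_dir):
    """Serialize a trained model and write a JSON manifest describing the
    deployment pipeline steps (loosely analogous to an OCI-style spec)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Persist the model artifact (any picklable object stands in here).
    (out / "model.pkl").write_bytes(pickle.dumps(model))
    # Describe each pipeline layer as an ordered step in the manifest.
    manifest = {
        "schemaVersion": 1,
        "model": "model.pkl",
        "pipeline": [{"step": i, "op": s} for i, s in enumerate(steps)],
    }
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Example: a stand-in "model" and a two-step deployment pipeline.
manifest = export_pipeline({"weights": [0.1, 0.9]},
                           ["normalize", "predict"], "exported_pipeline")
```

A receiving environment could then reconstruct the pipeline from the manifest and artifact alone, which is the sense in which export enables deployment "with" the pipeline.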
Therefore, based on the orchestration service in Geist (para 0027) with use of ML models to support client requests and publishing services (para 0016, 0021), it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the client-based adaptation of ML models and generating of a pipeline thereof in Geist so that the resulting ML models and created pipeline model can be subjected to distribution or export to other environments or execution platforms, the export - as in Sarferaz - or the distribution - as in Dong - of ML models and a corresponding pipeline model for deploying the machine learning model with the deployment machine learning pipeline, e.g. at client machines or a user API context; because the orchestration service operative upon user-based search or selection of recommended ML models in Geist is purported to fulfill a client request for the best combination of ML models and orchestration thereof into an efficient pipeline deployment, so that, accordingly, a client servicing platform equipped with export capability or an interface port, by which the ML models and assembled pipeline identified/deemed as most suiting the user/client request can be distributed or sent to the designated client, would fall into the business purpose of Geist's service, which entails a sustained commitment or federated agreement to supply to requesting clients a most optimally fitting set of ML scenarios and/or a representative pipeline model - as in Sarferaz or Dong - with which the client or users, upon receipt, can further enhance orchestration or customize their version of ML tasks.

As per claim 3, Geist discloses the computing system of claim 1, wherein the operations further comprise: receiving a problem statement from the user (request can include the instructions, ... client data 221 ... desired operations or outcome - para 0039; desired functions of client user's request - para 0040), wherein the problem statement is expressed in a natural language (text files 221c - para 0039; text files 221C - para 0040); and inferring (which models are optimal for a given ... function - para 0017; identify the optimal models - para 0032 - Note 1: selected optimal ML models to perform a desired function specified by a user request and optimizing the order of executing the pipeline models reads on inferring parameters of an optimal domain carried out by an optimization ML execution in accordance with a determined optimization setting - e.g. an order parameter - associated with implementing execution of models in the origination pipeline), based on the problem statement (see above), one or more parameters (determine an optimal order for executing the models of the pipeline - para 0032) of an optimization domain associated with the model architecture search performed by the origination machine learning pipeline (refer to claim 1).

As per claim 5, Geist discloses the computing system of claim 1, wherein executing the origination machine learning pipeline to perform the model architecture search comprises: detecting a semantic type for one or more features (labels indicating the expected results of applying the pipeline ... to the training data - para 0021; labels to determine and evaluate the accuracy of the pipeline builder - para 0022) of a plurality of features (training data includes ... sample user instructions or training labels - para 0021) included in the training dataset; and constraining the model architecture search (refer to claim 1) to candidate model architectures (certain ML models that will produce ... the most relevant results - para 0017; particular ML model ... relevant to user-selectable functions or types of training data, models as relevant to user-selected functions - para 0022; select and apply one or more ML models ... to perform user-selectable function - para 0023) capable of processing the semantic type detected for the one or more features (see labels from above) of the plurality of features included in the training dataset.

As per claim 6, Geist discloses the computing system of claim 1, wherein executing the origination machine learning pipeline to perform the model architecture search comprises: generating one or more statistics (predicted output ... using the training labels to determine ... accuracy of the pipeline builder ... generates the predicted outputs and ... may generate operational metrics - para 0022; proportion of correctly predicted classifications, proportion of true positive classification identifications ... correctly predicted over the total number of true positive ... and negative classifications - para 0022) descriptive of the training dataset; and predicting one or more system settings (determine whether the pipeline builder is properly trained - para 0022; determine and evaluate which ML models are most relevant to types of inputs or ... user-selectable functions, adjust various hyper-parameters or weight ... based upon a level of error between ... predicted and expected outputs - para 0023) of the model architecture search (refer to claim 1) based on the one or more statistics (see operational metrics from above; operational metrics - para 0023) descriptive of the training dataset.
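As a purely illustrative reading of the claim-6 limitation (hypothetical heuristics, not taken from Geist or the application), statistics descriptive of a dataset can be computed and mapped to settings of an architecture search:

```python
def dataset_statistics(rows):
    """Summary statistics descriptive of a training dataset,
    where each row is a dict mapping feature name -> value."""
    cols = sorted({k for r in rows for k in r})
    numeric = [c for c in cols
               if all(isinstance(r.get(c), (int, float)) for r in rows)]
    return {"n_rows": len(rows), "n_cols": len(cols),
            "n_numeric": len(numeric)}

def predict_search_settings(stats):
    """Map the statistics to architecture-search settings
    (assumed, illustrative heuristics)."""
    small = stats["n_rows"] < 1000
    return {
        "max_trials": 10 if small else 100,  # fewer trials on small data
        "candidate_families": ["linear", "tree"] if small else ["tree", "nn"],
    }

stats = dataset_statistics([{"x": 1.0, "y": "a"}, {"x": 2.0, "y": "b"}])
settings = predict_search_settings(stats)
```

The same statistics-to-settings mapping could equally be learned rather than hand-coded, which is the distinction claim 7 draws with its machine-learned settings prediction model.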
As per claim 7, Geist discloses the computing system of claim 6, wherein predicting one or more system settings of the model architecture search based on the one or more statistics (metrics - para 0022, 0023) descriptive of the training dataset comprises processing data descriptive of the one or more statistics with a machine-learned settings prediction model (refer to architecture predicting settings in claim 6) to directly predict the system settings as an output (e.g. predicted outputs, predicted relevance score or predicted functionality for a given user-selectable function - para 0022; predicted outputs, e.g. predicted relevance score - para 0023) of the machine-learned settings prediction model (see claim 6).

As per claim 10, Geist does not explicitly disclose the computing system of claim 1, wherein the operations further comprise identifying one or more correlations between feature crosses and labels of the training dataset. The checking of existing specifications or annotated datasets in Geist, however, includes cross-checking significance of indicators of relevance (feature indicators ... particular features and relative strength ... clear identification ... related to ... sentences, extracting labels - para 0017) and provision of labels included with a dataset geared for model training (training labels - para 0021; input training data and training labels - para 0022; referencing training labels - para 0023); thus the effect of parsing information destined to structure a best training set in Geist, including processing of input data via cross-checking those having significance indicators against those annotated or labeled with user-intent relevance or contextual weight, can be viewed as identifying, by the orchestrator of the ML training in Geist, one or more correlations between feature crosses and labels -- herein referred to as (**) -- of the training dataset.

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement information collecting and extracting of user/design relevance data by the ML training orchestrator in Geist so that the information extracting would include identifying one or more correlations between feature crosses and labels of the training dataset as set forth above in (**); because annotated information, via use of the orchestration engine conducting a cross-checked examination of dataset or input specifications or requirements obtained from client users, according to which text elements or parsed constructs bearing a user contextual weight or significance are marked or labeled, can be crossed as part of the process of extracting significant data elements into a selection set using labeled data tracking in Geist's orchestration layer; and this cross check would help the orchestration engine in Geist to filter out non-relevant elements from the training input and compact an original large dataset into a scaled-down but significant set of input much more reflective of user relevance or context, from which formatting or normalizing by the orchestrating stage would transform the training set into a most optimally structured and machine-compliant set of input for use by a prediction ML engine, thereby facilitating the inferring effect by the ML model to most likely return a more context-relevant set of output, which in turn would prompt rapid convergence between the training input and expected output.

As per claim 12, Geist discloses a computer-implemented method for automatic production of machine learning models and corresponding deployment pipelines, the method performed by one or more computing devices and comprising: importing a training dataset associated with a user; executing an origination machine learning pipeline, wherein executing the origination machine learning pipeline comprises: performing a model architecture search that selects and trains a machine learning model for the training dataset; and generating a deployment machine learning pipeline for deployment of the machine learning model; and exporting the machine learning model and the deployment machine learning pipeline for deployment of the machine learning model with the deployment machine learning pipeline (all of which having been addressed in claim 1).

As per claim 14, refer to the rejection of claim 3.

As per claim 16, Geist discloses the computer-implemented method of claim 12, wherein executing the origination machine learning pipeline to perform the model architecture search comprises: detecting a semantic type for one or more features of a plurality of features included in the training dataset; and constraining the model architecture search to candidate model architectures capable of processing the semantic type detected for the one or more features of the plurality of features included in the training dataset (refer to the rejection of claim 5).

As per claim 17, refer to the rejection of claim 6.
As per claim 18, refer to the rejection of claim 7.

As per claim 20, Geist discloses one or more non-transitory computer-readable media that collectively store instructions that, when executed by one or more processors of a computing system, cause the computing system to perform operations, the operations comprising: importing a training dataset associated with a user; executing an origination machine learning pipeline to: perform a model architecture search that selects and trains a machine learning model for the training dataset; and generate a deployment machine learning pipeline for deployment of the machine learning model; and exporting the machine learning model and the deployment machine learning pipeline for deployment of the machine learning model with the deployment machine learning pipeline (all of which having been addressed in claim 1).

As per claim 21, Geist discloses the computing system of claim 1, wherein the operations further comprise: receiving a request for an operational machine learning model trained (client request ... based upon the request and the client data - para 0006) using the training dataset associated with the user (based upon the request and the client data - para 0007), wherein the origination machine learning pipeline is executed (and execute the pipeline - para 0019) based on the request (para 0005-0006).

As per claim 22, Geist discloses the computing system of claim 21, wherein executing the origination machine learning pipeline further comprises: evaluating a performance quality (generate ... operational metrics (e.g. accuracy, precision, recall) of the pipeline ... whether the pipeline ... is properly trained - para 0022; trains one or more classifiers of the pipeline ... to satisfy one or more training thresholds and/or classification factors ... accuracy, precision, and recall ... proportion of correctly predicted classifications ... classifications that were correctly predicted ... proportion ... that were correctly predicted over the total number of ... classifications - para 0022) of the machine learning model after training the machine learning model (see claim 21) with the training dataset; and validating the machine learning model satisfies (to satisfy one or more training thresholds and/or classification factors ... accuracy, precision, and recall ... proportion of correctly predicted classifications - para 0022; the best relevance score for each user-selected function requested by the user - para 0024; server performs ... normalization ... on the client data ... indicating the data formatting expectations for the inputted data ... the outputted data may not satisfy the input data specification ... server may execute ... a data translation model ... that satisfies the data specification - para 0025) the request for the operational machine learning model based on the performance quality (see accuracy, precision, relevance score from above) of the machine learning model.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Geist et al., USPubN 2023/0094742 (herein Geist) in view of Dong et al., CN 112906907A (translation), 06-04-2021, 9 pgs (herein Dong), and Sarferaz, USPubN 2021/0241170 (herein Sarferaz), further in view of Salman et al., USPubN 2021/0406644 (herein Salman).

As per claim 2, Geist discloses the computing system of claim 1, wherein: the training dataset comprises a structured training dataset having data (feature indicators ... particular features and relative strength ... clear identification ... related to ... sentences, extracting labels - para 0017; training data includes ... training labels - para 0021; input training data and training labels - para 0022; referencing training labels - para 0023) associated with a number of labels. Geist does not explicitly disclose wherein the operations further comprise receiving a selection from the user of one of the labels as a predicted label to be predicted by the machine learning model.

Salman discloses labels provided with input data for an active learning framework operative in iteratively executing machine learning models, using an informative information layer for manual labeling associated with generating relevance identification upon results or metrics outputted from the machine learning (para 0012), where predicted labels are part of the labeled observations fed into the next computation model (para 0007-0009) to generate a further relevance score, i.e. selection of unlabeled observations taken as input to an annotation selection for use in the next iteration of the computation model (para 0006), the labeling tool allowing human annotators to select (or correct) labels among labeled data as part of adjusting certainty in generating the relevance score (para 0014).
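For illustration only, the claim-2 limitation amounts to letting the user pick one labeled column of a structured dataset as the prediction target; a minimal, hypothetical sketch (column names invented for the example):

```python
def select_target(dataset_columns, user_choice):
    """Record a user's selection of one label column as the value the
    model should predict; the remaining columns become features."""
    if user_choice not in dataset_columns:
        raise ValueError(f"unknown label column: {user_choice}")
    features = [c for c in dataset_columns if c != user_choice]
    return {"target": user_choice, "features": features}

# A user selects "churned" as the predicted label.
config = select_target(["age", "income", "churned"], "churned")
```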
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement analysis of inputted content or selective trimming of the training set in Geist so that labels provided to impart relative relevance or weight to certain portions of the training input/stream would be driven by manual selection by the user, the selection of labeled portions of the training set by the user - as in Salman - constituting the user effect of issuing or providing a predicted label to be tested or run by the machine learning model for which the labeled training set is being configured; because training input or data of contextual relevance or functional significance, under direct consideration, manual filtering/trimming, and selective adoption by a configuring user or developer as set forth above, would concentrate processing capability of the ML on only data of relevance as well as enable scaling down of the input set to the machine learning, in the sense that distribution of processing resources of the training engine would be geared toward a more focused/efficient inferring action, thereby resulting in overall improved efficiency of the ML, where fast input/output convergence by the model would be achieved in optimal time and allocation of resources, e.g. the latter being a direct consequence of configuring the training with user-based relevant data selection and size-readjust optimization made to the model input space.

As per claim 13, Geist discloses the computer-implemented method of claim 12, wherein: the training dataset comprises a structured training dataset having data associated with a number of labels; and the method further comprises receiving a selection from the user of one of the labels as a predicted label to be predicted by the machine learning model (all of which having been addressed in claim 2).

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Geist et al., USPubN 2023/0094742 (herein Geist) in view of Dong et al., CN 112906907A (translation), 06-04-2021, 9 pgs (herein Dong), and Sarferaz, USPubN 2021/0241170 (herein Sarferaz), further in view of Durvasula et al., USPN 11,416,754 (herein Durvasula).

As per claims 8-9, Geist discloses the computing system of claim 1, wherein executing the origination machine learning pipeline to perform the model architecture search comprises: generating one or more statistics (e.g. proportion of correctly predicted classifications, proportion of true positive classification identifications ... correctly predicted over the total number of true positive ... and negative classifications, operational metrics - para 0022) descriptive of the training dataset. Geist does not explicitly disclose (i) identifying, based on the one or more statistics descriptive of the training dataset, one or more previous searches performed on one or more previous datasets that have statistics similar to the statistics of the training dataset, and defining one or more system settings of the model architecture search based on previous system settings identified for the one or more previous searches; or (ii) storing metadata descriptive of the training dataset and performance of the model architecture search, and tuning one or more parameters of the origination machine learning pipeline based on the metadata descriptive of the training dataset and performance of the model architecture search.
As for (i), Geist discloses use of an orchestration database for cataloguing (para 0026) or storing published model specifications (para 0029) as well as formatting information thereof (para 0035), in the form of data records containing information on ML capabilities and feature indicators for each ML model being catalogued therewith, where the orchestration server will consult or reference the feature indicators (para 0020) in search - referred to herein as (*) - of ML models capable of desired functions indicated by the requesting user (para 0037, 0040), which include operation outcome or desired operations and type of client data (para 0039). Hence, identifying, from information descriptive of a training dataset or requirement, via one or more searches of catalogued datasets (in the orchestration DB), a particular dataset based on relevancy of the descriptive requirement in correlation to previous system settings recorded or catalogued from previous model implementations or model searches is recognized.

Durvasula discloses generation of one or more pipeline pattern engines using machine learning where metadata from historical or repository data on datasets or quality is consulted (Fig. 6E; col. 22 li. 28-41; col. 22 li. 60 to col. 23 li. 6) to remediate performance issues in training models of the data pipeline, including identifying from historical data profiles or past migration processes the data lineage or traceability properties needed to train the ML model, notably based on its similarity to other known data sets as part of techniques to solve different quality issues (e.g. col. 21, li. 34-65); e.g. organizing ML model data using machine learning pipelines previously used to train one or more ML models to generate data pipelines (col. 23 li. 8-18), for the purpose of improving machine learning training techniques (col. 18 li. 48-60).

As for (ii), recording or persisting information and metadata associated with performance issues of previously executed machine models in a historical store as in Durvasula (col. 21, li. 34-65), for the effect of learning from previously experimented or deployed models to configure remediation of a subsequent model instance, entails using past model execution and metadata thereof as part of implementing correction to a current ML configuration to improve quality of the ML processes, which in Durvasula represents an ML-based knowledge engine coupled with a remediation engine (col. 13 li. 35-44) for improving machine learning training techniques (col. 18 li. 48-60), by which multiple training iterations or retraining (col. 10 li. 4-19) are carried out, supervised or unsupervised, until a satisfactory model provides prediction accuracy as desired; hence storing metadata descriptive of the training dataset and performance of the model architecture, for use in tuning one or more parameters of the origination machine learning pipeline via iterative training or retraining as part of the remediation part of a knowledge engine, based on the stored metadata descriptive of the training dataset, is recognized.
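The warm-start idea in limitation (i) can be sketched concretely (an illustrative toy, not from the cited references; the history records and statistic keys are hypothetical): store the statistics of past searches alongside the settings that were used, and reuse the settings of the most similar past dataset.

```python
import math

def nearest_previous_search(current_stats, history):
    """Find the prior search whose dataset statistics are most similar
    (Euclidean distance over the shared statistic keys) and reuse its
    recorded search settings."""
    def dist(a, b):
        keys = set(a) & set(b)
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))
    best = min(history, key=lambda h: dist(current_stats, h["stats"]))
    return best["settings"]

# Hypothetical catalogue of previous searches and their settings.
history = [
    {"stats": {"n_rows": 100, "n_cols": 5},
     "settings": {"max_trials": 10}},
    {"stats": {"n_rows": 100000, "n_cols": 50},
     "settings": {"max_trials": 200}},
]
reused = nearest_previous_search({"n_rows": 120, "n_cols": 6}, history)
```

In a fuller system the statistics would be normalized before comparison, since raw row counts otherwise dominate the distance.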
Therefore, based on the consulting of pre-established indicators or specifications catalogued with known good models in Geist's implementation of search (per (*) from above), it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement past recordation of ML training in the form of past or previously configured models so that, for a current configuration of the ML pipeline in response to a statistical response of an ML model, the system would be programmed to 1) perform identifying, based on statistics descriptive of the training dataset, one or more previous searches on one or more previous datasets - as per the historical record as in Durvasula - that have statistics similar to the statistics of the training dataset, and defining one or more system settings - e.g. an issue- or performance-remediating setting as in Durvasula - (of the model architecture search) based on previous system settings identified for the one or more previous searches, using historical data as in Durvasula; and 2) store metadata descriptive of the training dataset and performance of the model architecture and search for the best model - as per the repository of past ML experiments in Durvasula - in relation to the intent of tuning one or more parameters of the origination machine learning pipeline, via iterative model training as in Durvasula as part of the remediation part of a knowledge engine, based on metadata descriptive of the training dataset and performance of the model architecture which seeks out the best models - as in Durvasula's training and retraining to correct performance issues from above; because use of historical data or learning, as knowledge and metadata from past ML implementations in terms of previous datasets having statistics or functional metrics similar to those of an intended training dataset or ML model, as reference for defining one or more system settings for instance(s) of the model architecture geared for searching a best model as in Geist, based on previous system settings identified for the one or more previous searches, would enable a development effort to reduce resource cost via reuse of pre-existing knowledge from historical records in the endeavor to seek out and/or construct a more optimal model or set of models that would efficiently carry out an intended pipeline; said model search being conducted or fine-tuned by correlating respective configuration values and performance metrics between current or intended training instances and those from past experiments or deployed models - stored as historical data as in Durvasula - to identify or extract the best knowledge and settings in order to select a known good model configuration that bears similarity to the intended training, and reconfigure a target ML training with corresponding parametric modifications or via iterative test runs and/or retraining aimed at mitigating operational issues of the model(s) in the course of the ML training, possibly reaching a best representation of a pipeline deployment using the searched models as endeavored in Geist.

Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Geist et al., USPubN 2023/0094742 (herein Geist) in view of Dong et al., CN 112906907A (translation), 06-04-2021, 9 pgs (herein Dong), and Sarferaz, USPubN 2021/0241170 (herein Sarferaz), further in view of Durvasula et al., USPN 11,416,754 (herein Durvasula) and Saha et al., USPubN 2023/0080439 (herein Saha).

As per claim 11, Geist does not explicitly disclose the computing system of claim 1, wherein the deployment machine learning pipeline comprises both: a fixed feature training component configured to retrain the machine learning model with a fixed list of feature columns; and a re-tuning component configured to perform a second model architecture search to identify a new machine learning model for the training dataset.
Saha discloses techniques for automatically generating an ML pipeline following an exploratory approach that iteratively searches (para 0003, 0023) to find the most optimal pipeline from among multiple candidate ML pipelines received for an ML project, including presentation of meta-features extracted from the dataset of the corresponding ML project, the latter (para 0117) expressed as predefined values in rows/columns tagged with an indicator or flag (para 0125; dataset 1108 - Fig. 11). Applying meta-learning model training (Fig. 10) to the dataset generates a subset of components for the ML pipeline, denoted with a subset of meta-features mutated from the original dataset and representing an improved quality thereof, to be subjected to the ML pipeline (para 0128), whose implementation includes iterative execution of a selected set of operations (Fig. 9) of models each instantiated from parameterized templates (Figs. 4-5, 6A). Hence, provision of a fixed feature training component, used for filtering and converting meta-features into a more compact subset shown in tabular format and bearing a better relationship to the context of iterative construction and code runs of an ML pipeline, is recognized.
Use of historical metadata representing previous ML models to assist in determining which dataset to apply to correct performance in training instances, whereby the ML models forming an intended pipeline are retrained, is shown in Durvasula on the basis of ML models tracking similarity between past models and the current training context (see the rationale for claims 8-9).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement identification of a dataset of relevance for use by an ML orchestration layer in implementing the most optimal pipeline in the Geist system, so that the deployment machine learning pipeline produced by the orchestration layer would include: 1) a fixed feature training component configured to retrain the machine learning model with a fixed list of feature columns - per the row/column meta-features in Saha; and 2) a re-tuning component configured to perform a second model architecture search to identify a new machine learning model - as in Durvasula - for the pipeline training dataset. The rationale is that use of a tabular list of feature and metadata items, representing candidate dataset inputs or configuration values in a row/column representation with which to configure, reconfigure, or mutate in the course of constructing or refining training parameters and settings, or of seeking and identifying similar or newer models as set forth above, would enable a large training set to be filtered and adjusted at the tabular level on the basis of relevance-criteria analytics, such that, once edited or mutated, the retained portion of the original dataset can express a more optimal and relevant set of training data that further enhances fine-tuning or retraining of the ML model. Upon being properly formatted by the orchestration layer, the filtered training set would be deemed (i) conformant to the runtime environment in which the ML engine or ML pipeline is to be deployed, and (ii) facilitating realization of the pipeline deployment in accordance with the user's request/design.

As per claim 19, Geist discloses the computer-implemented method of claim 12, wherein the deployment machine learning pipeline comprises both: a fixed feature training component configured to retrain the machine learning model with a fixed list of feature columns; and a re-tuning component configured to perform a second model architecture search to identify a new machine learning model for the training dataset. (All of which has been addressed in claim 11.)

Response to Arguments

Applicant's arguments filed 01/13/26 have been fully considered but are not persuasive. Following are the Examiner's observations in regard thereto.

Applicant has submitted that the relied-upon phrases from Geist's non-provisional reference (USPubN 2023/0094742) do not constitute a self-evident link with Geist's provisional reference, as they are completely absent from the Geist provisional reference (App. No. 63/247,226), where the word "pretrained" in the provisional at paras 0001, 0002, 0004, 0011, and 0018 amounts to a generic recitation of a "Field" and therefore does not describe the subject matter relied upon in the non-provisional reference by the Office Action, making the Geist reference not prior art.

As to the legal standard for written description (35 U.S.C. 112(a)): Applicant's argument relies on the absence of specific "phrases" or "evidence" within the text of the provisional. However, the Federal Circuit has consistently held (emphasis here) that the written description requirement of 35 U.S.C. 112(a) (and, by extension, priority under 119(e)) does not require ipsis verbis (identical words) support for the later-claimed or cited features. "The test for sufficiency of support... is whether the disclosure of the application relied upon reasonably conveys to the artisan that the inventor had possession at that time of the later claimed subject matter." (In re Kaslow, 707 F.2d 1366.)
The "silence" or "entirely absent" characterization alleged by Applicant fails to acknowledge the technical concepts inherently and explicitly disclosed to a person having ordinary skill in the art (PHOSITA).

Analysis of the Geist Provisional Teaching (Provisional No. 63/247,226)

While Applicant focuses on the lack of specific nomenclature, the Examiner notes that the functional and structural descriptions in the Geist provisional (herein '226) demonstrate clear possession of the features cited in the non-provisional (NP) reference by Geist. For instance, paras [0001]-[0004] of '226: Paragraphs 0001-0002 establish the field of the invention and the specific problem being solved (e.g., sparing users from having to access ML models from various providers). By defining the "Technical Field" and "Background," the provisional sets a structural context that necessitates the features cited in the NP reference. With para 0003 raising the problem that input data into the ML is not always standardized, and para 0004 summarizing the effect of deploying pre-trained models that suit client business or data requirements, via accessing a catalogued model and constructing an execution pipeline according to user specification data using the formatting and normalizing support of an orchestration engine, it is found that all of this is evidenced in the NP document by Geist.

Paragraph [0011]: Applicant contends that the features cited in the NP document find no support in this paragraph. However, the Examiner points out that this section describes pre-trained models that suit client business or data requirements, accessed from a catalogue, with an execution pipeline constructed according to user specification data borrowing the formatting and normalizing capability of an orchestration engine, into which the user data is provided as input and wherein output from a preceding stage of an executed ML pipeline is further processed.
A person having ordinary skill in the art (PHOSITA) would recognize this description as the foundational logic for the ML execution pipeline and the reprocessing of its outcome to satisfy a user's requirements, as cited in the NP reference.

Paragraph [0018] and associated figures: This paragraph provides the implementation details of the orchestration support for standardizing or formatting ML data. Even if the NP reference uses more "refined" terminology, the physical components and action flow described in paragraph [0018] of the Geist provisional provide the requisite "possession" (by Geist) that enables the '226 reference to serve as compliant prior art antedating the Applicant's application.

Applicant's rebuttal appears to be based on a literal comparison of vocabulary rather than a technical analysis of the disclosure. Because the Geist provisional provides a sufficient roadmap for a PHOSITA to arrive at the features in the NP reference without undue experimentation, the NP reference is entitled to the earlier priority date. Accordingly, the NP reference by Geist (USPubN 2023/0094742) remains prior art for the purposes of this rejection, and the rejection of claims 1-3, 5-14, and 16-22 is maintained.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan A Vu, whose telephone number is (571) 272-3735. The examiner can normally be reached 8AM-4:30PM, Mon-Fri. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721. The fax number for the organization where this application or proceeding is assigned is (571) 273-3735 (for non-official correspondence; please consult the Examiner before using) or (571) 273-8300 (for official correspondence); inquiries may also be redirected to customer service at (571) 272-3609. Any inquiry of a general nature or relating to the status of this application should be directed to the TC 2100 Group receptionist: (571) 272-2100.

/Tuan A Vu/
Primary Examiner, Art Unit 2193
March 11, 2026

Prosecution Timeline

Dec 22, 2022
Application Filed
Oct 11, 2025
Non-Final Rejection — §103
Dec 29, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Examiner Interview Summary
Jan 13, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596557
SYSTEM AND METHOD FOR GENERATING RECOMMENDATIONS FOR DATA TAGS
2y 5m to grant Granted Apr 07, 2026
Patent 12591718
Application Development Platform, Micro-program Generation Method, and Device and Storage Medium
2y 5m to grant Granted Mar 31, 2026
Patent 12585573
ASSEMBLING LOW-CODE APPLICATIONS WITH OBSERVABILITY POLICY INJECTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12582796
METHODS, DEVICES, AND SYSTEMS FOR IMPROVED OXYGENATION PATIENT MONITORING, MIXING, AND DELIVERY
2y 5m to grant Granted Mar 24, 2026
Patent 12541384
COMPONENT TESTING FRAMEWORK
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
95%
With Interview (+21.4%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 980 resolved cases by this examiner. Grant probability derived from career allow rate.
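As a side note on the arithmetic behind these projections: the displayed figures are consistent with treating the interview lift as an additive percentage-point adjustment (our assumption, not documented by the page). A 73% career allow rate plus a 21.4-point lift gives about 94.4%, close to the displayed 95%; the small gap suggests the displayed inputs are themselves rounded. A minimal sketch with a hypothetical helper:

```python
def grant_probability_with_interview(base_pct: float, lift_pts: float) -> float:
    """Hypothetical additive model of interview lift, capped at 100%."""
    return min(base_pct + lift_pts, 100.0)

# 73% base allow rate + 21.4-point interview lift -> ~94.4%
adjusted = grant_probability_with_interview(73.0, 21.4)
```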
