Prosecution Insights
Last updated: April 19, 2026
Application No. 17/106,029

CAPTURING FEATURE ATTRIBUTION IN MACHINE LEARNING PIPELINES

Status: Non-Final OA (§103)
Filed: Nov 27, 2020
Examiner: TAN, DAVID H
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 5 (Non-Final)

Grant Probability: 31% (At Risk)
OA Rounds: 5-6
To Grant: 4y 1m
With Interview: 46%

Examiner Intelligence

Grants only 31% of cases.

Career Allow Rate: 31% (30 granted / 98 resolved; -24.4% vs TC avg)
Interview Lift: +15.8% across resolved cases with interview (strong)
Avg Prosecution: 4y 1m (typical timeline)
Currently Pending: 41
Total Applications: 139 (career history, across all art units)
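The headline figures above reduce to simple arithmetic on the career counts. A minimal sketch of that arithmetic; the ~55% Tech Center average is an assumption implied by the stated -24.4 point delta, and the dashboard's exact rounding rules are not given:

```python
# Career allow rate: granted / resolved, as reported in the panel above.
granted, resolved = 30, 98
allow_rate = 100 * granted / resolved      # ~30.6%, displayed as 31%

# The "-24.4% vs TC avg" delta implies a Tech Center average near 55%
# (assumption: the delta is a simple percentage-point difference).
tc_average = 55.0
delta = allow_rate - tc_average

print(f"allow rate: {allow_rate:.1f}% ({delta:+.1f} vs TC avg)")
# allow rate: 30.6% (-24.4 vs TC avg)
```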

Statute-Specific Performance

§101:  8.5% (-31.5% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112:  6.7% (-33.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 98 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Non-Final Rejection is filed in response to the Request for Continued Examination (RCE) filed 12/01/2025. Claims 1, 5, and 14 are amended. In light of the amendments, the 35 U.S.C. § 101 rejection is respectfully withdrawn. Claims 1-20 remain pending.

Response to Arguments

Argument 1: Applicant argues in Applicant Arguments/Remarks Made in an Amendment filed 12/01/2025, pg. 12-14, the primary claim limitation, "receiving a request to start a monitoring job for the machine learning model after deployment of the machine learning model, wherein the monitoring job specifies a configuration for performing monitoring of feature attribution drift of the machine learning model according to the feature attribution requested in the training job; identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution drift of the machine learning model for the requested monitoring job".

Response to Argument 1: The examiner respectfully disagrees and notes that Cataltepe teaches an online machine learning system that selects and evaluates features for a model and includes a visual display explanation model of the relevance and drift of the selected features for a model task. The BRI for the primary claim limitation, "identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution drift of the machine learning model for the requested monitoring job", encompasses how the Online Robust Feature Selection Engine (ORFSE) module is able to evaluate features selected for a particular machine learning task.
Wherein the BRI for a report encompasses evaluation of the selected feature data for a selected ML algorithm or model, which is evaluated and displayed for relevancy and drift. It is noted that the claims do not require a human user to select features for a training job, but rather that a training job specifies a reference data set for training and further evaluates the drift in the selected features from the referenced data. Cataltepe teaches that feature data is selected and evaluated for a machine learning model while also monitoring the level of drift. The following paragraphs of Cataltepe support this interpretation:

[0085] At step 362, an OPrE receives streaming data including an instance including a vector of inputs including multiple continuous or categorical features. The OPrE is able to, and may, discretize features, impute missing feature values, normalize features, and detect drift or change in features.

[0086] At step 363, an OFEE produces features.

[0087] At step 364, an ORFSE evaluates and selects features.

[0099] The OMLS also contains an Online Robust Feature Selection Engine (ORFSE) module where all the features are continuously and robustly evaluated in terms of how relevant they are for the particular machine learning task.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-7, 10, 12-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220067460 ("Raj") in light of U.S. Patent Application Publication No. 20190113973 ("Coleman"), and in further light of U.S. Patent Application Publication No. 20190279102 ("Cataltepe").

Claim 5: Raj teaches a method, comprising:

receiving, by a machine learning system, a training job that includes a request to determine feature attribution from a specified reference data set out of a training data set (i.e. para. [0045], Fig. 3, "In 310, a past data snapshot is retrieved (e.g., by variance server 120)", wherein it is noted that a data snapshot from the past may be specified out of all potential data snapshots for training, wherein the data snapshot that is specifically from the past is used to tune, train, and evaluate a model) as (i.e. para. [0039-0040], Fig. 2-3, "Data processor 210 is a component for performing the variance characterization. Data processor 210 receives requests for data analyses or predictions (e.g., from user devices 110) and perform the variance characterization analysis.
Data processor 210 also may be configured to calculate the feature attribution values associated with features that are detected and provided by model trainer 220", wherein a variance characterization server receives a request for a data analysis from a user and subsequently generates and trains a machine learning model as part of the job to determine the variance of a data set, consisting of current and past data snapshots, by calculating feature attributions with respect to a submitted data snapshot which was used to generate and train the model);

executing, by the machine learning system, the training job to train the machine learning model (i.e. para. [0039, 0046], "the analysis may include generating and modify machine learning models, identifying similarities and differences between data sets, and predicting and analyzing the differences between the data sets using the generated and modified machine learning models... a machine learning (or prediction) model is fitted onto the retrieved data snapshot", wherein the user request for a data analysis of a data snapshot may be analyzed via a generated machine learning model), wherein the executing comprises:

identifying a reference data set for determining the feature attribution of the machine learning model according to the request (i.e. para. [0040], "Examples of feature attribution values include Shapley values and are used to reflect a numerical measure of a feature's impact on the overall differences between compared data sets. For example, data processor 210 may be requested to analyze data sets from different years that have a variance between a feature", wherein an earlier data set may be used as a reference to determine a feature's impact);

determining the feature attribution of the trained machine learning model (i.e. para. [0050], each model behavior (e.g., credit score, age segment) is assigned a feature attribution value that reflects the influence of that feature on the behavior (e.g., delinquency) in the past data snapshot);

storing a report (i.e. para. [0032], "Results may include a waterfall chart that depicts the cumulative effect of different values as they are added or subtracted. In some embodiments, a waterfall chart is produced by variance characterization server 120 based on the feature attributions that are assigned to different parameters associated with the analyzed data set", wherein the BRI for a report encompasses a waterfall chart that displays the feature attribution impact on the model) that includes the feature attribution of the machine learning model (i.e. para. [0052], "This aggregation allows feature attributions assigned within the data snapshot to be grouped based on the specific feature.. the feature attributions for these features may be aggregated across the separate members (e.g., loan accounts) to provide a single feature attribution value for each the feature", wherein the feature attributions are stored as they are aggregated into a single attribution value for a specific feature of the model);

identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution (i.e. para. [0064], "In some embodiments, the feature attributions for each member (e.g., account) are aggregated into a waterfall chart that indicates how each feature associated with that account is moving the prediction for the machine learning model", wherein feature attributions for the data set are identified and stored in the waterfall chart as part of the model training and analysis); and

While Raj teaches determining feature attribution and a trained machine learning model, Raj may not explicitly teach: feature attribution as part of a machine learning pipeline; receive a request to start a monitoring job for the machine learning model after deployment of the machine learning model, wherein the monitoring job specifies a configuration for performing monitoring of feature attribution drift of the machine learning model according to the feature attribution requested in the training job; identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution drift of the machine learning model for the requested monitoring job; monitor feature attribution drift of the machine learning model deployed and generating inferences on live data according to the configuration for performing feature attribution monitoring specified in the monitoring job, wherein the stored feature attribution is accessed as part of monitoring feature attribution drift.

However, Coleman teaches to determine feature attribution as part of a machine learning pipeline that trains a machine learning model (i.e. para. [0150, 0206, 0218], "Searching for meaningful patterns may be used to predict whether a feature event belongs to one category. Once a meaningful pattern has been discovered, the system or another platform may modify or create a pipeline to predict one of the feature event values from the other two", wherein it is noted that the BRI for feature attribution as part of a machine learning pipeline encompasses how a workflow in the cloud can apply machine learning to learn new pipelines in order to find a feature event value which describes a predictive pattern for a certain event); and store the feature attribution that includes the feature attribution of the machine learning model (i.e. para. [0048], "FIG. 1, in an embodiment of the system there is shown a trusted network of local client devices 113 and a software as a service (SAAS) platform 200 operating at one or more computer servers", wherein it is noted in para. [0219] that the system platform may maintain a database store of all of the combinations of feature events).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add feature attribution as part of a machine learning pipeline to the feature attribution and training models of Raj, with how finding a predictive score for an event as part of a machine learning model is part of a machine learning pipeline, as taught by Coleman. One would have been motivated to combine Coleman with Raj and would have had a reasonable expectation of success, as the addition of a machine learning pipeline to the machine learning systems minimizes the efforts, number of steps, and time required to train different applications to function based on relevant features or attributes (Coleman, para. [0166]).
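As characterized above, Raj aggregates per-member feature attributions into a single value per feature for a waterfall-style report (paras. [0052], [0064]). A minimal sketch of that aggregation step; the feature names and plain-dict report format are illustrative assumptions, not taken from the reference:

```python
from collections import defaultdict

def aggregate_attributions(member_attributions):
    """Collapse per-member (e.g., per-account) feature attributions into a
    single value per feature, as in a waterfall-style report."""
    totals = defaultdict(float)
    for member in member_attributions:
        for feature, value in member.items():
            totals[feature] += value
    return dict(totals)

# Two members (accounts), each with Shapley-style attribution values.
members = [
    {"credit_score": 0.30, "age_segment": -0.10},
    {"credit_score": 0.20, "age_segment": 0.05},
]
report = aggregate_attributions(members)
print(report)  # {'credit_score': 0.5, 'age_segment': -0.05}
```

The stored `report` dict stands in for the claimed "report, generated and stored as part of executing the training job"; a real system would persist it alongside the model artifacts.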
While Raj-Coleman teach a machine learning model for feature attribution that stores data that includes the feature attribution and stores a report for data about the selected features for training, Raj-Coleman may not explicitly teach: receive a request to start a monitoring job for the machine learning model after deployment of the machine learning model, wherein the monitoring job specifies a configuration for performing monitoring of feature attribution drift of the machine learning model according to the feature attribution requested in the training job; identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution drift of the machine learning model for the requested monitoring job; and monitor feature attribution drift of the machine learning model deployed and generating inferences on live data according to the configuration for performing feature attribution monitoring specified in the monitoring job, wherein the stored feature attribution is accessed as part of monitoring feature attribution drift.

However, Cataltepe teaches to receive a request to start a monitoring job for the machine learning model after deployment of the machine learning model (i.e. para. [0214], "When the individual models in the OMLS request feedback, the requests can be ordered based on the variance of all the model outputs for the feedback instance or the accuracy of the model that requests the instance", wherein it is noted that a current model may be deployed and in use when a user may request to monitor and display feedback insights of the features in the current model), wherein the monitoring job specifies a configuration for performing monitoring of feature attribution drift of the machine learning model (i.e. para. [0058], "the OPrE may receive streaming data including an instance including a vector of inputs including multiple continuous or categorical features, and is able to discretize features, impute missing feature values, normalize features, and detect drift or change in features. The OFEE may produce features. The ORFSE may evaluate and select features", wherein a monitoring job request may be for select features in which the OMLS monitors a deployed model for feature drift. It is further noted in para. [0125] that classes are learned by the OMLS in proportion to their importances, and that a class weight may be assigned to each class) according to the feature attribution requested in the training job (i.e. para. [0176-0177], "The current invention includes an Online (model) Explanation System (OES) that learns continuously while the online machine learning system (OMLS) keeps being updated based on the data stream and also according to the initial and changing preferences of the human domain experts… Let x denote the original features which the domain expert knows about and z denote the combination of the original and engineered features. While OMLS keeps learning continuously, its output g.sub.t(z) is taught to an ensemble of simpler explanation machine learning models, such as online decision trees or linear models. The explanation machine learning models h.sub.t(x) are trained using x, so that they are understandable by the human experts. In addition to x, features from z that are human understandable can also be included for training the explanation model", wherein the BRI for the feature attribution requested in the training job encompasses an initial set of features that the OMLS system is trained on that may be evaluated after deployment for feature attribution drift);

identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution drift of the machine learning model for the requested monitoring job (i.e. para. [0087-0088, 0099], "At step 364, an ORFSE evaluates and selects features… At step 365, the OMLE incorporates and utilizes one or more machine learning algorithms… The OMLS also contains an Online Robust Feature Selection Engine (ORFSE) module where all the features are continuously and robustly evaluated in terms of how relevant they are for the particular machine learning task", wherein the BRI for the report encompasses an understandable explanation that is an evaluation of the importance of the original features requested in an initial training job, which is then used as part of a monitoring job to find feature attribution drift of these original features); and

monitor feature attribution drift of the machine learning model deployed and generating inferences on live data (i.e. para. [0084-0085], Fig. 3E, "Block 361 represents continuous operation of the OMLE, during which, and as part of which, operation the other steps take place. At step 362, an OPrE receives streaming data including an instance including a vector of inputs including multiple continuous or categorical features. The OPrE is able to, and may, discretize features, impute missing feature values, normalize features, and detect drift or change in features", wherein a deployed model is a current model being continuously monitored by the OMLS and the BRI for inferences encompasses the reports of detected drift or change in features) according to the configuration for performing feature attribution monitoring specified in the monitoring job, wherein the stored feature attribution is accessed as part of monitoring feature attribution drift (i.e. para. [0099], Fig. 16-20, "The OMLS also contains an Online Robust Feature Selection Engine (ORFSE) module where all the features are continuously and robustly evaluated in terms of how relevant they are for the particular machine learning task", wherein the model configuration may be specified to current features such that the current features are being continuously evaluated as part of the monitoring, wherein the evaluation may be a view as seen in Fig. 18 which displays a model's current features being evaluated).
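The claimed monitoring step compares a stored training-time attribution report against attributions recomputed on live traffic. A minimal sketch of one way to do that; the normalization, the distance metric (largest absolute shift in attribution share), and the 0.25 alert threshold are all illustrative assumptions, not taken from the claims or the cited references:

```python
def attribution_drift(baseline, live):
    """Largest absolute shift in normalized feature-attribution share between
    the stored training-time report and attributions from live traffic."""
    def normalize(attrs):
        total = sum(abs(v) for v in attrs.values()) or 1.0
        return {k: abs(v) / total for k, v in attrs.items()}
    b, l = normalize(baseline), normalize(live)
    return max(abs(b.get(k, 0.0) - l.get(k, 0.0)) for k in set(b) | set(l))

# Stored report from the training job vs. attributions on live inferences.
baseline = {"credit_score": 0.50, "age_segment": 0.30, "region": 0.20}
live     = {"credit_score": 0.20, "age_segment": 0.45, "region": 0.35}

drift = attribution_drift(baseline, live)
print(f"drift={drift:.2f}, alert={drift > 0.25}")  # drift=0.30, alert=True
```

Here the monitoring job's "configuration" would supply the threshold and feature list; the baseline dict plays the role of the report identified from the training job.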
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add receiving a request to start a monitoring job for the machine learning model after deployment of the machine learning model, wherein the monitoring job specifies a configuration for performing monitoring of feature attribution drift of the machine learning model according to the feature attribution requested in the training job; identifying the report, generated and stored as part of executing the training job according to the feature attribution requested in the training job, to compute feature attribution; and monitoring feature attribution drift of the machine learning model deployed and generating inferences on live data according to the configuration for performing feature attribution monitoring specified in the monitoring job, wherein the report is accessed as part of monitoring feature attribution drift, to the feature attribution and training models of Raj-Coleman, with how a report with inferences on currently ingested data for drift in feature attribution is stored and displayed to users, as taught by Cataltepe. One would have been motivated to combine Cataltepe with Raj-Coleman and would have had a reasonable expectation of success, as doing so saves a user time by providing a remedy to such a detected performance decline and a quick analysis of the findings associated with the structural and/or generative shifts, which may help with governance and/or backward analysis.

Claim 6: Raj, Coleman, and Cataltepe teach the method of claim 5. Raj further teaches wherein the feature attribution is determined according to a specified feature attribution technique out of a plurality of feature attribution techniques supported by the machine learning system (i.e. para. [0042], "model trainer 220 generates a prediction model such as a gradient boosting machine (GBM) model as a prediction model, or other random forest techniques. In an alternative embodiment, model trainer 220 may employ a different model such as a neural network. In an example, the prediction model is a tree-based model that is utilized for detecting interactions between variables in a data set and making predictions of how those variables would impact the generated model", wherein different techniques, such as GBM or random forest models, determine feature attribution via a predictive model and would be supported by the model trainer of the variance characterization server).

Claim 7: Raj, Coleman, and Cataltepe teach the method of claim 5. Raj further teaches wherein the reference data set is identified according to one or more data values specified for the reference data set in the training job (i.e. para. [0041], Model trainer 220 may utilize the feature attribution values to generate machine learning models that are specific to predictions and analyses based on particular data sets and particular variables).

Claim 10: Raj, Coleman, and Cataltepe teach the method of claim 5. Raj further teaches wherein the stored feature attribution is associated with a trial report for the machine learning pipeline (i.e. para. [0064], "the feature attributions for each member (e.g., account) are aggregated into a waterfall chart that indicates how each feature associated with that account is moving the prediction for the machine learning model", wherein the BRI for a trial report encompasses how the results of the feature attribution trials may be displayed in a waterfall chart for a user to review the performance of a feature over a machine learning model).

Claim 12: Raj, Coleman, and Cataltepe teach the method of claim 5. Cataltepe further teaches wherein the training job and monitoring job are specified according to a fairness and explainability processing container offered by a machine learning service of a provider network (i.e. para. [0011], "sharing may involve providing direct access to the individual data or results or the aggregated data or results or it may mean providing access to the data, results, or both through an application programming interface (API), or other known means", wherein after a machine learning process reviews discovered patterns, the workflows in the cloud can apply machine learning methods that can learn new pipelines, change which pipelines to apply to specific contexts, and learn new patterns that can help a User achieve their goals). Cataltepe further teaches wherein the training job and monitoring job are specified according to one or more Application Programming Interfaces (APIs) of a fairness and explainability processing container offered by a machine learning service of a provider network (i.e. para. [0225-0226], Fig. 16-17, "The explanation models that are trained continuously are first copied and taken to a staging area (FIG. 16, FIG. 17). At the staging area, the user is able to examine (for an example, see FIG. 18) the details of the model by means of different filtering mechanisms, such as visualizing only the nodes that have a certain training/test accuracy or confidence", wherein the BRI for the training job encompasses the selection of a certain batch of streaming data and the BRI for the monitoring job being specified encompasses the specification of the current model for monitoring and display of feature information that may include detected drift or change in features).

Claim 13: Raj, Coleman, and Cataltepe teach the method of claim 5. Coleman further teaches wherein the machine learning system is implemented on one or more training nodes of a machine learning service offered by a provider network and wherein the feature attribution is stored as part of a report in a data storage service offered by the provider network (i.e. para. [0048], "FIG. 1, in an embodiment of the system there is shown a trusted network of local client devices 113 and a software as a service (SAAS) platform 200 operating at one or more computer servers", wherein it is noted in para. [0219] that the system platform may maintain a database store of all of the combinations of feature events).

Claim 1: Claim 1 is the system claim reciting similar limitations to Claim 5 and is rejected for similar reasons.
Claim 2: Claim 2 is the system claim reciting similar limitations to Claim 10 and is rejected for similar reasons.
Claim 4: Claim 4 is the system claim reciting similar limitations to Claim 12 and is rejected for similar reasons.
Claim 14: Claim 14 is the machine claim reciting similar limitations to Claim 5 and is rejected for similar reasons.
Claim 15: Claim 15 is the machine claim reciting similar limitations to Claim 6 and is rejected for similar reasons.
Claim 16: Claim 16 is the machine claim reciting similar limitations to Claim 7 and is rejected for similar reasons.
Claim 18: Claim 18 is the machine claim reciting similar limitations to Claim 10 and is rejected for similar reasons.
Claim 20: Claim 20 is the machine claim reciting similar limitations to Claim 12 and is rejected for similar reasons.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220067460 ("Raj"), in light of U.S. Patent Application Publication No. 20190113973 ("Coleman"), and in further light of U.S. Patent Application Publication No. 20190279102 ("Cataltepe"), as applied to Claim 5 above, and further in light of U.S. Patent No. 9,779,362 ("Gold").

Claim 9: Raj, Coleman, and Cataltepe teach the method of claim 5.
Raj-Coleman-Cataltepe may not explicitly teach further comprising: receiving, by the machine learning system, a request for a particular feature attribution for a specific inference generated by the trained machine learning model; determining, by the machine learning system, the particular feature attribution for the specific inference according to the identified reference data set; and sending, by the machine learning system, the particular feature attribution for the specific inference in response to the request.

However, Gold teaches receiving, by the machine learning system (i.e. Col. 3, lines 18-21, features are then analyzed using a machine learning technique to determine weighted values representative of their respective contribution to causation), a request for a particular feature attribution for a specific inference generated by the trained machine learning model (i.e. Col. 11, lines 43-53, "An inference can be employed to identify a specific context or action. Such an inference can result in the construction of new events or actions from a set of observed events and/or stored event data", wherein a user may request a specific context or action in which an inference page explaining the contribution of the specific context is generated after being analyzed by a machine learning system); determining, by the machine learning system, the particular feature attribution for the specific inference according to the identified reference data set (i.e. Col. 11, lines 40-45, "inference component 208 can examine the entirety or a subset of the data to which it is granted access and can provide for reasoning about or infer states of the system, environment, etc. from a set of observations", wherein an inference for a specific context would be identified by the machine learning system according to the respective data); and sending, by the machine learning system, the particular feature attribution for the specific inference in response to the request (i.e. Col. 11, lines 24-28, "inference component 208 can facilitate analysis component 108 by inferring weights to associate with quality features based on an inferred level of contribution. Inference component 208 can further facilitate recommendation component 206 with inferring changes to quality feature to recommend.", wherein an analysis of feature contribution is displayed to a user).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add receiving, by the machine learning system, a request for a particular feature attribution for a specific inference generated by the trained machine learning model; determining, by the machine learning system, the particular feature attribution for the specific inference according to the identified reference data set; and sending, by the machine learning system, the particular feature attribution for the specific inference in response to the request, to the feature attribution and training models of Raj-Coleman-Cataltepe, wherein an inference for a specific feature may be determined, calculated, and displayed to a user, as taught by Gold. One would have been motivated to combine Gold with Raj-Coleman-Cataltepe and would have had a reasonable expectation of success in order to analyze features and select ones that could predictively improve a system if the changes were implemented (Gold, Col. 3, lines 24-27).

Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220067460 ("Raj"), in light of U.S. Patent Application Publication No. 20190113973 ("Coleman"), and in further light of U.S. Patent Application Publication No. 20190279102 ("Cataltepe"), as applied to Claim 5 above, and further in light of U.S. Patent Application Publication No. 20210241115 ("Ibrahim").

Claim 11: Raj, Coleman, and Cataltepe teach the method of claim 5.
Raj may not explicitly teach wherein the training job further specifies determining bias metrics at one or more stages of the machine learning pipeline and wherein the executing further comprises: determining the one or more bias metrics at the one or more stages of the machine learning model; and storing the one or more bias metrics for the machine learning model. However, Ibrahim teaches wherein the training job further specifies determining bias metrics at one or more stages of the machine learning pipeline and wherein the executing further comprises: determining the one or more bias metrics at the one or more stages of the machine learning model (i.e. para. [0040], GAM insight logic 112 may determine global explanations to individual samples, supplement other techniques, determine bias in the neural network); and storing the one or more bias metrics for the machine learning model (i.e. para. [0041], "The system 150 includes the GAM system 101, a storage system 103 including one or more data sets 114, and one or more neural networks", wherein insights related to determining bias may be stored for use in training the neural network). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add determining the one or more bias metrics at the one or more stages of the machine learning model; and storing the one or more bias metrics for the machine learning model, to the feature attribution and training models Raj-Coleman-Cataltepe Raj, with how insights related to determining training bias may be stored and used, as taught by Ibrahim. One would have been motivated to combine Ibrahim with Raj-Coleman-Cataltepe and would have had a reasonable expectation of success as the combination creates more transparent predictions, which can help ensure neural network decisions are generated for the right reasons (Ibrahim, para. [0011]). 
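Neither claim 11 nor the Ibrahim mapping names a specific bias metric. As a purely illustrative sketch of "determining and storing bias metrics at a pipeline stage," the following uses demographic parity difference (gap in positive-prediction rate between groups), one common choice; the metric, group labels, and stage name are all assumptions:

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rate between the best- and worst-treated
    groups — one simple bias metric a training job could compute per stage."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Binary predictions at one pipeline stage, with a group label per sample.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Store the metric keyed by pipeline stage, per the claim language.
metrics = {"post_training": demographic_parity_difference(preds, groups)}
print(metrics)  # {'post_training': 0.5}
```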
Claim 19: Claim 19 is the machine claim reciting similar limitations to Claim 11 and is rejected for similar reasons. Claim(s) 3, 8, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220067460 "Raj", in light of U.S. Patent Application Publication No. 20190113973 "Coleman", and in further light of U.S. Patent Application Publication No. 20190279102 "Cataltepe", as applied to Claim 5 above, and further in light of U.S. Patent Application Publication No. 20200183035 "Liu". Claim 8: Raj, Coleman, and Cataltepe teach the method of claim 5. While Raj further teaches wherein the machine learning system comprises a cluster of nodes, and wherein determining the feature attribution of the trained machine learning model (i.e. para. [0035], "server 120 may be implemented as a plurality of servers that function collectively as a distributed database", wherein the distributed group of servers may be used to perform the feature attribution as variance characterization functions between data sets retrieved from data sources as part of the machine learning process), Raj may not explicitly teach: wherein the machine learning system comprises a cluster of nodes, and wherein determining the feature attribution of the trained machine learning model as part of the machine learning pipeline comprises: dividing, by a leader node of the cluster of nodes, an input data set into different portions; assigning, by the leader node, the different portions to different worker nodes of the cluster of nodes; calculating, by the different worker nodes, respective feature attribution measurements for the different portions of the input data set using a respective copy of the reference data set at the worker nodes; combining, by the leader node, the respective feature attribution measurements into the feature attribution for the trained machine learning model. 
However, Liu teaches wherein the machine learning system comprises a cluster of nodes, and wherein determining the feature attribution of the trained machine learning model as part of the machine learning pipeline (i.e. para. [0061], Fig. 2, "Method 200 may be designed in a distributed, asynchronous workflow", wherein the BRI for a machine learning pipeline encompasses the workflow for augmenting a ML system in Fig. 2) comprises: dividing, by a leader node of the cluster of nodes, an input data set into different portions (i.e. para. [0061], During ML training, the master node may load the original seismic volume image and labels into its main memory (at block 212). The master node may randomly extract some 3-D patches (at block 270). The master node may put the patches into a queue system.); assigning, by the leader node, the different portions to different worker nodes of the cluster of nodes (i.e. para. [0061], Each of the patches in the queue may be dispatched to one of the worker nodes to perform transformation); calculating, by the different worker nodes, respective feature attribution measurements for the different portions of the input data set using a respective copy of the reference data set at the worker nodes (i.e. para. [0061], Once a worker node receives the assigned patches, it runs the transformation routine, and returns the augmented data to the queuing system of the master node); combining, by the leader node, the respective feature attribution measurements into the feature attribution for the trained machine learning model (i.e. para. [0061], runs the transformation routine, and returns the augmented data to the queuing system of the master node. The master node may then use the augmented data for ML trainings). 
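The leader/worker flow recited in claim 8 and mapped to Liu above — the leader divides the input, workers each compute attribution for their portion against their own copy of the reference data set, and the leader combines the partial results — can be sketched as follows. The toy attribution measure (per-feature absolute deviation from the reference mean, standing in for a real method such as SHAP) and all function names are illustrative assumptions, not the claimed or cited implementations.

```python
# Illustrative sketch of the claimed leader/worker flow: leader splits the
# input data set, each worker computes a per-portion attribution against
# its own copy of the reference set, and the leader combines the partials.
# The toy attribution (|x - reference mean| per feature) is a stand-in for
# a real method such as SHAP; all names here are assumptions.

def split(data, n_workers):
    """Leader: divide the input data set into roughly equal portions."""
    k, r = divmod(len(data), n_workers)
    out, i = [], 0
    for w in range(n_workers):
        size = k + (1 if w < r else 0)
        out.append(data[i:i + size])
        i += size
    return out

def worker_attribution(portion, reference):
    """Worker: per-feature attribution sums vs. its reference copy."""
    n_feat = len(reference[0])
    ref_mean = [sum(row[j] for row in reference) / len(reference)
                for j in range(n_feat)]
    return [sum(abs(row[j] - ref_mean[j]) for row in portion)
            for j in range(n_feat)]

def combine(partials, total_rows):
    """Leader: merge worker partial sums into one value per feature."""
    n_feat = len(partials[0])
    return [sum(p[j] for p in partials) / total_rows
            for j in range(n_feat)]

reference = [[0.0, 1.0], [2.0, 1.0]]           # reference mean: [1.0, 1.0]
inputs = [[1.0, 0.0], [3.0, 2.0], [1.0, 1.0], [0.0, 3.0]]
portions = split(inputs, 2)                    # leader divides
partials = [worker_attribution(p, reference)   # workers calculate
            for p in portions]
attribution = combine(partials, len(inputs))   # leader combines
```

Because each worker holds its own copy of the reference data set, no worker needs to communicate with another during the calculation step; only the leader performs the fan-out and the final reduction, which matches the claimed division of roles.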
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add dividing, by a leader node of the cluster of nodes, an input data set into different portions; assigning, by the leader node, the different portions to different worker nodes of the cluster of nodes; calculating, by the different worker nodes, respective feature attribution measurements for the different portions of the input data set using a respective copy of the reference data set at the worker nodes; combining, by the leader node, the respective feature attribution measurements into the feature attribution for the trained machine learning model, to the feature attribution techniques that may be executed across the distributed servers of Raj-Coleman-Cataltepe, with how a leader node may calculate how to distribute a machine learning task among a cluster of nodes, as taught by Liu. One would have been motivated to combine Liu with Raj-Coleman-Cataltepe and would have had a reasonable expectation of success, as the combination results in a distributed computing system that may be utilized to improve the efficiency of a ML system through higher throughput with parallel input/output (Liu, para. [0055]). Claim 3: Claim 3 is the system claim reciting similar limitations to Claim 8 and is rejected for similar reasons. Claim 17: Claim 17 is the machine claim reciting similar limitations to Claim 8 and is rejected for similar reasons. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 20200280578 "Hearty" teaches in para. 
[0091] that the OAO feature drift hardening process 1200 includes the following software components: 1) a data ingestor 1202, 2) a labeled data ingestor 1204, 3) a feature calculator 1206, 4) an OAO drift monitoring component 1208, 5) an alerting component 1210, 6) an OAO drift weighting component 1212, 7) an OAO model set 1214, 8) an OAO model retraining component 1216, 9) longer-term models 1218, 10) shorter-term models 1220, 11) an OAO model selector 1222, 12) a score resolution component 1224, 13) an OAO model evaluation and monitoring component 1226, and 14) an OAO model output visualization component 1228 that are executable by the fraud prevention server 1135 or across one or more servers for distributed processing. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN whose telephone number is (571)272-7433. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /D.T./ Examiner, Art Unit 2145 /CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Nov 27, 2020
Application Filed
Mar 21, 2024
Non-Final Rejection — §103
Jul 26, 2024
Response Filed
Oct 02, 2024
Final Rejection — §103
Jan 08, 2025
Response after Non-Final Action
Feb 28, 2025
Request for Continued Examination
Mar 05, 2025
Response after Non-Final Action
Mar 20, 2025
Non-Final Rejection — §103
Jun 26, 2025
Response Filed
Sep 23, 2025
Final Rejection — §103
Dec 01, 2025
Response after Non-Final Action
Dec 29, 2025
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443336
INTERACTIVE USER INTERFACE FOR DYNAMICALLY UPDATING DATA AND DATA ANALYSIS AND QUERY PROCESSING
2y 5m to grant Granted Oct 14, 2025
Patent 12282863
METHOD AND SYSTEM OF USER IDENTIFICATION BY A SEQUENCE OF OPENED USER INTERFACE WINDOWS
2y 5m to grant Granted Apr 22, 2025
Patent 12182378
METHODS AND SYSTEMS FOR OBJECT SELECTION
2y 5m to grant Granted Dec 31, 2024
Patent 12111956
Methods and Systems for Access Controlled Spaces for Data Analytics and Visualization
2y 5m to grant Granted Oct 08, 2024
Patent 12032809
Computer System and Method for Creating, Assigning, and Interacting with Action Items Related to a Collaborative Task
2y 5m to grant Granted Jul 09, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
31%
Grant Probability
46%
With Interview (+15.8%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 98 resolved cases by this examiner. Grant probability derived from career allow rate.
