DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the Applicant's response filed on 02/27/2026. Claims 1-4, 6-11 and 13-20 are pending in the case. This action is Final.
Applicant Response
In Applicant’s response dated 02/27/2026, Applicant amended Claims 1, 10, 13, 15, 16 and 20; cancelled Claims 5 and 12; and argued against all objections and rejections previously set forth in the Office Action dated 12/30/2025.
Claim Interpretation
4. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
5. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
6. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
a model creator engine … (claim 1)
a rules engine of the processor … (claim 1)
a monitoring engine to: monitor … (claim 1)
a model proxy engine to: receive … (claim 2)
a ground-truth engine to: collect … (claim 3)
a metrics engine … (claim 4)
a validation engine … (claim 11)
a self-healing reconciliation loop engine to: perform … (claim 13)
a control plane reconciliation loop engine … (claim 15)
a self-healing strategy engine to: execute … (claim 15)
a rules engine … (claims 16 and 20)
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The Applicant’s specification, in Figure 1, Figure 3, Figure 4A, Figure 5A and Figure 5B and the corresponding paragraphs, provides explicit algorithmic support. The specification describes, for each engine, the specific processing steps, data flows, decision logic and component interactions that accomplish the stated functions.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claims 1-4, 6-11 and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed towards an abstract idea, without significantly more.
Step 1
According to the first part of the analysis, in the instant case, claims 1-4, 6-11 and 13-20 fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Regarding independent Claims 1, 16 and 20,
Step 2A, Prong 1: Does the claim recite a judicial exception?
Claim 1 recites the steps of:
a model creator to generate, based on a pre-defined template and a pre-defined input, a configuration artifact pertaining to expected attributes of a Machine Learning (ML) model to be created (This step involves preparing templates, applying rules and generating data, which is understood to be a mental process / a certain method of organizing human activity (planning/organizing the ML lifecycle) grouping of abstract ideas);
wherein the pre-defined input includes at least one of a pre-stored information and an input received from a user, wherein the configuration artifact corresponds to at least one of an automated training pipeline, the model attributes, a data source and a release pipeline, and wherein the data source is a cloud-based computing platform (This step involves data organization and database schema definition, which is understood to be a mental process grouping of abstract ideas);
wherein the pre-defined template facilitates incorporation of a set of rules including at least one of monitoring rules and validation rules for the ML model, wherein the set of rules are stored in a rules engine of the processor (This step involves rule-based decision-making, which is understood to be a mental process grouping of abstract ideas); and
generate, based on the configuration artifact, the ML model that is trained and validated for performing prediction or inference, wherein the ML model is stored in a model registry that stores a plurality of ML models, each ML model being provided with a version tag indicative of a specific version of the ML model (This step involves training and validating an ML model, which falls within the mathematical concepts / mental processes grouping of abstract ideas); and
a monitoring engine to: monitor, based on the monitoring rules stored in the rules engine, a model attribute associated with each ML model to identify an event associated with alteration in the model attribute from a pre-defined value, wherein the identified event pertains to a drift indicative of deterioration in an expected performance of the prediction or the inference of the ML model, wherein the drift pertains to at least one of a model drift, a data drift and a concept drift (This step involves monitoring and detecting a drift using statistical and mathematical analysis, which falls within the mathematical concepts / mental processes grouping of abstract ideas); and
wherein, based on the identified event, the system executes an automated response including at least one of an alert and a remedial action to mitigate the event (This step involves monitoring and detecting a drift using statistical and mathematical analysis, which falls within the mathematical concepts / mental processes grouping of abstract ideas); and
further, the processor is coupled with: a database comprising a serverless configuration database, and a machine learning operations (MLOps) database, wherein the serverless configuration database stores the configuration artifact and facilitates information related to an expected state pertaining to configuration of components of the ML model and the MLOps database facilitates information related to an actual state pertaining to the components of the ML model (This step involves storing and record-keeping of the expected and actual states in a database, which is understood to be a mental process grouping of abstract ideas).
The claim recites a judicial exception: a mathematical concept and a mental process applied in the field of machine learning. A person can collect data and evaluate options using metrics or criteria based on the log data, which falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A, Prong 2: Does the claim recite additional elements? Do those additional elements, individually and in combination, integrate the judicial exception into a practical application?
Further, the claim does not recite any additional elements that could integrate this abstract idea into a practical application, because the additional elements recited consist of:
“a configuration artifact pertaining to expected attributes of a Machine Learning (ML) model to be created … the set of rules are stored in a rules engine of the processor … monitor, based on the monitoring rules stored in the rules engine, … executes an automated response including at least one of an alert and a remedial action to mitigate the event,” which are recited as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)); and
A method comprising a processor and a storage medium storing instructions which, when executed by the processor, cause the system to perform the claimed steps (claim 16), which are generic computer components on which to implement the abstract idea (see MPEP 2106.05(f));
A computer-readable storage medium storing instructions which, when executed by a computing device, cause the computing device to perform the claimed acts (claim 20), which is a generic computer component on which to implement the abstract idea (see MPEP 2106.05(f));
the pre-defined input includes at least one of a pre-stored information and an input received from a user, wherein the configuration artifact corresponds to at least one of an automated training pipeline, the model attributes, a data source and a release pipeline, and wherein the data source is a cloud-based computing platform, which is recited as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)).
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions in combination with limitations that generally link the use of the judicial exception to a particular technological environment or field of use and are implemented to perform the abstract idea discussed above. Thus, the claim is directed to the abstract idea.
Step 2B: Do the additional elements, considered individually and in combination, amount to significantly more than the judicial exception?
No. As shown above with respect to integration of the abstract idea into a practical application, the additional elements of:
“reactions of an environment to actions taken by an agent,” i.e., an agent is a software policy in reinforcement learning, and using an ML model to make a decision, which is not a meaningful technical improvement, recited as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)); and
A method comprising a processor and a storage medium storing instructions which, when executed by the processor, cause the system to perform the claimed steps (claim 16), which are generic computer components on which to implement the abstract idea (see MPEP 2106.05(f));
A computer-readable storage medium storing instructions which, when executed by a computing device, cause the computing device to perform the claimed acts (claim 20), which is a generic computer component on which to implement the abstract idea (see MPEP 2106.05(f));
the pre-defined input includes at least one of a pre-stored information and an input received from a user, wherein the configuration artifact corresponds to at least one of an automated training pipeline, the model attributes, a data source and a release pipeline, and wherein the data source is a cloud-based computing platform, which is recited as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)).
The additional elements disclosed above, alone or in combination, do not amount to significantly more than the judicial exception, as they are generic computer functions in combination with limitations that generally link the use of the judicial exception to a particular technological environment or field of use and are implemented to perform the abstract idea discussed above.
Thus, the claims are not patent eligible. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept, and neither can insignificant extra-solution activity. All of these additional elements, as generically claimed, are considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea.
Thus, these independent claims are not patent eligible.
The dependent claims respectively recite a judicial exception in limitations of: “wherein the processor comprises: a model proxy engine to: receive, from at least one user application, through an application programming interface (API), a request for performing the prediction or the inference in a consumption stage, wherein the consumption stage pertains to a given timeline in which the version of the ML model is available for performing the prediction or the inference; and identify, from the plurality of ML models in the model registry, the ML model suitable to perform the prediction or the inference, wherein the ML model is identified based on at least one of a requirement of the prediction or the inference and a traffic information for consumption of the ML model, and wherein the model proxy engine directs the request to a model endpoint pertaining to the ML model for facilitating the prediction or the inference.” (claim 2), “a ground-truth engine to: collect, from the user application, through an application programming interface (API), a set of inferences pertaining to ground truth of the prediction or the inference performed by the ML models, wherein the set of inferences include a pre-defined number of inferences collected over a definite period of time in the consumption stage.” (claim 3), “a metrics engine to: evaluate the set of inferences received from the ground truth engine to obtain a set of metrics including at least one of model metrics pertaining to the ML model and data metrics pertaining to the pre-stored inputs associated with the ML model, wherein the set of metrics include indicators to facilitate tracking performance of the plurality of ML models.” (claim 4), “wherein the identified event comprises at least one of a variance in state of components of the ML model, increase in execution time of the ML model beyond a predefined limit, modification in compliance requirements of the system, modification in policy requirements of the system,
modification in the version of the ML model, deviation in the model attributes beyond a pre-defined threshold, and observed deviation in data associated with the ML model.” (claim 6), “wherein the remedial action includes execution of at least one of an automated training pipeline, automated update of the configuration artifact, an automatic version rollback and an automated release pipeline of the ML model, wherein the automated release pipeline includes execution of release of the ML model based on the configuration artifact corresponding to the release pipeline.” (claim 7), “wherein the release pipeline pertains to at least one of a basic rolling update release pipeline and a champion challenger release pipeline.” (claim 8), “wherein the champion challenger release pipeline evaluates performance of a challenger corresponding to a new version of the ML model in comparison to a champion corresponding to an existing version of the ML model, wherein the champion challenger release pipeline is activated by creation of a variant model endpoint corresponding to the new version for collecting inference for the new version, wherein the new version is released if the performance of the new version exceeds the performance of the existing version, and wherein the new version is not released if the performance of the new version fails to exceed the performance of the existing version.” (claim 9), “wherein the ML model is trained based on the configuration artifact corresponding to the automated training pipeline.” (claim 10), “wherein the ML model is validated after training based on the validation rules such that the output of the validation engine is transmitted to the rules engine, wherein if the validation rules are satisfied, the ML model is registered for subsequent step of release, and wherein if the validation rules are not satisfied, the system facilitates a notification/recommendation indicating a requirement for correction or confirmation of changes in at least
one of the validation rules or dataset for performing re-training of the ML model based on another configuration artifact.” (claim 11), “wherein the processor comprises: a self-healing reconciliation loop engine to: perform an assessment loop to identify the variance in states of components pertaining to the ML model by assessing a difference between the expected state and the actual state pertaining to configuration of components associated with the version of the ML model, wherein the absence of the variance in states is indicative of an expected functioning of the model, and the presence of variance in state is indicative of a factor pertaining to at least one of the model drift and introduction of the new version of the ML model; and a self-healing strategy engine to: execute, upon identification of the difference in the expected state and the actual state, an automated self-healing action to facilitate mitigation of the difference in the expected state and the actual state.” (claim 13), “wherein the automated self-healing action corresponds to an action related to at least one of deletion of a component, addition of a component, and update of an existing component of the ML model.” (claim 14), “wherein the actions comprise recommending electronic items, the reactions indicate whether users selected the recommended electronic items, and the context comprises information about the users.” (claim 16), “receiving, by the processor, from at least one user application, through an application programming interface (API), a request for performing the prediction or the inference in a consumption stage, wherein the consumption stage pertains to a given timeline in which the ML model is available for performing the prediction or the inference; and identifying, by the processor, from the plurality of ML models in the model registry, the ML model suitable to perform the prediction or the inference, wherein the ML model is identified based on at least one of a requirement of
the prediction or the inference and a traffic information for consumption of the ML model, and wherein the request is directed to a model endpoint pertaining to the ML model for facilitating the prediction or the inference.” (claim 17), “performing, by the processor, an assessment loop to identify the variance in states of components of the ML model by assessing a difference between the expected state and the actual state associated with the version of the ML model, wherein the absence of the variance in states is indicative of an expected functioning of the model, and the presence of variance in state is indicative of a factor pertaining to at least one of the model drift and introduction of the new version of the ML model; and executing, by the processor, upon identification of the difference in the expected state and the actual state, an automated self-healing action to facilitate mitigation of the difference in the expected state and the actual state.” (claim 18), “assessing, by the processor, the configuration artifact pertaining to the specific version of the model, upon detection of a new configuration artifact pertaining to the new version of the ML model, updating automatically, by the processor, the configuration database to include the new configuration artifact.” (claim 19).
These additional limitations in claims 2-4, 6-11, 13-15 and 17-19 also constitute concepts performed in the human mind, which fall within the “Mental Processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application. The additional elements of “computer readable medium comprising: computer program code” in claims 2-4, 6-11, 13-15 and 17-19 amount to no more than adding insignificant extra-solution activity related to data gathering, data input, or data transmittal. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer readable medium comprising computer program code are again insignificant extra-solution activity steps that cannot provide an inventive concept. All of these additional elements, as generically claimed, are considered well-understood, routine, and conventional.
Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, all of the dependent claims are also not patent eligible.
Examiner Comments
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-4, 6-11 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Maughan (US 20170330109 A1, published 2017-11-16) in view of WALTERS (US 20200012900 A1, published 2020-01-09), and further in view of Schierz (US 20240394595 A1, published 2024-11-28).
Regarding independent Claim 1,
Maughan teaches a system comprising: a processor (see Maughan: Fig.1, [0038], “system 100 for predictive analytic.”) comprising:
a model creator (see Maughan: Fig.11, [0165], “predictive analytics factory”) to:
generate (see Maughan: Fig.11, [0167], “The function evaluator module 512 evaluates 1114 the combined 1110 learned functions and generates additional evaluation metadata.”), based on a pre-defined template and a pre-defined input (see Maughan: Fig.11, [0166], “The data receiver module 402 receives 1104 training data for the new ensemble, as initialization data or the like (i.e. a pre-defined input). The function generator module 404 generates 1106 a plurality of learned functions (i.e. a pre-defined template) based on the received 1104 training data, from different predictive analytics classes.”), a configuration artifact pertaining to expected attributes of a Machine Learning (ML) model to be created (see Maughan: Fig.11, [0167], “The function evaluator module 512 evaluates 1114 the combined 1110 learned functions and generates additional evaluation metadata.”, i.e. a configuration artifact), wherein the pre-defined input includes at least one of a pre-stored information (see Maughan: Fig.11, [0166], “If the interface module 602 receives 1102 a new ensemble request, the data receiver module 402 receives 1104 training data for the new ensemble, as initialization data or the like.”), and an input received from a user (see Maughan: Fig.9, [0163], “If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data.”), wherein the configuration artifact corresponds to at least one of an automated training pipeline, the model attributes, a data source and a release pipeline (see Maughan: Fig.11, [0166], “The function evaluator module 512 evaluates 1108 the plurality of generated 1106 learned functions to generate evaluation metadata (i.e. the model attributes).”,[… ]
wherein the pre-defined template facilitates incorporation of a set of rules including at least one of monitoring rules and validation rules for the ML model (see Maughan: Fig.1, [0038], “the predictive analytics module 102 provides a predictive analytics framework allowing clients 104 to request predictive ensembles or other machine learning, to make analysis requests, and/or to receive predictive results, such as a classification, a confidence metric, an inferred function, a regression function, an answer, a prediction, a recognized pattern, a rule, a recommendation, or other results.”), wherein the set of rules are stored in a rules engine of the processor (see Maughan: Fig.5, [0145], “The metadata library 514, in various embodiments, may store or maintain evaluation metadata in a database format, as one or more flat files, as one or more lookup tables, as a sequential log or log file, or as one or more other data structures. In one embodiment, the metadata library 514 may index evaluation metadata by learned function, by feature, by instance, by training data, by test data, by effectiveness, and/or by another category or attribute and may provide query access to the indexed evaluation metadata.”); and
generate, based on the configuration artifact, the ML model that is trained and validated for performing prediction or inference (see Maughan: Fig.8, [0162], “prediction module 202 generates 802 one or more predictive results by applying a model to workload data. In a certain embodiment, the model may include one or more learned functions based on training data. A drift detection module 204 detects 804 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 retrains 806 the model based on updated training data, and the method 800 ends.”), wherein the ML model is stored in a model registry that stores a plurality of ML models, each ML model being provided with a version tag indicative of a specific version of the ML model (see Maughan: Fig.9, [0163], “A drift detection module 204 detects 904 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 prompts 906 a user to select whether to use new training data or modified training data for retraining the module. If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data. If the method 900 determines 908 that the user has selected modified training data, the retrain module 302 and/or the prediction module 202 modify 912 existing training data. The retrain module 302 retrains 914 the model using the new or modified training data. The retrain module 302 presents a predictive result from the original model and a modified predictive result from the retrained model to a user, and prompts 916 the user to select the original model or the retrained model, and the method 900 ends.”)
Maughan does not teach a processor comprising a monitoring engine to:
wherein the data source is a cloud-based computing platform
monitor, based on the monitoring rules stored in the rules engine, a model attribute associated with each ML model to identify an event associated with alteration in the model attribute from a pre-defined value, wherein the identified event pertains to a drift indicative of deterioration in an expected performance of the prediction or the inference of the ML model, wherein the drift pertains to at least one of a model drift, a data drift and a concept drift; and wherein, based on the identified event, the system executes an automated response including at least one of an alert and a remedial action to mitigate the event; and
further the processor is coupled with: a database comprising a serverless configuration database, and a machine learning operations (MLOps) database, wherein the serverless configuration database stores the configuration artifact and facilitates information related to an expected state pertaining to configuration of components of the ML model and the MLOps database facilitates information related to an actual state pertaining to the components of the ML model.
However, WALTERS teaches a processor comprising a monitoring engine (see WALTERS: Fig.1, [0045], “environment 100 can be configured to expose an interface for communication with other systems. Environment 100 can include computing resources 101, dataset generator 103, database 105, model optimizer 107, model storage 109, model curator 111, and interface 113.”), to:
the data source is a cloud-based computing platform (see WALTERS: Fig.1, [0042], “Database 105 can include one or more databases configured to store data for use by system 100. The databases can include cloud-based databases (e.g., AMAZON WEB SERVICES S3 buckets) or on-premises databases.”)
monitor, based on the monitoring rules stored in the rules engine, a model attribute associated with each ML model to identify an event associated with alteration in the model attribute from a pre-defined value (see WALTERS: Fig.1, [0045], “Model curator 111 can be configured to impose governance criteria on the use of data models. For example, model curator 111 can be configured to delete or control access to models that fail to meet accuracy criteria. As a further example, model curator 111 can be configured to limit the use of a model to a particular purpose, or by a particular entity or individual. In some aspects, model curator 11 can be configured to ensure that data model satisfies governance criteria before system 100 can process data using the data mod”), wherein the identified event pertains to a drift indicative of deterioration in an expected performance of the prediction or the inference of the ML model (see WALTERS: Fig.18, [0184], “data drift is detected. In some embodiments, detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data. In the embodiments, detecting data drift may be based on known statistical methods. For example, detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method.”), wherein the drift pertains to at least one of a model drift, a data drift and a concept drift (see WALTERS: Fig.18, [0184], “detecting a difference between predicted data and event data includes determining whether a difference between generated data and event data meets or exceeds a threshold difference. In some embodiments, detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. 
For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data.”) and wherein, based on the identified event, the system executes an automated response including at least one of an alert and a remedial action to mitigate the event (see WALTERS: Fig.18, [0185], “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data.”)
a database comprising a serverless configuration database (see WALTERS: Fig.1, [0152], “In some embodiments, system 1600 comprises a serverless architecture and the development instance may be an ephemeral container instance or computing instance, … Termination or assignment may be based on performance of the development instance or the performance of another development instance. In this way, the serverless architecture may more efficiently allocate resources during hyperparameter tuning than traditional, server-based architectures.”)
Because both Maughan and WALTERS are in the same/similar field of endeavor of machine learning model lifecycle management, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Maughan to include a cloud-based platform system that monitors ML model attributes by detecting and identifying model/data drift and performing automated corrective actions by retraining the deployed ML models, as taught by WALTERS. After the modification of Maughan, the ML model framework that creates, configures, validates, and versions ML models by generating a configuration artifact and validation rules can incorporate the mechanism of detecting, identifying, and mitigating model/data drift in a deployed model, as taught by WALTERS. One would be motivated to make such a combination in order to improve the reliability, correctness, and performance of deployed machine learning models and the scalability of RL model training (see WALTERS [0011]).
Maughan and WALTERS do not teach the processor is coupled with a machine learning operations (MLOps) database, wherein the serverless configuration database stores the configuration artifact and facilitates information related to an expected state pertaining to configuration of components of the ML model and the MLOps database facilitates information related to an actual state pertaining to the components of the ML model.
However, Schierz teaches a processor coupled with a machine learning operations (MLOps) database (see Schierz: Fig.1, [0093], “The model package 102 can be managed and controlled by an MLOps controller 120, which acts as an interface between a prediction environment (e.g., including the model package 102) and an internal or MLOps environment.”), wherein the serverless configuration database stores the configuration artifact and facilitates information related to an expected state pertaining to configuration of components of the ML model (see Schierz: Fig.1, [0119], “Tables 11-13 include examples of a few of the experiments created to test bucketing strategies for different two-sample scenarios. Sample 1 in these examples is a feature from training data and Sample 2 is a feature from scoring data. Each scenario is labeled with whether drift should be expected for that test.”) and the MLOps database facilitates information related to an actual state pertaining to the components of the ML model (see Schierz: Fig.12C, [0158], “As predictions are made and actual values are received, the predictions and actual values can be stored in a database and/or analyzed to determine model accuracy. For example, referring to FIG. 12D, the systems and methods can review previous predictions 1240 and corresponding actual values 1242 to assess model performance. When the model predictions 1240 deviate considerably from the actual values 1242, the systems and methods can determine that unexpected drift has been encountered.”)
Because Maughan, WALTERS, and Schierz are in the same/similar field of endeavor of machine learning model lifecycle management, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Maughan to include a cloud-based platform system that monitors ML model attributes by detecting and identifying model/data drift and performing automated corrective actions by facilitating information related to an expected state and an actual state, as taught by Schierz. One would be motivated to make such a combination in order to improve the reliability, correctness, and performance of deployed machine learning models and the scalability of RL model training.
Regarding Claim 2,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the system comprising a model proxy engine to:
receive, from at least one user application, through an application programming interface (API), a request for performing the prediction or the inference in a consumption stage, wherein the consumption stage pertains to a given timeline in which the version of the ML model is available for performing the prediction or the inference model (see Maughan: Fig.1, [0047], “The predictive analytics module 102 may service predictive analytics requests to clients 104 locally, executing on the same host computing device as the predictive analytics module 102, by providing an API to clients 104, receiving function calls from clients 104, providing a hardware command interface to clients 104, or otherwise providing a local channel 108 to clients 104.”) and
identify, from the plurality of ML models in the model registry, the ML model suitable to perform the prediction or the inference, wherein the ML model is identified based on at least one of a requirement of the prediction or the inference and a traffic information for consumption of the ML model, and wherein the model proxy engine directs the request to a model endpoint pertaining to the ML model for facilitating the prediction or the inference (see Maughan: Fig.9, [0163], “A drift detection module 204 detects 904 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 prompts 906 a user to select whether to use new training data or modified training data for retraining the module. If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data. If the method 900 determines 908 that the user has selected modified training data, the retrain module 302 and/or the prediction module 202 modify 912 existing training data. The retrain module 302 retrains 914 the model using the new or modified training data.”)
Regarding Claim 3,
Maughan, WALTERS and Schierz teach all the limitations of Claim 2. Maughan further teaches the system wherein:
a ground-truth engine to: collect, from the user application, through an application programming interface (API), a set of inferences pertaining to ground truth of the prediction or the inference performed by the ML models, wherein the set of inferences include a pre-defined number of inferences collected over a definite period of time in the consumption stage (see Maughan: Fig.2, [0067], “The predict-time fix module 206, in one embodiment, is configured to modify at least one predictive result from the prediction module 202 in response to the drift detection module 204 detecting a drift phenomenon. In one embodiment, the predict-time fix module 206 may modify a predictive result by changing one or more portions of the predictive result. For example, in one embodiment, the drift detection module 204 may detect an out-of-range value in the workload data, and the predict-time fix module 206 may modify a predictive result by reapplying the model of the prediction module 202 to modified workload data, in which the out-of-range value is omitted.”)
Regarding Claim 4,
Maughan, WALTERS and Schierz teach all the limitations of Claim 3. WALTERS further teaches the system wherein:
a metrics engine to: evaluate the set of inferences received from the ground truth engine to obtain a set of metrics including at least one of model metrics pertaining to the ML model and data metrics pertaining to the pre-stored inputs associated with the ML model, wherein the set of metrics include indicators to facilitate tracking performance of the plurality of ML models (see WALTERS: Fig.2, [0051], “model optimizer 107 can be configured to determine one or more values for similarity and/or predictive accuracy metrics, as described herein. In some embodiments, based on values for similarity metrics, model optimizer 107 can be configured to assign a category to the synthetic data model.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Maughan such that the system evaluates the set of inferences received from the ground truth engine to obtain a set of metrics, as taught by WALTERS. One would be motivated to make such a combination in order to improve the reliability, correctness, and performance of deployed machine learning models and the scalability of RL model training (see WALTERS [0011]).
Regarding Claim 6,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. WALTERS further teaches the system wherein:
the identified event comprises at least one of a variance in state of components of the ML model, increase in execution time of the ML model beyond a predefined limit, modification in compliance requirements of the system, modification in policy requirements of the system, modification in the version of the ML model, deviation in the model attributes beyond a pre-defined threshold, and observed deviation in data associated with the ML model (see WALTERS: Fig.1, [0109], “The similarity metric can depend on one or more of the above criteria. For example, the similarity metric can depend on one or more of (1) a covariance of the output data and a covariance of the normalized reference dataset, (2) a univariate value distribution of an element of the synthetic dataset, (3) a univariate value distribution of an element of the normalized reference dataset, (4) a number of elements of the synthetic dataset that match elements of the reference dataset, (5) a number of elements of the synthetic dataset that are similar to elements of the normalized reference dataset, (6) a distance measure between each row of the synthetic dataset (or a subset of the rows of the synthetic dataset) and each row of the normalized reference dataset (or a subset of the rows of the normalized reference dataset), (7) a frequency of duplicate elements in the synthetic dataset and the normalized reference dataset, (8) a relative prevalence of rare values in the synthetic and normalized reference dataset, and (9) differences in the ratios between the synthetic dataset and the normalized reference dataset.”) See the motivation to combine in Claim 1.
Regarding Claim 7,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the system wherein:
the remedial action includes execution of at least one of an automated training pipeline, automated update of the configuration artifact, an automatic version rollback and an automated release pipeline of the ML model, wherein the automated release pipeline includes execution of release of the ML model based on the configuration artifact corresponding to the release pipeline (see Maughan: Fig.3, [0077], “The retrain module 302 may retrain a new ensemble, portion thereof, or other machine learning using one or more outcomes received from a user or other client 104, as described above. The retrain module 302, in certain embodiments, may periodically request outcome data and/or training data from a user or other client 104 regardless of whether drift has occurred, so that the retrain module 302 may automatically retrain machine learning in response to the drift detection module 204 detecting drift, without additional input from the user or other client 104.”)
Regarding Claim 8,
Maughan, WALTERS and Schierz teach all the limitations of Claim 7. Maughan further teaches the system wherein:
the release pipeline pertains to at least one of a basic rolling update release pipeline and a champion challenger release pipeline (see Maughan: Fig.11, [0168], “The synthesizer module 510 synthesizes 1124 the selected 1122 learned functions into synthesized learned functions 524. The function evaluator module 512 evaluates 1126 the synthesized learned functions 524 to generate a synthesized metadata rule set 522. The synthesizer module 510 organizes 1128 the synthesized 1124 learned functions 524 and the synthesized metadata rule set 522 into a predictive ensemble 504.”)
Regarding Claim 9,
Maughan, WALTERS and Schierz teach all the limitations of Claim 8. Maughan further teaches the system wherein:
the champion challenger release pipeline evaluates performance of a challenger corresponding to a new version of the ML model in comparison to a champion corresponding to an existing version of the ML model (see Maughan: Fig.11, [0167], “The function evaluator module 512 evaluates 1120 the extended 1116 learned functions. The function selector module 516 selects 1122 at least two learned functions, such as the generated 1106 learned functions, the combined 1110 learned functions, the extended 1116 learned functions, or the like, based on evaluation metadata from one or more of the evaluations 1108, 1114, 1120.”), wherein the champion challenger release pipeline is activated by creation of a variant model endpoint corresponding to the new version for collecting inference for the new version, wherein the new version is released if the performance of the new version exceeds the performance of the existing version, and wherein the new version is not released if the performance of the new version fails to exceeds the performance of the existing version (see Maughan: Fig.11, [0169], “If the interface module 602 receives 1102 an analysis request, the data receiver module 402 receives 1132 workload data associated with the analysis request. The orchestration module 520 directs 1134 the workload data through a predictive ensemble 504 associated with the received 1102 analysis request to produce a result, such as a classification, a confidence metric, an inferred function, a regression function, an answer, a prediction, a recognized pattern, a rule, a recommendation, and/or another result. The interface module 602 provides 1130 the produced result to the requesting client 104, and the interface module 602 continues to monitor 1102 requests.”)
Regarding Claim 10,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the system wherein:
the ML model is trained based on the configuration artifact corresponding to the automated training pipeline (see Maughan: Fig.11, [0168], “The synthesizer module 510 synthesizes 1124 the selected 1122 learned functions into synthesized learned functions 524. The function evaluator module 512 evaluates 1126 the synthesized learned functions 524 to generate a synthesized metadata rule set 522. The synthesizer module 510 organizes 1128 the synthesized 1124 learned functions 524 and the synthesized metadata rule set 522 into a predictive ensemble 504.”)
Regarding Claim 11,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the system wherein:
the ML model is validated after training based on the validation rules such that the output of the validation engine is transmitted to the rules engine, wherein if the validation rules are satisfied, the ML model is registered for subsequent step of release, and wherein if the validation rules are not satisfied, the system facilitates a notification /recommendation indicating a requirement for correction or confirmation of changes in at least one of the validation rules or dataset for performing re-training of the ML model based on another configuration artifact (see Maughan: Fig.9, [0163], “A drift detection module 204 detects 904 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 prompts 906 a user to select whether to use new training data or modified training data for retraining the module. If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data. If the method 900 determines 908 that the user has selected modified training data, the retrain module 302 and/or the prediction module 202 modify 912 existing training data. The retrain module 302 retrains 914 the model using the new or modified training data. The retrain module 302 presents a predictive result from the original model and a modified predictive result from the retrained model to a user, and prompts 916 the user to select the original model or the retrained model, and the method 900 ends.”)
Regarding Claim 13,
Maughan, WALTERS and Schierz teach all the limitations of Claim 12. WALTERS further teaches the system wherein:
the processor comprises: a self-healing reconciliation loop engine to: perform an assessment loop to identify the variance in states of components pertaining to the ML model by assessing a difference between the expected state and the actual state pertaining to configuration of components associated with the version of the ML model, wherein the absence of the variance in states is indicative of an expected functioning of the model, and the presence of variance in state is indicative of a factor pertaining to at least one of the model drift and introduction of the new version of the ML model (see WALTERS: Fig.20, [0212], “data drift is detected based on a difference between the event data and the predicted data. For example, data drift may be detected if a difference meets or exceeds a threshold difference between predicted data to event data, consistent with disclosed embodiments. In some embodiments, model optimizer 107 may detect data drift in a manner consistent with the disclosed embodiments”); and
a self-healing strategy engine to: execute, upon identification of the difference in the expected state and the actual state, an automated self-healing action to facilitate mitigation of the difference in the expected state and the actual state (see WALTERS: Fig.18, [0185], “the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data.”)
See motivation to combine in Claim 1.
Regarding Claim 14,
Maughan, WALTERS and Schierz teach all the limitations of Claim 13. WALTERS further teaches the system wherein:
the automated self-healing action corresponds to an action related to at least one of deletion of a component, addition of a component, and update of an existing component of the ML model (see WALTERS: Fig.18, [0184], “detecting data drift is a based on a comparison of predicted data to event data to determine a difference between predicted data and event data. In the embodiments, detecting data drift may be based on known statistical methods. For example, detecting data drift at step 1812 may be based on at least one of a least squares error method, a regression method, a correlation method, or other known statistical method. In some embodiments, the difference is determined using at least one of a Mean Absolute Error, a Root Mean Squared Error, a percent good classification, or the like.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Maughan to include a system in which the automated self-healing action corresponds to an action related to at least one of deletion of a component, addition of a component, and update of an existing component of the ML model, as taught by WALTERS. One would be motivated to make such a combination in order to improve the reliability, correctness, and performance of deployed machine learning models and the scalability of RL model training (see WALTERS [0011]).
Regarding Claim 15,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the system wherein:
a control plane reconciliation loop engine to: assess the configuration artifact pertaining to the specific version of the model, wherein upon detection of a new configuration artifact pertaining to the new version of the ML model, the configuration database is automatically updated to include the new configuration artifact (see Maughan: Fig.9, [0163], “A drift detection module 204 detects 904 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 prompts 906 a user to select whether to use new training data or modified training data for retraining the module. If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data.”)
Regarding independent Claim 16,
Claim 16 is directed to a method and has similar claim limitations as claim 1 and is rejected with the same rationale.
Regarding Claim 17,
Claim 17 is directed to a method and has similar claim limitations as claim 2 and is rejected with the same rationale.
Regarding Claim 18,
Maughan, WALTERS and Schierz teach all the limitations of Claim 16. WALTERS further teaches the method wherein:
performing, by the processor, an assessment loop to identify the variance in states of components of the ML model by assessing a difference between the expected state and the actual state associated with the version of the ML model, wherein the absence of the variance in states is indicative of an expected functioning of the model, and the presence of variance in state is indicative of a factor pertaining to at least one of the model drift and introduction of the new version of the ML model (see WALTERS: Fig.20, [0212], “ data drift is detected based on a difference between the event data and the predicted data. For example, data drift may be detected if a difference meets or exceeds a threshold difference between predicted data to event data, consistent with disclosed embodiments. In some embodiments, model optimizer 107 may detect data drift in a manner consistent with the disclosed embodiments.”); and
executing, by the processor, upon identification of the difference in the expected state and the actual state, an automated self-healing action to facilitate mitigation of the difference in the expected state and the actual state (see WALTERS: Fig.18, [0184], “At step 1814, the model may be corrected (updated) based on detected drift. Correcting the model may include model training and/or hyperparameter tuning, consistent with disclosed embodiments. Correcting the model may be involve model training or hyperparameter tuning using the received event data and/or other data.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Maughan to include performing, by the processor, an assessment loop to identify the variance in states of components of the ML model and executing an automated self-healing action, as taught by WALTERS. One would be motivated to make such a combination in order to improve the reliability, correctness, and performance of deployed machine learning models and the scalability of RL model training (see WALTERS [0011]).
Regarding Claim 19,
Maughan, WALTERS and Schierz teach all the limitations of Claim 1. Maughan further teaches the method wherein:
assessing, by the processor, the configuration artifact pertaining to the specific version of the model, upon detection of a new configuration artifact pertaining to the new version of the ML model, updating automatically, by the processor, the configuration database to include the new configuration artifact (see Maughan: Fig.9, [0163], “A drift detection module 204 detects 904 a drift phenomenon relating to the one or more predictive results. In response to detecting the drift phenomenon, a retrain module 302 prompts 906 a user to select whether to use new training data or modified training data for retraining the module. If the method 900 determines 908 that the user has selected new data, the retrain module 302 and/or the prediction module 202 receive 910 the new training data.”)
Regarding independent Claim 20,
Claim 20 is directed to a non-transitory computer readable medium and has similar claim limitations as claim 1 and is rejected with the same rationale.
Response to Arguments
Claim Rejections - 35 U.S.C. § 101,
The 35 U.S.C. 101 rejection of the claims as being directed to non-statutory subject matter has been maintained after consideration of Applicant's amendments. Examiner notes that Applicant's amendments, including the recitation of a serverless configuration database, an MLOps database, and expected versus actual state information, merely involve organizing, storing, and comparing information, which are abstract mental processes. The additional elements are implemented using generic computing components performing well-understood, routine, and conventional functions that do not improve computer functionality or machine learning technology. Therefore, the 35 U.S.C. 101 rejection is sustained.
Claim Rejections - 35 U.S.C. § 112(f),
The rejection of the claims as being indefinite under 35 U.S.C. § 112(f) has been sustained in view of Applicant's amendment.
Claim Rejections - 35 U.S.C. § 103,
Applicant’s arguments with respect to claim amendments have been considered but are moot considering the new combination of references being used in the current rejection. The new combination of references was necessitated by Applicant’s claim amendments. Therefore, the claims are rejected under the new combination of references as indicated above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
PGPUB NUMBER / INVENTOR-INFORMATION / TITLE / DESCRIPTION
US 20220405611 A1
SINGLA; Kushal
Title: SYSTEMS AND METHODS FOR VALIDATING FORECASTING MACHINE LEARNING MODELS
Description: A machine learning model that generates forecasts based on univariate time series data may be referred to as a forecasting machine learning model or a forecasting model. Forecasting machine learning models include classical models, such as linear models and exponential smoothing models, and more sophisticated models, such as decision tree models, multilayer perceptron model, long short-term memory (LSTM) network models, and/or the like.
US 10296848 B1
Mars; Jason
Title: Systems And Method for Automatically Configuring Machine Learning Models
Description: The inventions herein relate generally to the machine learning field, and more specifically to a new and useful system and method for intelligently training machine learning models in the machine learning field.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU whose telephone number is (571) 272-3003. The examiner can normally be reached M-F, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Zelalem Shalu/Examiner, Art Unit 2145
/CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145