Prosecution Insights
Last updated: April 19, 2026
Application No. 18/147,967

TECHNIQUES FOR EVALUATING AN EFFECT OF CHANGES TO MACHINE LEARNING MODELS

Status: Final Rejection (§101, §103)
Filed: Dec 29, 2022
Examiner: MARU, MATIYAS T
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Equifax Inc.
OA Round: 2 (Final)

Forecast
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 6m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 23 granted / 40 resolved; +2.5% vs TC avg)
Interview Lift: +12.5% allow rate on resolved cases with an interview vs. without (moderate, roughly +12% lift)
Typical Timeline: 4y 6m avg prosecution; 39 applications currently pending
Career History: 79 total applications across all art units

Statute-Specific Performance

Allow rate by rejection statute (vs. Tech Center average estimate; based on career data from 40 resolved cases):

§101: 35.9% (-4.1% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 1.9% (-38.1% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
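Figures like the career allow rate, the interview lift, and the per-statute deltas above are simple ratios over an examiner's resolved cases. The sketch below shows one way such numbers could be derived; the record fields and helper names are hypothetical and do not reflect any particular analytics provider's actual schema or pipeline.

```python
# A minimal sketch, assuming hypothetical case records; not the actual
# pipeline behind this report.
from dataclasses import dataclass, field

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool
    statutes: set[str] = field(default_factory=set)  # e.g. {"101", "103"}

def allow_rate(cases):
    """Career allow rate: granted / resolved (e.g., 23 / 40 rounds to 58%)."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allow-rate difference: cases resolved with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

def statute_allow_rate(cases, statute):
    """Allow rate over the subset of cases that faced a given rejection statute."""
    return allow_rate([c for c in cases if statute in c.statutes])
```

A delta such as "+10.9% vs TC avg" would then be the difference between `statute_allow_rate(cases, "103")` and the corresponding Tech Center estimate.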

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/16/2025 ("Arguments/Remarks") have been fully considered, but they are not persuasive.

Argument 1 (page 11). Applicant contends: "i. The Claims, Like Those in SRI International, are not Practically Capable of Being Performed in the Human Mind and are Therefore Eligible. For one, the claims cannot practically be performed in the human mind and therefore do not recite a mental process as alleged. The M.P.E.P. cites to SRI Int'l, Inc. v. Cisco Systems, Inc…"

Regarding the above argument, the Examiner respectfully disagrees with Applicant's assertion that the rejected claims cannot practically be performed in the human mind. The Examiner notes that the rejected claims recite a mental process, such as:

determining one or more performance metrics based on comparing the first output data to the second output data, which recites an abstract idea, namely the mental process of determining one or more performance metrics by comparing the first output data to the second output data; and

classifying, based on the one or more performance metrics, the second machine learning model with a classification, which recites an abstract idea, namely the mental process of grouping a machine learning model based on the performance metric (see the Claim Rejections - 35 USC § 101 section below).

Thus, the rejected claims recite limitations that can practically be performed in the human mind and therefore constitute a mental process. In contrast to SRI Int'l, Inc. v. Cisco Systems, Inc., where the human mind was not equipped to perform the claimed operations, the limitations of the present application can reasonably be carried out mentally or with pen and paper. Accordingly, the claims recite a mental process and are therefore directed to an abstract idea.

Argument 2 (pages 12-13). Applicant contends: "… The operations for optimizing the performance of machine learning models, as reflected in the claims, persuaded the Appeals Review Panel that the claims constituted an improvement to how the machine learning model itself operates and were therefore eligible at Step 2A Prong Two. The reasoning under SRI and Enfish, and as applied by the Appeals Review Panel's decision in the '567 Rehearing, is instructive here. The current application describes specific techniques for addressing problems in the software arts - particularly determining changes in machine learning models during transfers between platforms and updating performance of machine learning models in response to failed performance. Applicant's detailed description states that '[c]ertain aspects described herein enable dynamically modifying model parameters to achieve a desirable implementation of the update model. Such dynamic modification of model performance can reduce network downtime by eliminating a need for operator intervention to change model parameters.' Moreover, Applicant's specification describes improvements to network performance 'because computing environment processes associated with executing a model for which a negative effect of a change is determined can be paused…'"

Regarding the above argument, the Examiner notes that the cited disclosure does not provide the required technical detail demonstrating how the modification of the parameter results in the asserted improvement.
While the specification generally states that model parameters may be dynamically modified to improve performance or reduce network downtime, it does not explain the specific mechanism, algorithm, or technical procedure by which the parameter is modified, or how such modification concretely produces the claimed improvement. The disclosure must be evaluated to determine whether it provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement: "The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement." MPEP 2106.04(d)(1). Here, "reduce network downtime" is such a bare assertion.

Argument 3 (page 13). Applicant contends: "The described technical benefits are analogous to those in SRI and in the '567 Rehearing. Like in SRI, the claims recite techniques for improving network performance, specifically by pausing operations when failed performance is detected. Like in the claims at issue in the '567 Rehearing, the claims recite automated techniques for updating model performance. The claims therefore recite improvements to computer technology including to computer networks as in SRI and to machine learning model systems as in the '567 Rehearing, and the claims are amended to render the claims' focus towards these improvements more explicit. Because the specification here asserts improvements in computer technology, and the improvements are reflected in the claims, the claims are eligible as they are directed to technical improvements in computer technology in a manner similar to the claims in SRI and the '567 Rehearing…"

Regarding the above argument, the Examiner notes that the rejected and amended claim limitations do not recite the required technical details or mechanism for how they improve network performance. Rather, the claims merely recite pausing operations in response to detected failed performance, without providing any technical details regarding how such pausing improves the functioning of the network itself. As such, the claims describe a result-oriented action rather than a specific technological improvement to network performance.

Applicant's arguments (page 15) with respect to the amended claims have been considered but are moot, because those arguments are directed to amended claim limitations that were not previously examined. Rejections addressing the amended claim limitations are set forth in the current Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
In Step 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the claims recite a process that, under the broadest reasonable interpretation, falls within one of the statutory categories (processes).

In Step 2A Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a mental process but for the recitation of generic computer components.

Regarding claim 1:

determining one or more performance metrics based on comparing the first output data to the second output data (under the broadest reasonable interpretation, this recites an abstract idea, a mental process: determining one or more performance metrics by comparing the first output data to the second output data; see MPEP 2106.04);

classifying, based on the one or more performance metrics, the second machine learning model with a classification (a mental process: grouping a machine learning model based on the performance metric; see MPEP 2106.04);

causing the second machine learning model to be modified (a mental process: evaluating a classification and, if the model is failing, updating or adjusting the model; see MPEP 2106.04);

responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data by comparing the first data repository to the second data repository (a mental process: evaluating a classification result, reviewing the data contained in two repositories, and comparing the information to determine whether the migration was successful; see MPEP 2106.04).

If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental processes grouping. Accordingly, the claim recites an abstract idea.

In Step 2A Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate the judicial exception into a practical application, as evaluated below.

The preamble is deemed insufficient to transform the judicial exception into a patentable invention because the preamble generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).

(I) executing a first machine learning model on a first computing platform using input data accessed from a first data repository located on the first computing platform to generate first output data (deemed insufficient to transform the judicial exception into a patentable invention because the limitation does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f));
(II) migrating repository data from the first data repository to a second data repository located on a second computing platform (deemed insufficient because the limitation is directed to mere data gathering, which is insignificant extra-solution activity; see MPEP 2106.05(g));

(III) executing a second machine learning model on the second computing platform using the input data accessed from the second data repository to generate second output data (deemed insufficient because the limitation does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f));

(IV) wherein the second machine learning model is generated by migrating the first machine learning model to the second computing platform (deemed insufficient because the limitation simply links the judicial exception to a field of use and/or technology environment; see MPEP 2106.05(h));

(V) wherein the classification comprises a passing classification or a failing classification (deemed insufficient because the limitation simply links the judicial exception to a field of use and/or technology environment; see MPEP 2106.05(h));

(VI) responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model (deemed insufficient because the limitation does not amount to more than a recitation of the words "apply it" (or an equivalent); see MPEP 2106.05(f)).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception:

Regarding limitations (I), (III), and (VI): these recite mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, and are deemed insufficient because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f).

Regarding limitation (II): this additional element, considered extra/post-solution activity as analyzed above, is activity that is well-understood, routine, and conventional. Specifically, the courts have recognized receiving or transmitting data over a network as a well-understood, routine, and conventional computer function, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).

Regarding limitations (IV) and (V): these additional elements are deemed insufficient to transform the judicial exception into a patentable invention because they generally link the judicial exception to the technology environment; see MPEP 2106.05(h).

As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Regarding claim 8: the rest of the limitations recite similar subject matter as claim 1, so they are rejected under the same rationale. The additional elements are:

(I) a system comprising: a processing device; and (deemed insufficient to transform the judicial exception into a patentable invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, amounting to adding the words "apply it" (or an equivalent) to the judicial exception; see MPEP 2106.05(f));

(II) a memory device in which instructions executable by the processing device are stored for causing the processing device to perform operations comprising: (deemed insufficient for the same reasons; see MPEP 2106.05(f)).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception. Regarding limitations (I) and (II): these recite mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, and are deemed insufficient because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f).

Regarding claim 15: the rest of the limitations recite similar subject matter as claim 1, so they are rejected under the same rationale. The additional element is:

(I) a non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device to perform operations comprising: (deemed insufficient to transform the judicial exception into a patentable invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; see MPEP 2106.05(f)).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception. Regarding limitation (I): it recites mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, and is deemed insufficient because it generally applies a generic computer and/or process to the judicial exception; see MPEP 2106.05(f).
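To make the limitations easier to track through the analysis above and the art rejections below, the claim 1 operations reduce to a compare-classify-act pipeline. The following is a minimal sketch of that pipeline as the claim language recites it; the function names, metric choices, and threshold are invented for illustration and are not Applicant's implementation, nor are they drawn from the cited art.

```python
# Illustrative sketch of the claim 1 pipeline as recited in this action:
# compare outputs, derive metrics, classify pass/fail, then pause/modify or
# validate the migration. All names and thresholds are hypothetical.

def pause_access(model):
    print("access to second model paused")        # stand-in for gating live traffic

def request_modification(model):
    print("second model flagged for modification")

def evaluate_migrated_model(model_a, model_b, repo_a, repo_b, inputs, threshold=0.01):
    out_a = [model_a(x) for x in inputs]          # first output data (first platform)
    out_b = [model_b(x) for x in inputs]          # second output data (second platform)

    # "determining one or more performance metrics based on comparing
    #  the first output data to the second output data"
    diffs = [abs(a - b) for a, b in zip(out_a, out_b)]
    metrics = {"avg_score_change": sum(diffs) / len(diffs),
               "max_score_change": max(diffs)}

    # "classifying ... with a passing classification or a failing classification"
    passing = all(v <= threshold for v in metrics.values())

    if not passing:
        pause_access(model_b)                     # pausing access to the second model
        request_modification(model_b)             # causing the second model to be modified
    else:
        # "validating the data migration ... by comparing the first data
        #  repository to the second data repository"
        assert repo_a == repo_b, "migrated repository data does not match source"
    return metrics, passing

# Example: identical models and repositories pass and validate.
m, ok = evaluate_migrated_model(lambda x: x * 2, lambda x: x * 2,
                                {"rows": [1, 2]}, {"rows": [1, 2]},
                                inputs=[0.1, 0.5, 0.9])
print(m, ok)
```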
Regarding claim 2: the claim depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: wherein the second model has one or more parameters that are different from the first model. The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment; see MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 9 and 16 recite similar subject matter as claim 2, so they are rejected under the same rationale.

Regarding claim 3: the claim depends from claim 2 and fails to resolve the deficiencies identified above. The claim recites: wherein modifying the one or more parameters of the second model comprises: modifying one or more scoring rules of the first model (under the broadest reasonable interpretation, this recites an abstract idea, a mental process: changing the rules applied to a machine learning model; see MPEP 2106.04). Claims 10 and 17 recite similar subject matter as claim 3, so they are rejected under the same rationale.

Regarding claim 4: the claim depends from claim 3 and fails to resolve the deficiencies identified above. The claim recites: responsive to classifying the second model with the failing classification, pausing a data migration operation between the first platform and the second platform. This limitation is deemed insufficient to transform the judicial exception into a patentable invention because it is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, amounting to adding the words "apply it" (or an equivalent) to the judicial exception; see MPEP 2106.05(f). Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 11 and 18 recite similar subject matter as claim 4, so they are rejected under the same rationale.

Regarding claim 5: the claim depends from claim 1 and fails to resolve the deficiencies identified above. The claim recites: wherein the performance metrics comprise one or more of a difference count or difference percentage between the first output data and the second output data, a number or percentage of entities with scores that change between the first output data and the second output data, a minimum score change between the first output data and the second output data, a maximum score change between the first output data and the second output data, or an average score change between the first output data and the second output data. The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment; see MPEP 2106.05(h).
Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 12 and 19 recite similar subject matter as claim 5, so they are rejected under the same rationale.

Regarding claim 6: the claim depends from claim 1 and fails to resolve the deficiencies identified above. The claim recites:

for each of the set of determined performance metrics, compare the performance metric to a predefined criterion (under the broadest reasonable interpretation, this recites an abstract idea, a mental process: comparing each determined performance metric to a predefined value; see MPEP 2106.04);

responsive to determining that the performance metric meets the predefined criteria, assign the performance metric to a first category (a mental process: assigning a performance metric to a first category when it meets a predefined criterion; see MPEP 2106.04);

responsive to determining that the performance metric does not meet the predefined criteria, assign the performance metric to a second category (a mental process: assigning a performance metric to a second category when it does not meet a predefined criterion; see MPEP 2106.04);

wherein the first category comprises a pass designation and the second category comprises a fail designation (the recitation in this additional limitation simply links the judicial exception to a field of use and/or technology environment; see MPEP 2106.05(h)).

Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 13 and 20 recite similar subject matter as claim 6, so they are rejected under the same rationale.

Regarding claim 7: the claim depends from claim 1 and fails to resolve the deficiencies identified above. The claim recites: wherein classifying the first model comprises: assigning a category to the first model based on categories assigned to each performance metric of the set of determined performance metrics (a mental process: assigning a category to a machine learning model based on the categories already assigned to its performance metrics; see MPEP 2106.04). Claim 14 recites similar subject matter as claim 7, so it is rejected under the same rationale.

Claim Rejections - 35 USC § 103

Claims 1-3, 5-6, 8-10, 12-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Burg et al., Pub. No.: US20220067589A1, in view of He et al., Pub. No.: US20250021837A1, Akinapelli et al., Pub. No.: US20220147390A1, Singh et al., Pub. No.: US10331656B2, and Saillet et al., Pub. No.: US20220414401A1.

Regarding claim 1, Burg teaches: A method that includes one or more processing devices performing operations comprising: (Burg, "[0037] FIG. 1 is a schematic diagram showing hardware of an embedded system in the form of a board 11. The board 11 may, for example, be an Arduino® board or NXP® microcontroller.
It will be appreciated that such a board 11 may include further components that are not shown nor described. The board 11 includes a processor 12 [A method that includes one or more processing devices performing operations], and a data storage unit 13. A wireless communication module 14 is also provided to enable wireless communication. The wireless communication module 14 enables communication over Wi-Fi® and/or Bluetooth®. In other implementations, the wireless communication module 14 may allow communication over a mobile telecommunications network. The wireless communication module 14 is configured to allow transmission of data to and from the board 11.”) executing a first machine learning model on a first computing platform using input data [ ] to generate first output data; (Burg, “[0023] The electronic device may be configured to perform unsupervised learning. In such embodiments, executing the first machine learning model may generate first output values [executing a first machine learning model on a first computing platform using input data [ ] to generate first output data] and executing the second machine learning model may generates second output values. The method may comprise running the program on a plurality of sets of input data to generate a plurality of sets of first and second output values; analyzing the first and second output values to identify a property of each of the first and second output values, and selecting one of the first machine learning model and the second machine learning model based the identified properties. The method may further comprise analyzing the first and second output values to see if the property has deviated beyond a threshold amount from a desired performance of the machine models. The property may be an intra-class entropy and/or an extra-class entropy.”) accessed from a first data repository located on the first computing platform (Burg, “[0029] A second embodiment provides electronic device comprising a processing element and a data storage element [accessed from a first data repository located on the first computing platform], the storage element storing code that, when executed by the processing element, causes the electronic device to perform a method for testing machine learning models, the method comprising: the electronic device receiving a machine learning model update data package; partially or fully updating a first machine learning model to generate a second machine learning model using the machine learning model update data package; executing the program, whereby the program executes both the first machine learning model and the second machine learning model using a common set of input data; collecting outputs from the first machine learning model and the second machine learning model for analysis.”) executing a second machine learning model on the second computing platform using the input data accessed from the second data repository to generate second output data, (Burg, “[0023] The electronic device may be configured to perform unsupervised learning. In such embodiments, executing the first machine learning model may generate first output values and executing the second machine learning model may generates second output values [executing a second machine learning model on a second computing platform using the input data [ ] to generate second output data]. 
The method may comprise running the program on a plurality of sets of input data [accessed from the second data repository] to generate a plurality of sets of first and second output values; analyzing the first and second output values to identify a property of each of the first and second output values, and selecting one of the first machine learning model and the second machine learning model based the identified properties. The method may further comprise analyzing the first and second output values to see if the property has deviated beyond a threshold amount from a desired performance of the machine models. The property may be an intra-class entropy and/or an extra-class entropy.")

determining one or more performance metrics based on comparing the first output data to the second output data; (Burg, "[0041] A/B testing as performed by the application 26 is a method of testing two different machine learning models, in this case machine learning model A 32 and machine learning model B 33, using the same input data. By collecting statistics on the performance of the two machines learning models, the performance of the machine learning models can be evaluated against each other [determining one or more performance metrics based on comparing the first output data to the second output data]. This allows the better performing machine learning model to be selected for use in further inference processing.")

Burg does not teach:
wherein the second machine learning model is generated by migrating the first machine learning model to the second computing platform;
classifying, based on the one or more performance metrics, the second machine learning model with a classification, wherein the classification comprises a passing classification or a failing classification;
causing the second machine learning model to be modified;
migrating repository data from the first data repository to a second data repository located on a second computing platform;
responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model;
responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data by comparing the first data repository to the second data repository.

He teaches: wherein the second machine learning model is generated by migrating the first machine learning model to the second computing platform; (He, "[0033] After testing the model on the training platform, the model can be migrated [by migrating the first machine learning model to the second computing platform] (i.e.: the testing model is the first machine learning model) to an inference platform to process live data. After migrating the model to the integration platform (e.g., by downloading a new model package) [wherein the second machine learning model is generated] (i.e.: the second machine learning model is the newly downloaded model in the inference platform), the model may not have optimal data processing characteristics (e.g., latency, throughput). This may be due to the testing of the model being done at the training platform using training data and training configurations/devices, not the live data and devices provided in the inference platform.")

He and Burg are related to the same field of endeavor (i.e.: machine learning model training).
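Burg's quoted paragraphs describe A/B testing two models on a common input set and compiling test metrics from the paired outputs. As a minimal sketch of that comparison pattern, assuming invented names and an agreement metric that is not taken from Burg:

```python
def ab_test(model_a, model_b, dataset):
    """Run both models on the same inputs and compile a simple test metric."""
    results_a = [model_a(x) for x in dataset]
    results_b = [model_b(x) for x in dataset]
    agreement = sum(a == b for a, b in zip(results_a, results_b)) / len(dataset)
    return {"agreement": agreement, "n": len(dataset)}

# The better-performing model would then be selected for further inference.
print(ab_test(lambda x: x > 0.5, lambda x: x > 0.6, [0.2, 0.55, 0.7, 0.9]))
# {'agreement': 0.75, 'n': 4}
```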
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of He with the teachings of Burg to add a search-based optimization process that explores parameter combinations to identify optimal settings for improving model performance (He, Abstract).

Burg in view of He does not teach:
classifying, based on the one or more performance metrics, the second machine learning model with a classification, wherein the classification comprises a passing classification or a failing classification;
causing the second machine learning model to be modified;
migrating repository data from the first data repository to a second data repository located on a second computing platform;
responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model;
responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data by comparing the first data repository to the second data repository.

Akinapelli teaches: classifying, based on the one or more performance metrics, the second machine learning model with a classification, wherein the classification comprises a passing classification or a failing classification; and (Akinapelli, "[0026] In some embodiments, an auto-scaling engine (e.g., operating on a manager node or computing device of a computing cluster) may monitor performance of a computing cluster with respect to one or more performance metrics and/or performance thresholds. By way of example, each computing node in the computing cluster may be configured to perform operations to report one or more performance metrics (e.g., a number of pending queries, a number of pending tasks, a latency measurement, processing utilization, memory utilization, etc.). These performance metrics can be collected and stored in a centralized data store accessible to the auto-scaling engine. The auto-scaling engine may monitor these performance metrics to identify when the performance of the computing cluster [classifying, based on the one or more performance metrics, the second machine learning model with a classification] fails to meet a performance requirement (e.g., a performance metric falls below or exceeds one or more predefined performance thresholds) [wherein the classification comprises a passing classification or a failing classification]. In response to detecting this scenario, the auto-scaling engine may adjust the number of computing nodes in the cluster. As a non-limiting example, if the performance metric indicates a latency of the system has exceeded a predefined performance requirement (e.g., a predefined performance requirement related to latency of task completion), the auto-scaling engine may perform operations to increase the number of computing nodes in the cluster in an effort to decrease the latency associated with task execution. As another example, if the performance metric indicates the number of idle computing nodes exceeds a predefined threshold (e.g., 1, 3, 4, 20, etc.), the auto-scaling engine may perform operations to reduce the number of computing nodes in the cluster.")

causing the second machine learning model to be modified (Akinapelli, "[0027] Initially, the adjustments [causing the second machine learning model to be modified] to the cluster due to the cluster's actual performance may be made in accordance with a predefined scheme.
By way of example, the system may be configured to increase or decrease the number of computing nodes in the cluster by a predefined default amount (e.g., 1, 5, 10, etc.) depending on the manner by which the cluster's actual performance fails a performance requirement (e.g., as identified from one or more performance metrics provided by any suitable number of computing nodes of the cluster). After an adjustment has been made (e.g., an increase is made), the performance metrics of the computing nodes may be monitored to identify changes in performance.")

Akinapelli, Burg and He are related to the same field of endeavor (i.e.: machine learning model training). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Akinapelli with the teachings of Burg and He to add prediction of future performance changes and guidance of further adjustments for improved efficiency (Akinapelli, Abstract).

Burg in view of He and Akinapelli does not teach:
migrating repository data from the first data repository to a second data repository located on a second computing platform;
responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model;
responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data by comparing the first data repository to the second data repository.

Singh teaches: migrating repository data from the first data repository to a second data repository located on a second computing platform; (Singh, col. 1, lines 51-56: "One embodiment includes a method that may be practiced in a computing environment. The method includes acts for migrating entity data from a first data store to a second data store [migrating repository data from the first data repository to a second data repository located on a second computing platform] and validating the migration. The method comprises migrating entity data for a particular entity from a first data store to a second data store using a first data protocol.")

responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data (Singh, col. 2, lines 53-65: "When data is migrated over from a first source system to a second destination system, typical validation includes generating a list of items which could not be converted between the two systems. Embodiments described herein are configured to validate and identify inconsistencies between data at the source system and the destination system, using a separate pipeline which is different from the pipeline used to migrate the data from the source system to the destination system in the first instance. Particular attention may be placed on validating migrations [responsive to classifying the second machine learning model with the passing classification: validating the data migration of the repository data] for individual entity data as each entity is migrated from a source system to a destination system. Such entities may be, for example, users, folders, directories, or other entities.")

by comparing the first data repository to the second data repository. (Singh, col. 3, lines 28-38: "In particular, a user's data is identified individually, and migrated based on the data belonging to the particular user.
Once the data has been migrated, operations may be performed, again at a user account level, such as by performing operations on a user's mailbox at both the source system and the destination system to obtain the same data from each system for the user. The data from the different systems is then compared to determine if there are any differences. If migration of data is sufficiently error free, then the particular data for the particular user can be released to the user such that the user can obtain the data on the destination system.")

Singh, Burg, He and Akinapelli are related to the same field of endeavor (i.e.: machine learning model training). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Singh with the teachings of Burg, He and Akinapelli to add a validation or comparison mechanism across repositories to confirm successful migration (Singh, Abstract).

Burg in view of He, Akinapelli and Singh does not teach: responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model.

Saillet teaches: responsive to classifying the second machine learning model with the failing classification: pausing access to the second machine learning model; (Saillet, "[0049] In some examples, once controller 110 determines that model 132 has less than a threshold amount of accuracy, controller 110 may remove model 132 from production environment 130 (or otherwise stop model 132 from providing "live" predictions of production data 134) [pausing access to the second machine learning model]. In other examples, controller 110 may have a first version of model 132 continue providing predictions of production data 134 while a copy of model 132 is being retrained with supplemental training dataset 124 (312), where this copy of model 132 will replace the production version of model 132 upon the completion of retraining. Where controller 110 keeps model 132 in production environment 130 upon detecting that model 132 is failing an accuracy threshold, controller 110 [responsive to classifying the second machine learning model with the failing classification] may take a remedial action to avoid inaccurate predictions causing problems of production data 134.")

Saillet, Burg, He, Akinapelli and Singh are related to the same field of endeavor (i.e.: machine learning model training). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Saillet with the teachings of Burg, He, Akinapelli and Singh to add performance monitoring of the deployed model to detect the model's prediction accuracy against a predetermined threshold (Saillet, Abstract).

Regarding claim 8, Burg teaches: A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to perform operations comprising: (Burg, "[0037] FIG. 1 is a schematic diagram showing hardware of an embedded system in the form of a board 11. The board 11 may, for example, be an Arduino® board or NXP® microcontroller. It will be appreciated that such a board 11 may include further components that are not shown nor described.
The board 11 includes a processor 12 [a processing device], and a data storage unit 13 [and a memory device in which instructions executable by the processing device are stored for causing the processing device to perform operations comprising]. A wireless communication module 14 is also provided to enable wireless communication. The wireless communication module 14 enables communication over Wi-Fi® and/or Bluetooth®. In other implementations, the wireless communication module 14 may allow communication over a mobile telecommunications network. The wireless communication module 14 is configured to allow transmission of data to and from the board 11.")

The rest of the limitations are analogous to claim 1, so they are rejected under a similar rationale.

Regarding claim 15, Burg teaches: A non-transitory computer-readable storage medium having program code that is executable by a processor device to cause a computing device to perform operations comprising: (Burg, "[0005] According to a second aspect there is provided an electronic device with a processing element and a data storage element, the storage element containing code that [A non-transitory computer-readable storage medium having program code], when executed by the processing element, causes the electronic device to perform a method for testing machine learning models, the method comprising: the electronic device receiving a machine learning model update data package; partially or fully updating a first machine learning model to generate a second machine learning model using the machine learning model update data package; executing the program, [that is executable by a processor device to cause a computing device to perform operations] whereby the program executes both the first machine learning model and the second machine learning model using a common set of input data; collecting outputs from the first machine learning model and the second machine learning model for analysis.")

The rest of the limitations are analogous to claim 1, so they are rejected under a similar rationale.

Regarding claim 2, Burg in view of He, Akinapelli, Singh and Saillet teach the method of claim 1. He further teaches: wherein the second model has one or more parameters that are different from the first model. (He, "[0035] The present embodiments relate to onboarding a model from a training platform [that are different from the first model] (i.e.: the testing model is the first machine learning model) to an inference platform and selecting parameters of the model to optimize performance of the model [wherein the second model has one or more parameters] (i.e.: the inference model is not a direct copy of the training model; some parameters, like thresholds or feature combinations, are modified or optimized for better performance). For example, the onboarding of the model to the inference platform can be based on a series of interactions between a model onboarding systems at the training platform and at the inference platform. An optimization process can include a searching-based process to derive optimal settings for the model.
For example, a Simulated Annealing Heuristic Searching algorithm can be executed to simulate feature combinations of the model and identify an optimal combination of settings of the model for increased model performance.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of He with the teachings of Burg, Akinapelli, Singh and Saillet for the same reasons disclosed for claim 1. Claims 9 and 16 recite analogous limitations as claim 2, so they are rejected under the same rationale.

Regarding claim 3, Burg in view of He, Akinapelli, Singh and Saillet teach the method of claim 2. He further teaches: wherein modifying the one or more parameters of the second model comprises modifying one or more scoring rules of the first model. (He, "[0043] At 125, the model can be updated based on the derived accuracy. For example, one or more parameters [wherein modifying the one or more parameters of the second model] (i.e.: parameters of the inference model are updated to improve performance) relating to the performance of the model can be updated to improve the performance of the model. Example parameters include weights in a neural network or thresholds used in a decision tree [comprises modifying one or more scoring rules of the first model] (i.e.: scoring rules, decision boundaries).")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of He with the teachings of Burg, Akinapelli, Singh and Saillet for the same reasons disclosed for claim 1. Claims 10 and 17 recite analogous limitations as claim 3, so they are rejected under the same rationale.

Regarding claim 5, Burg in view of He, Akinapelli, Singh and Saillet teach the method of claim 1. Burg further teaches: wherein the performance metrics comprise: one or more of a difference count or difference percentage between the first output data and the second output data, a number or percentage of entities with scores that change between the first output data and the second output data, a minimum score change between the first output data and the second output data, a maximum score change between the first output data and the second output data, or an average score change between the first output data and the second output data. (Burg, "[0048] FIG. 6 is a schematic diagram showing the application for A/B testing after the update. The A/B testing code 31, when run, causes input data to be processed by machine learning model Mi-1 and machine learning model Mi. The two machine learning models 32 and 33 may be executed sequentially, or in parallel by the board 11. The results, in the form of node activations, from running both model A 32 and model B 33 are then compiled into test metrics 34, which provide a measure for the performance of both models.
These test metrics 34 may be a set of performance metrics, a set of features, mathematical expressions, measures of entropy, or any other suitable metrics [wherein the performance metrics comprise: one or more of a difference count or difference percentage between the first output data and the second output data, a number or percentage of entities with scores that change between the first output data and the second output data, a minimum score change between the first output data and the second output data, a maximum score change between the first output data and the second output data, or an average score change between the first output data and the second output data]. The test metrics could be an accuracy measurement e.g. 82% accuracy, a measure of false positives and/or a measure of false negatives.")

Claims 12 and 19 recite analogous limitations as claim 5, so they are rejected under the same rationale.

Regarding claim 6, Burg in view of He, Akinapelli, Singh and Saillet teach the method of claim 1. Burg further teaches: further comprising: for each of the set of determined performance metrics, compare the performance metric to a predefined criterion; (Burg, "[0026] The program may be configured to send a request to receive an updated model if the first entropy and second entropy do not meet a predetermined criteria. The predetermined criteria may based on a deviation of the value of the first entropy and/or second entropy from a threshold value associated with the machine learning models [for each of the set of determined performance metrics, compare the performance metric to a predefined criterion].")

Akinapelli further teaches: responsive to determining that the performance metric meets the predefined criteria, assign the performance metric to a first category; and (Akinapelli, "[0027] …If the adjust results in the performance of the cluster approaching the performance requirement, an additional adjustment of a similar nature (e.g., adding an additional set of nodes) may be performed. Once the auto-scaling engine detects the cluster is meeting the performance requirement [responsive to determining that the performance metric meets the predefined criteria, assign the performance metric to a first category] (e.g., a latency metric is below a predefined latency threshold while a number of idle computing nodes is less than a predefined idle threshold), the auto-scaling engine may store any suitable data applicable to the time period between when the performance degradation was identified and when the performance degradation was rectified. Said another way, any performance metric(s) (e.g., latency metrics, a number of pending tasks, a number of active queries, etc.) and/or cluster metadata (e.g., number of manager nodes, number of worker nodes, central processing units of the worker/manager nodes, memory allocations of the worker/manager nodes, etc.)
collected within a threshold time period before or after a degradation was detected, or an adjustment was made in response to the detected degradation, may be stored for subsequent use (e.g., as an instance of training data) with an indicator that indicates that the adjustments were successful or unsuccessful.”) responsive to determining that the performance metric does not meet the predefined criteria, assign the performance metric to a second category, (Akinapelli, “[0026] …The auto-scaling engine may monitor these performance metrics to identify when the performance of the computing cluster fails to meet a performance requirement (e.g., a performance metric falls below or exceeds one or more predefined performance thresholds) [responsive to determining that the performance metric does not meet the predefined criteria, assign the performance metric to a second category]. In response to detecting this scenario, the auto-scaling engine may adjust the number of computing nodes in the cluster. As a non-limiting example, if the performance metric indicates a latency of the system has exceeded a predefined performance requirement (e.g., a predefined performance requirement related to latency of task completion), the auto-scaling engine may perform operations to increase the number of computing nodes in the cluster in an effort to decrease the latency associated with task execution. As another example, if the performance metric indicates the number of idle computing nodes exceeds a predefined threshold (e.g., 1, 3, 4, 20, etc.), the auto-scaling engine may perform operations to reduce the number of computing nodes in the cluster.”) wherein the first category comprises a pass designation (Akinapelli, “[0027] …If the adjust results in the performance of the cluster approaching the performance requirement, an additional adjustment of a similar nature (e.g., adding an additional set of nodes) may be performed. Once the auto-scaling engine detects the cluster is meeting the performance requirement [wherein the first category comprises a pass designation] (e.g., a latency metric is below a predefined latency threshold while a number of idle computing nodes is less than a predefined idle threshold), the auto-scaling engine may store any suitable data applicable to the time period between when the performance degradation was identified and when the performance degradation was rectified. Said another way, any performance metric(s) (e.g., latency metrics, a number of pending tasks, a number of active queries, etc.) and/or cluster metadata (e.g., number of manager nodes, number of worker nodes, central processing units of the worker/manager nodes, memory allocations of the worker/manager nodes, etc.) collected within a threshold time period before or after a degradation was detected, or an adjustment was made in response to the detected degradation, may be stored for subsequent use (e.g., as an instance of training data) with an indicator that indicates that the adjustments were successful or unsuccessful.”) and the second category comprises a fail designation. (Akinapelli, “[0026] ...These performance metrics can be collected and stored in a centralized data store accessible to the auto-scaling engine. 
The auto-scaling engine may monitor these performance metrics to identify when the performance of the computing cluster fails to meet a performance requirement (e.g., a performance metric falls below or exceeds one or more predefined performance thresholds) [the second category comprises a fail designation]. In response to detecting this scenario, the auto-scaling engine may adjust the number of computing nodes in the cluster. As a non-limiting example, if the performance metric indicates a latency of the system has exceeded a predefined performance requirement (e.g., a predefined performance requirement related to latency of task completion), the auto-scaling engine may perform operations to increase the number of computing nodes in the cluster in an effort to decrease the latency associated with task execution. As another example, if the performance metric indicates the number of idle computing nodes exceeds a predefined threshold (e.g., 1, 3, 4, 20, etc.), the auto-scaling engine may perform operations to reduce the number of computing nodes in the cluster.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Akinapelli with the teachings of Burg, He, Singh and Saillet for the same reasons disclosed for claim 1. Claims 13 and 20 recite analogous limitations as claim 6, so they are rejected under the same rationale.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Burg in view of He, Akinapelli, Singh, Saillet, and further in view of Obst, Pub. No.: US20160359746A1.

Regarding claim 4, Burg in view of He, Akinapelli, Singh and Saillet teach the method of claim 3. Burg in view of He, Akinapelli, Singh and Saillet does not teach: further comprising: responsive to classifying the second model with the failing classification, pausing a data migration operation between the first platform and the second platform.

Obst teaches: further comprising: responsive to classifying the second model with the failing classification, pausing a data migration operation between the first platform and the second platform (Obst, "[0008] A method for detecting and suppressing slow data migration across WAN connections is claimed. The method includes first detecting a current data migration between a first source of the source system. The current data migration of the first source is evaluated based on a threshold that corresponds to a pre-defined performance level for the data migration that is acceptable. In situations where the data migration for the first source is below the acceptable performance level threshold [responsive to classifying the second model with the failing classification], a Migration Manager terminates the current data migration [pausing a data migration operation between the first platform and the second platform] and reschedules the data migration of the first source for a future time.")

Obst, Burg, He, Akinapelli, Singh and Saillet are related to the same field of endeavor (i.e.: machine learning model training). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Obst with the teachings of Burg, He, Akinapelli, Singh and Saillet to add a mechanism whereby, if migration performance falls below the threshold, a manager component terminates and reschedules the migration to maintain efficiency and system reliability (Obst, Abstract).
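Claims 5 and 6, as recited above, spell out concrete score-change metrics and a per-metric pass/fail categorization. The following is a minimal sketch of those recitations, with threshold values invented for illustration; nothing here is taken from Applicant's specification or the cited art.

```python
def score_change_metrics(first_scores, second_scores):
    """Claim 5's recited metrics over paired first/second output scores."""
    changes = [b - a for a, b in zip(first_scores, second_scores)]
    changed = [abs(c) for c in changes if c != 0]
    return {
        "difference_count": len(changed),
        "difference_percentage": len(changed) / len(changes),
        "min_score_change": min(changed, default=0.0),
        "max_score_change": max(changed, default=0.0),
        "avg_score_change": sum(abs(c) for c in changes) / len(changes),
    }

def categorize(metrics, criteria):
    """Claim 6: assign each metric a pass or fail designation per its criterion."""
    return {name: ("pass" if value <= criteria[name] else "fail")
            for name, value in metrics.items()}

m = score_change_metrics([600, 640, 710], [600, 652, 705])
print(categorize(m, {"difference_count": 3, "difference_percentage": 0.5,
                     "min_score_change": 20, "max_score_change": 15,
                     "avg_score_change": 10}))
# {'difference_count': 'pass', 'difference_percentage': 'fail', ...}
```

An overall model-level classification, of the kind claim 7 recites, could then aggregate these per-metric designations.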
Claims 11 and 18 recite analogous limitations as claim 4 and are rejected under the same rationale.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Burg in view of He, Akinapelli, Singh, and Saillet, and further in view of Tapia et al., Pub. No. US20220067573A1.

Regarding claim 7, Burg in view of He, Akinapelli, Singh, and Saillet teaches the method of claim 1, but does not teach: wherein classifying the first model comprises: assigning a category to the first model based on categories assigned to each performance metric of the set of determined performance metrics.

Tapia teaches this limitation (Tapia, “[0015] An ML model optimization system that monitors the performance of a model deployed to an external system and replaces the deployed model with another model selected from a plurality of models when there is a deterioration in the performance of the deployed model is disclosed. In an example, the external system can be a production system that is in use for one or more automated tasks as opposed to a testing system that is merely used to determine the performance level of different components. The model optimization system monitors the performance of the deployed model and performances of at least a top K models selected from the plurality of models by accessing different model metrics. The model metrics can include static ML metrics, in-production metrics, and category-wise metrics [assigning a category to the first model]. The static metrics can include performance indicators of the plurality of models that are derived from training data used to train the plurality of models [based on categories assigned to each performance metric of the set of determined performance metrics]. The in-production metrics can be obtained based on human corrections provided to the model output that is produced when the external system is online and in-production mode. In an example, the top K models are selected or shortlisted based on the in-production metrics wherein K is a natural number and K=1, 2, 3, etc. The category-wise metrics include performance indicators of the models with respect to a specific category.”)

Tapia, Burg, He, Akinapelli, Singh, and Saillet are related to the same field of endeavor (i.e., machine learning model training). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teaching of Tapia with the teachings of Burg, He, Akinapelli, Singh, and Saillet to add evaluation criteria that use performance metrics to identify the most optimized model for deployment (Tapia, Abstract).

Claim 14 recites analogous limitations as claim 7 and is rejected under the same rationale.
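Illustration (editorial, not part of the Office Action record): claim 7, as mapped to Tapia, rolls per-metric categories up into a single model-level category. A short sketch of one plausible roll-up rule follows; the "fail if any metric fails" aggregation is an assumption made for illustration, not Tapia's actual rule.

```python
# Assumed roll-up: a model's category is derived from the categories
# assigned to each of its performance metrics. The "fail if any metric
# fails" rule is illustrative only.
def classify_model(metric_categories: dict[str, str]) -> str:
    """Return a model-level category from per-metric categories."""
    return "fail" if "fail" in metric_categories.values() else "pass"

print(classify_model({"latency_ms": "pass", "accuracy": "fail"}))  # fail
print(classify_model({"latency_ms": "pass", "accuracy": "pass"}))  # pass
```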
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cao et al., Pub. No. US9286571B2, teaches maintaining the performance level of a database being migrated between different cloud-based service providers by employing machine learning. Lee et al., Pub. No. US20200050968A1, teaches that a first data set corresponding to an evaluation run of a model is generated at a machine learning service for display via an interactive interface; the data set includes a prediction quality metric, and a target value of an interpretation threshold associated with the model is determined based on a detection of a particular client's interaction with the interface.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU, whose telephone number is (571)270-0902, or via email: matiyas.maru@uspto.gov. The examiner can normally be reached Monday - Friday (8:00am - 4:00pm) EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571)431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.T.M./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Dec 29, 2022
Application Filed
Sep 10, 2025
Non-Final Rejection — §101, §103
Nov 24, 2025
Applicant Interview (Telephonic)
Nov 24, 2025
Examiner Interview Summary
Dec 16, 2025
Response Filed
Mar 06, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586114
GENERATING DIGITAL RECOMMENDATIONS UTILIZING COLLABORATIVE FILTERING, REINFORCEMENT LEARNING, AND INCLUSIVE SETS OF NEGATIVE FEEDBACK
2y 5m to grant; granted Mar 24, 2026
Patent 12572796
METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM
2y 5m to grant; granted Mar 10, 2026
Patent 12567004
METHOD OF MACHINE LEARNING TRAINING FOR DATA AUGMENTATION
2y 5m to grant; granted Mar 03, 2026
Patent 12561588
Methods and Systems for Generating Example-Based Explanations of Link Prediction Models in Knowledge Graphs
2y 5m to grant; granted Feb 24, 2026
Patent 12561584
TEACHING DATA PREPARATION DEVICE, TEACHING DATA PREPARATION METHOD, AND PROGRAM
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
70%
With Interview (+12.5%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
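The headline projections follow from simple arithmetic on the examiner's career data; a minimal Python sketch of the presumed derivation is below. The additive percentage-point interview lift is inferred from the displayed figures, not a documented formula.

```python
# Presumed derivation of the dashboard figures; the additive
# interview lift is inferred from the displayed numbers.
granted, resolved = 23, 40
base_rate_pct = 100 * granted / resolved                 # 57.5 -> shown as 58%
interview_lift_pts = 12.5                                # the "+12.5%" interview lift
with_interview_pct = base_rate_pct + interview_lift_pts  # 70.0 -> shown as 70%
print(base_rate_pct, with_interview_pct)                 # 57.5 70.0
```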
