Prosecution Insights
Last updated: April 19, 2026
Application No. 17/976,058

METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO MANAGE TRAINING FOR MACHINE LEARNING MODELS

Final Rejection §103
Filed: Oct 28, 2022
Examiner: SHOEMAKER, ERIC JAMES
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Nielsen Consumer LLC
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 10 granted / 13 resolved; +14.9% vs TC avg)
Interview Lift: +30.0% (strong; resolved cases with an interview vs. without)
Typical Timeline: 3y 3m avg prosecution; 31 applications currently pending
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 13 resolved cases.
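These figures reduce to simple arithmetic over the examiner's 13 resolved cases. A quick check in Python (the Tech Center averages are back-calculated from the displayed deltas, not independently sourced):

# Back-of-the-envelope check of the examiner stats shown above.
# The TC averages are inferred from the displayed "vs TC avg" deltas;
# they are assumptions, not sourced figures.
granted, resolved = 10, 13
allow_rate = granted / resolved            # 0.769... -> displayed as 77%

overcome_rates = {"101": 9.5, "103": 54.2, "102": 20.0, "112": 16.3}
tc_deltas = {"101": -30.5, "103": 14.2, "102": -20.0, "112": -23.7}

print(f"career allow rate: {allow_rate:.1%}")
for statute, rate in overcome_rates.items():
    tc_avg = rate - tc_deltas[statute]     # implied Tech Center average
    print(f"section {statute}: {rate:.1f}% (implied TC avg {tc_avg:.1f}%)")

Note that every statute implies the same 40.0% Tech Center average, which suggests a single pooled baseline rather than per-statute averages.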

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on November 24, 2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Applicant's Amendments filed on July 10, 2025, have been entered and made of record.
Currently pending claims: 1-25
Independent claims: 1, 9, 17, and 24
Amended claims: 1-3, 5-20, and 22-25

Response to Arguments

This office action is responsive to Applicant's Arguments/Remarks Made in an Amendment received on December 2, 2025. In view of the amendments to the claims filed on December 2, 2025, the Applicant has amended claims 5-8 and 13-16 in response to the claim objections, and the objections to those claims have been overcome. The Applicant also amended claim 14 in response to the 35 U.S.C. § 112(b) rejection. Although the amendment to claim 14 addresses the rejection, the Examiner still recommends changing the dependency of claim 14. See the claim objections section below.

Regarding the previous rejections under 35 U.S.C. § 102 and § 103, the Applicant has amended the independent claims 1, 9, 17, and 24 to include the additional limitations of determining resource availability and blocking retraining if not enough resources are available. The amended independent claims also include the limitation of detecting a foreign input data type associated with a retraining request and blocking the model from being retrained based on the foreign input data.

In view of Applicant's Arguments/Remarks filed December 2, 2025, with respect to the claims, the Applicant first argued (Remarks, page 12) that the prior art of record fails to teach evaluation circuitry to determine a sufficiency of available resources, as recited in amended claim 1. The Examiner agrees that Susaiyah, taken alone, is not specific about analyzing available resources; however, analyzing available resources for performing computer program operations is well known in the art, especially within rental services for processing resources such as AWS, Microsoft Azure, and Google Cloud. Addepalli teaches an example where clients may rent processing resources to run AI models [Abstract]. These services require monitoring and allocating resources to ensure that sufficient resources are available for the AI models to be executed [0054-0058], and it would be obvious to one of ordinary skill in the art to include circuitry which ensures resources are available when running applications such as machine learning models. Similarly, regarding independent claims 9, 17, and 24, the Applicant argued (Remarks, pages 12-13) that the prior art of record fails to teach retraining the models when sufficient computer resources are available. As stated above, the Examiner believes that Susaiyah in view of Addepalli effectively teaches this limitation. Thus, the Examiner respectfully disagrees with the Applicant's arguments.

The limitations argued above were previously present in dependent claims 3-4, 11-12, and 19 and were rejected under 35 U.S.C. § 103 as being unpatentable over Susaiyah in view of Addepalli in the previous office action (Non-Final Rejection dated September 2, 2025).
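To make the disputed limitations concrete, the gating behavior recited in the amended independent claims (block retraining on a foreign input data type; hold or release the block based on resource sufficiency) can be sketched in a few lines of Python. This is an illustrative reading of the claim language only; the names, types, and threshold below are hypothetical and are not code from the application or the cited references.

# Illustrative sketch of the gating logic recited in amended claim 1.
# All identifiers and the threshold are hypothetical, for orientation only.
from dataclasses import dataclass

@dataclass
class RetrainRequest:
    model_id: str
    input_data_types: set[str]

EXPECTED_TYPES = {"text", "categorical"}   # assumed input schema for the model
RESOURCE_THRESHOLD = 0.25                  # assumed fraction of free compute required

def has_foreign_input(req: RetrainRequest) -> bool:
    # "foreign input data type": any type outside the model's expected schema
    return bool(req.input_data_types - EXPECTED_TYPES)

def handle_retrain(req: RetrainRequest, free_fraction: float) -> str:
    if has_foreign_input(req):
        return "blocked: foreign input data type"
    if free_fraction < RESOURCE_THRESHOLD:
        return "blocked: insufficient resources"   # block is maintained
    return "retraining permitted"

print(handle_retrain(RetrainRequest("m1", {"text", "audio"}), 0.8))  # foreign type
print(handle_retrain(RetrainRequest("m1", {"text"}), 0.1))           # low resources
print(handle_retrain(RetrainRequest("m1", {"text"}), 0.8))           # permitted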
The Examiner has updated the rejection of the independent claims 1, 9, 17, and 24 to include the art of Addepalli and maintains rejections similar to those from the previous office action.

Claim Objections

Claims 10-12 and 15-16 are objected to for containing a typo. In the preamble of each claim, the phrase "the one of more of the at least one programmable circuit…" should be corrected to "the one or more of the at least one programmable circuit…". Additionally, claim 14 is objected to for containing the limitation "key performance indicators." This limitation was not explained or addressed previously in claims 9, 11, or 12, from which claim 14 depends. The Examiner recommends changing the dependency of claim 14 so that it depends on claim 13, where "key performance indicators" are introduced. This change would improve the organization of the claims by keeping the dependencies consistent. For example, claim 21 depends on claim 20, and claim 6 depends on claim 5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 7-9, 11-12, 15-17, 19, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Susaiyah (EP 4174721 A1) in view of Addepalli et al. (US 2020/0265509 A1), hereafter Addepalli.

Regarding claim 1, Susaiyah teaches an apparatus ([Abstract] "A computer implemented method of managing a first model that was trained using a first machine learning process and is deployed and used to label medical data.") comprising: train detection circuitry to detect a request to retrain a model ([0082] "It will be appreciated that the method 200 may be performed in an iterative (e.g. continuous) manner by repeating steps i) and ii) (e.g. steps 202 and 204). For example, the method 200 may be performed periodically, e.g. at set time intervals, or responsive to a trigger such as upon receiving user feedback." The method of training a model and determining whether the performance metric is adequate can be initiated in response to a trigger, such as a training request. See step 204 in Fig. 2 and step 504 in Fig. 5, which show that a model will be retrained until its performance is satisfactory. Fig. 3 likewise shows rating the performance of a model to trigger retraining based on performance.) and to detect a foreign input data type associated with the request to retrain; and blocker circuitry to block retraining of the model based on a presence of the foreign input data type (See 0054-0063. Susaiyah teaches determining which training data is incompatible with the model and blocking the model from being trained on such data by filtering it out of the training pool. [0054] "In some embodiments, step 204 may comprise filtering the data. For example, to remove data that cannot be used as training data, e.g. due to incompleteness, high noise levels or incompatibility with the first model." [0055] "In some embodiments, the step of performing 204 further training on the first model to produce an updated first model comprises filtering the pool of unlabeled data samples to remove unlabeled data samples that do not have parameters corresponding to the input and/or output parameters of the first model… In other words, data may be filtered from the pool of unlabeled data samples if it does not fall into the same scope as the first model.").

Susaiyah fails to teach resource evaluation circuitry to determine a sufficiency of available resources, wherein the blocker circuitry is to maintain the block of the request to retrain when the available resources are insufficient and to permit retraining of the model when the available resources are sufficient. However, Addepalli teaches this limitation ([0054] "FIG. 2 shows an example functional diagram 200 of the AI-MTSB, according to some embodiments. The AI-MTSB's main role is to allocate appropriate AI solution model processor resources, to facilitate execution on behalf of tenants and their AI solution models. In addition, it may constantly monitor the AI solution model processor resources running models usage and performance for reporting, accounting and billing purposes." Addepalli teaches a system for monitoring and allocating resources for tenants to train AI models, which involves checking for available resources and only running models if resources are available.). Susaiyah and Addepalli are both analogous to the claimed invention because both teach methods of monitoring models and evaluating model performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model and only training models if resources are available. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054] "To rent processor resources to run AI solution models for tenants, it is more efficient when the allocations can be tailored to the tenants' needs, including how much processing is required, for how long, and during what time. This requires an entity that can compute how many resources are needed for training or inference of an AI solution model—noting that this amount can vary for any unit time throughout the course of the process—as well know what resources are already being used and when.").

Regarding claim 3, Susaiyah fails to teach resource evaluation circuitry to determine availability metrics corresponding to process circuitry. However, Addepalli teaches this limitation (Addepalli [0054], quoted above: the AI-MTSB allocates AI solution model processor resources and constantly monitors resource usage and performance). Susaiyah and Addepalli are both analogous to the claimed invention because both teach methods of monitoring models and evaluating model performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054], quoted above).

Regarding claim 4, Susaiyah fails to teach wherein the resource evaluation circuitry is to query a performance monitoring unit (PMU) corresponding to the process circuitry to identify a processing utilization metric. However, Addepalli teaches this limitation ([0054] "This requires an entity that can compute how many resources are needed for training or inference of an AI solution model—noting that this amount can vary for any unit time throughout the course of the process—as well know what resources are already being used and when. The AI-MTSB performs these types of special calculations to optimize the rental capabilities of the multi-tenancy systems."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by querying a performance monitoring unit to obtain and manage the resources used by the model. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054], quoted above).

Regarding claim 7, Susaiyah teaches wherein process circuitry is to train or re-train models ([0006] "there is a computer implemented method… wherein the upgrade process comprises performing further training on the first model to produce an updated first model, wherein the further training is performed using an active learning process wherein training data for the further training is selected from a pool of unlabeled data samples, according to the active learning process, and sent to a labeler to obtain ground truth labels for use in the further training.").

Regarding claim 8, Susaiyah teaches wherein the process circuitry includes at least one of a central processing unit (CPU), a graphical processing unit (GPU), or a field-programmable gate array (FPGA) (Fig. 1 shows an example of a system for carrying out the method; see the CPU labeled 102 in Fig. 1. [0012] "More generally, the system may form part of a computer system e.g. such as a laptop, desktop computer or other device. Alternatively, the system 100 may form part of the cloud/a distributed computing arrangement.").
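Claim 4's "query a performance monitoring unit (PMU) … to identify a processing utilization metric" has a routine software analogue. A minimal Python sketch follows, using the psutil package as a software stand-in for a hardware PMU query; the 75% ceiling is an assumed illustration, not a figure from the record.

# Minimal sketch of polling a utilization metric before permitting retraining.
# psutil stands in for the claimed PMU query; the ceiling is assumed.
import psutil

MAX_UTILIZATION = 75.0  # assumed ceiling, in percent

def resources_available() -> bool:
    cpu = psutil.cpu_percent(interval=1.0)   # CPU utilization sampled over one second
    mem = psutil.virtual_memory().percent    # current memory utilization
    return cpu < MAX_UTILIZATION and mem < MAX_UTILIZATION

if __name__ == "__main__":
    print("permit retraining" if resources_available() else "maintain block")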
Regarding claim 9, Susaiyah teaches an apparatus ([Abstract], quoted above in the discussion of claim 1) comprising: at least one memory; machine readable instructions; and at least one programmable circuit to at least one of instantiate or execute the machine readable instructions (Fig. 1; [0007] "a processor configured to communicate with the memory and to execute the set of instructions. The set of instructions, when executed by the processor, cause the processor to: i) determine a performance measure for the first model; and ii) if the performance measure indicates a performance below a threshold performance level,") to: obtain a request to retrain a model ([0082], quoted above; the retraining flow can be initiated in response to a trigger, such as a training request); detect a foreign input data type associated with the request to retrain; and override the request to retrain to prevent the model from being retrained based on the presence of a foreign data type (See 0054-0063; [0054]-[0055], quoted above in the discussion of claim 1. Susaiyah teaches determining which training data is incompatible with the model and blocking the model from being trained on such data by filtering it out of the training pool.).

Susaiyah fails to teach an apparatus instantiating machine readable instructions to determine an amount of computing resources, compare the amount of computing resources to a threshold amount of computing resources, sustain prevention of retraining of the model when the threshold amount of computing resources is not satisfied, and enable retraining of the model when the threshold amount of computing resources is satisfied. However, Addepalli teaches executing machine readable instructions to determine an amount of computing resources ([0054], quoted above: the AI-MTSB allocates AI solution model processor resources and constantly monitors resource usage and performance); compare the amount of computing resources to a threshold amount of computing resources; sustain prevention of retraining of the model when the threshold amount of computing resources is not satisfied; and enable retraining of the model when the threshold amount of computing resources is satisfied (See paragraphs 0054-0065. Addepalli teaches a system for determining available resources, determining the resources needed to run an AI model, and allocating the needed amount of resources to tenants at certain times to optimally rent resources and run multiple models.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model and only training models if resources are available. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054], quoted above in the discussion of claim 1).

Regarding claim 11, Susaiyah fails to teach wherein the one or more of the at least one programmable circuit is to verify availability metrics. However, Addepalli teaches this limitation ([0054], quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model, for the resource-allocation reasons given for claim 1 (Addepalli [0054], quoted above).

Regarding claim 12, Susaiyah fails to teach wherein the one or more of the at least one programmable circuit is to query a performance monitoring unit (PMU) corresponding to the processor circuitry to identify a processing utilization metric. However, Addepalli teaches this limitation ([0054], quoted above in the discussion of claim 4). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by querying a performance monitoring unit to obtain and manage the resources used by the model. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054], quoted above).

Regarding claim 15, Susaiyah teaches wherein the one or more of the at least one programmable circuit is to train or retrain models ([0006], quoted above in the discussion of claim 7).

Regarding claim 16, Susaiyah teaches wherein the processor circuitry includes at least one of a central processing unit (CPU), a graphical processing unit (GPU), or a field-programmable gate array (FPGA) (Fig. 1 and [0012], quoted above in the discussion of claim 8).

Regarding claim 17, Susaiyah teaches a non-transitory machine readable storage medium comprising instructions that, when executed, cause at least one programmable circuit ([Abstract], quoted above) to at least: retrieve a request to retrain a model ([0082], quoted above; the retraining flow can be initiated in response to a trigger, such as a training request); and react to the request to retrain and prevent the model from being retrained based on a presence of the foreign input data type (See 0054-0063; [0054]-[0055], quoted above. Susaiyah teaches determining which training data is incompatible with the model and blocking the model from being trained on such data by filtering it out of the training pool.).

Susaiyah fails to teach causing at least one programmable circuit to at least determine a sufficiency of available computing resources, maintain prevention of retraining of the model when the available computing resources are insufficient, and permit retraining of the model when the available computing resources are sufficient. However, Addepalli teaches causing at least one programmable circuit to at least determine a sufficiency of available computing resources ([0054], quoted above); maintain prevention of retraining of the model when the available computing resources are insufficient; and permit retraining of the model when the available computing resources are sufficient (See paragraphs 0054-0065. Addepalli teaches a system for determining available resources, determining the resources needed to run an AI model, and allocating the needed amount of resources to tenants at certain times to optimally rent resources and run multiple models.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model and only training models if resources are available, for the resource-allocation reasons given for claim 1 (Addepalli [0054], quoted above).

Regarding claim 19, Susaiyah fails to teach causing one or more of the at least one programmable circuit to verify availability metrics corresponding thereto. However, Addepalli teaches this limitation ([0054], quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model, for the resource-allocation reasons given for claim 1 (Addepalli [0054], quoted above).

Regarding claim 22, Susaiyah teaches wherein the instructions, when executed, cause one or more of the at least one programmable circuit to train or retrain the model ([0006], quoted above in the discussion of claim 7).

Regarding claim 23, Susaiyah teaches wherein the one or more of the at least one programmable circuit includes at least one of a central processing unit (CPU), a graphical processing unit (GPU), or a field-programmable gate array (FPGA) (Fig. 1 and [0012], quoted above in the discussion of claim 8).

Regarding claim 24, Susaiyah teaches a method of managing a model ([0011] "Embodiments herein relate to managing models trained using machine learning processes (otherwise known as machine learning models) after deployment."), the method comprising: obtaining, by executing instructions with at least one processor, a retraining request of a model ([0082], quoted above; the retraining flow can be initiated in response to a trigger, such as a training request); detecting, by executing instructions with the at least one processor, a foreign input data type associated with the retraining request; and preventing, by executing instructions with the at least one processor, the model from being retrained based on the foreign input data type (See 0054-0063; [0054]-[0055], quoted above. Susaiyah teaches determining which training data is incompatible with the model and blocking the model from being trained on such data by filtering it out of the training pool.).

Susaiyah fails to teach determining, by executing instructions with the at least one processor, an amount of computing resources; comparing, by executing instructions with the at least one processor, the amount of computing resources to a threshold amount of computing resources; sustaining prevention of retraining of the model when the threshold amount of computing resources is not satisfied; and enabling the model to be retrained when the threshold amount of computing resources is satisfied. However, Addepalli teaches determining, by executing instructions with the at least one processor, an amount of computing resources ([0054], quoted above); comparing the amount of computing resources to a threshold amount of computing resources; sustaining prevention of retraining of the model when the threshold amount of computing resources is not satisfied; and enabling the model to be retrained when the threshold amount of computing resources is satisfied (See paragraphs 0054-0065, as discussed for claim 9.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by monitoring the resources used by the model and only training models if resources are available. This modification allows resources to be properly allocated to different models running simultaneously (Addepalli [0054], quoted above in the discussion of claim 1).

Claims 2, 5, 10, 13, 18, 20, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Susaiyah (EP 4174721 A1) in view of Addepalli (US 2020/0265509 A1), and further in view of Devisschere ("TensorFlow Extended (TFX): the components and their functionalities," https://www.adaltas.com/en/2021/03/05/tfx-overview/).

Regarding claim 2, Susaiyah teaches performance metric circuitry to calculate at least one performance metric corresponding to the model (Fig. 2; [0037] "in a first step 202 the method comprises i) determining a performance measure for the first model." [0038] "the performance measure can be any measure of how the model is performing. In some examples, the performance measure can reflect the accuracy of the first model, a measure of user satisfaction of the first model or a combination of the accuracy of the first model and a measure of user satisfaction with the model."); and score generator circuitry ([0039]-[0040] "The performance measure may be obtained in any manner. For example, accuracy might be determined using a validation dataset comprising example inputs and ground truth annotations that were not used to train the first model (e.g. previously unseen training data). In other examples, a measure of user satisfaction may be obtained from users of the model. For example, via a feedback form 300 such as that illustrated in Fig. 3. User feedback may also be used to obtain correct e.g. ground truth labels for examples where there is low user satisfaction." [0094] "The user satisfaction scores SS r 410 and blind validation scores VSb 412 are sent to a Query Learning Switch 408 (otherwise known as an 'Active Learning Switch') which performs step 202 of the method 200 and determines (e.g. calculates) the performance measure CS as described above, from the user satisfaction scores SS r 410 and blind validation scores VSb 412.") to: improve model efficiency by comparing the at least one performance metric corresponding to current model execution to at least one threshold performance metric ([0048] "If the performance measure indicates a performance below a threshold performance level, then in step 204 the method 200 comprises ii) triggering an upgrade process." [0049] "The threshold performance level may be set as a system configuration parameter,"). Susaiyah teaches triggering models to be retrained if they do not satisfy a threshold performance metric; however, neither Susaiyah nor Addepalli specifically mentions setting flags for this purpose. More specifically, Susaiyah and Addepalli fail to teach setting a first flag corresponding to the model when the threshold performance metric is satisfied, the first flag indicative of satisfactory model performance, and setting a second flag corresponding to the model when the threshold performance metric is not satisfied, the second flag indicative of poor model performance.
However, Devisschere teaches setting a first flag corresponding to the model when the threshold performance metric is satisfied, the first flag indicative of satisfactory model performance, and setting a second flag corresponding to the model when the threshold performance metric is not satisfied, the second flag indicative of poor model performance ([Evaluator] "The Evaluator component analyses the model and helps us understand how the model performed… it compares to the fixed threshold of one or multiple metrics. If the new model satisfies the condition, it receives a tag 'blessed'. This is a signal to Pusher that it is ready to be pushed to a specified location."). Susaiyah and Devisschere are analogous art because both teach methods of evaluating model performance. Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to modify Susaiyah's invention by setting flags to indicate satisfactory or poor model performance. This modification implements well-known functionality of the TFX Evaluator component (see the code snippet in the Evaluator section of Devisschere), and the flags allow another component to push the model to production if its performance is satisfactory enough for deployment (Devisschere [Pusher] "The Pusher component verifies the blessing from the Evaluator component and optionally the InfraValidator component. It assesses the compatibility between the model and the model server binary. This prevents technically weak models to be pushed to production. If the results are satisfactory, the model is pushed to one or more deployment targets.").

Regarding claim 5, Susaiyah teaches wherein the performance metric(s) include key performance indicators ([0038], quoted above. Susaiyah teaches examples where the performance indicators are user satisfaction (0040), blind validation (0044), accuracy/loss (0043), etc.).

Regarding claim 10, Susaiyah teaches wherein the one or more of the at least one programmable circuit is to: calculate at least one performance metric corresponding to the model (Fig. 2; [0037]-[0038], quoted above in the discussion of claim 2); and compare the at least one performance metric corresponding to current model execution to at least one threshold performance metric ([0048]-[0049], quoted above). Susaiyah teaches triggering models to be retrained if they do not satisfy a threshold performance metric; however, neither Susaiyah nor Addepalli specifically mentions setting flags for this purpose. More specifically, Susaiyah and Addepalli fail to teach causing a first flag corresponding to the model to be established when the threshold performance metric is satisfied, the first flag indicative of satisfactory model performance, and causing a second flag corresponding to the model to be established when the threshold performance metric is not satisfied, the second flag indicative of poor model performance. However, Devisschere teaches this limitation ([Evaluator], quoted above). Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to modify Susaiyah's invention by setting flags to indicate satisfactory or poor model performance, for the same reasons given for claim 2 (Devisschere [Pusher], quoted above).

Regarding claim 13, Susaiyah teaches wherein the performance metric(s) include key performance indicators ([0038], quoted above; see also user satisfaction (0040), blind validation (0044), and accuracy/loss (0043)).

Regarding claim 18, Susaiyah teaches calculating at least one performance metric corresponding to the model (Fig. 2; [0037]-[0038], quoted above) and comparing the at least one performance metric corresponding to current model execution to at least one threshold performance metric ([0048]-[0049], quoted above). Susaiyah teaches triggering models to be retrained if they do not satisfy a threshold performance metric; however, neither Susaiyah nor Addepalli specifically mentions setting flags for this purpose. More specifically, Susaiyah and Addepalli fail to teach producing a first flag corresponding to the model when the threshold performance metric is satisfied, the first flag indicative of satisfactory model performance, and producing a second flag corresponding to the model when the threshold performance metric is not satisfied, the second flag indicative of poor model performance. However, Devisschere teaches this limitation ([Evaluator], quoted above). Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to modify Susaiyah's invention by setting flags to indicate satisfactory or poor model performance, for the same reasons given for claim 2 (Devisschere [Pusher], quoted above).

Regarding claim 20, Susaiyah teaches wherein the performance metric(s) include key performance indicators ([0038], quoted above; see also user satisfaction (0040), blind validation (0044), and accuracy/loss (0043)).

Regarding claim 25, Susaiyah teaches calculating at least one performance metric corresponding to the model (Fig. 2; [0037]-[0038], quoted above) and comparing the at least one performance metric corresponding to current model execution to at least one threshold performance metric ([0048]-[0049], quoted above). Susaiyah fails to teach setting a first flag corresponding to the model when the threshold performance metric is satisfied, the first flag indicative of satisfactory model performance, and setting a second flag corresponding to the model when the threshold performance metric is not satisfied, the second flag indicative of poor model performance. However, Devisschere teaches this limitation ([Evaluator], quoted above). Therefore, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to modify Susaiyah's invention by setting flags to indicate satisfactory or poor model performance, for the same reasons given for claim 2 (Devisschere [Pusher], quoted above).

Claims 6, 14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Susaiyah (EP 4174721 A1) in view of Addepalli (US 2020/0265509 A1) and Devisschere ("TensorFlow Extended (TFX): the components and their functionalities," https://www.adaltas.com/en/2021/03/05/tfx-overview/), and further in view of Sokolova et al. ("A systematic analysis of performance measures for classification tasks," Information Processing & Management 45(4), pp. 427-437), hereafter Sokolova.
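Before the claim-by-claim analysis: the flag-setting pattern mapped to Devisschere above (claims 2, 10, 18, and 25) and the KPIs at issue in claims 6, 14, and 21 below reduce to a few lines of confusion-matrix arithmetic. A minimal Python sketch; the 0.80 F1 threshold is an assumed illustration, not a figure from the record or the references.

# Illustrative sketch: Sokolova-style KPIs (precision/recall/F1) feeding a
# blessed/not-blessed flag in the style of the Devisschere Evaluator pattern.
# The threshold is assumed for illustration.

def kpis(tp: int, fp: int, fn: int) -> dict[str, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

def flag_model(metrics: dict[str, float], threshold: float = 0.80) -> str:
    # First flag: satisfactory performance ("blessed"); second flag: poor performance.
    return "blessed" if metrics["f1"] >= threshold else "not_blessed"

m = kpis(tp=90, fp=10, fn=15)
print(m, flag_model(m))  # precision 0.90, recall ~0.857, F1 ~0.878 -> blessed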
Regarding claim 6, Susaiyah teaches that the performance indicator can be any measure of how the model is performing, such as accuracy or user satisfaction, but Susaiyah does not specifically teach wherein the key performance indicators include at least one of precision, recall, and/or an F1-score. However, Sokolova teaches the apparatus as defined in claim 5, wherein the key performance indicators include at least one of precision, recall, and/or an F1-score ([Section 2, Par. 6] "The evaluation metrics commonly used in Text Classification (Precision, Recall, Fscore) have their origin in IE." Additionally, Tables 2 and 3 show measures for evaluating single-class and multi-class machine learning models, respectively; these tables show precision, recall and Fscore.). Susaiyah and Sokolova are analogous art because both teach methods of evaluating model performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by utilizing precision, recall, or Fscore as a key performance indicator. This modification allows the evaluation of models to be applied to text classification models, since precision, recall, and Fscore are common methods for evaluating text classification models (Sokolova [Section 2, Par. 6] "The evaluation metrics commonly used in Text Classification (Precision, Recall, Fscore) have their origin in IE. The formulas for these measures neglect the correct classification of negative examples, they instead reflect the importance of retrieval of positive examples in text/document classification:").

Regarding claim 14, Susaiyah fails to teach wherein the key performance indicators include at least one of precision, recall, and/or an F1-score. However, Sokolova teaches this limitation ([Section 2, Par. 6] and Tables 2 and 3, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by utilizing precision, recall, or Fscore as a key performance indicator, for the same reasons given for claim 6.

Regarding claim 21, Susaiyah fails to teach wherein the key performance indicators include at least one of precision, recall, and/or an F1-score. However, Sokolova teaches the non-transitory machine readable storage medium as defined in claim 20, wherein the key performance indicators include at least one of precision, recall, and/or an F1-score ([Section 2, Par. 6] and Tables 2 and 3, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Susaiyah's invention by utilizing precision, recall, or Fscore as a key performance indicator, for the same reasons given for claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jin-Cheng et al. (CN 110956278 A) teaches a system for automatically retraining machine learning models; the system determines whether a model needs to be retrained based on a performance metric, such as precision, recall, or F1, and automatically performs retraining until a performance threshold is achieved. Brunn et al. (US 12033094 B2) teaches a system for retraining machine learning models which receive text information as input and output a task message about a task to be performed; based on feedback and performance about the output task, the machine learning model(s) may be retrained.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC JAMES SHOEMAKER, whose telephone number is (571) 272-6605. The examiner can normally be reached Monday through Friday from 8am to 5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Eric Shoemaker/
Patent Examiner

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Oct 28, 2022
Application Filed
Aug 27, 2025
Non-Final Rejection — §103
Dec 02, 2025
Response Filed
Feb 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597157
ELECTRONIC DEVICE FOR CORRECTING POSITION OF EXTERNAL DEVICE AND OPERATION METHOD THEREOF
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12569329
MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+30.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
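One plausible reconstruction of the with-interview figure, assuming the +30% lift applies multiplicatively to the base rate and the result is capped at 99%; this methodology is an assumption, not stated by the source:

# Hypothetical reconstruction of the projection arithmetic; the multiplicative
# lift and the 99% cap are assumptions, not documented methodology.
base = 10 / 13                              # career allow rate, ~77%
lift = 0.30                                 # reported interview lift
with_interview = min(base * (1 + lift), 0.99)
print(f"base {base:.0%}, with interview {with_interview:.0%}")  # base 77%, with interview 99%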
