Prosecution Insights
Last updated: April 18, 2026

Application No. 17/846,238
PREDICTIVE MAINTENANCE FOR TERMINALS
Status: Non-Final Office Action (§103), OA Round 3

Filed: Jun 22, 2022
Examiner: KHAN, SHAHID K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: NCR Voyix Corporation

Outlook: Favorable
Grant probability: 74% (90% with interview)
Projected OA rounds: 3-4
Projected time to grant: 2y 11m

Examiner Intelligence

Career allow rate: 74% (above average; 287 granted / 389 resolved; +18.8% vs. Tech Center average)
Interview lift: +15.7% allowance rate among resolved cases with an interview
Typical timeline: 2y 11m average prosecution; 31 applications currently pending
Career history: 420 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 389 resolved cases.

Office Action

§103
DETAILED ACTION

This communication is in response to the after-final amendment filed 02/03/26, in which claims 1, 13, and 19 were amended. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/26 has been entered.

Response to Arguments

Applicant’s arguments with respect to claims 1, 13, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. To expedite prosecution, it may be beneficial to further clarify “maintenance intervention data” and “support data.”

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 2, and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh (US 2022/0120727 A1; published Apr. 21, 2022) in view of Wegerich (US 2007/0005311 A1; published Jan. 4, 2007), Wang (US 2020/0201950 A1; published Jun. 25, 2020), Bi (US 2019/0004891 A1; published Jan. 3, 2019), Gill (US 5,984,178; patented Nov. 16, 1999), and Cheong, Michelle LF, Ping Shung Koo, and B. Chandra Babu, "Ad-hoc automated teller machine failure forecast and field service optimization," 2015 IEEE International Conference on Automation Science and Engineering (CASE), IEEE, 2015 (“Cheong”).
Regarding claim 1, Al-Dabbagh discloses [a] method, comprising: training, using training terminal data, a plurality of types of machine learning models (MLMs) to provide predictions of failures and non-failures of terminals; (¶ 1 (“Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions.”), ¶ 24 (“During the first phase, a first, binary ML model (or binary ML classifier) is built (or trained) to classify input laboratory analysis results (also “lab analysis data,” “lab results,” or “lab records”) of a plurality of oil samples as good or defective. In this phase, there are two classes available for the classification of the lab results by the first, binary ML model: good and defective (or bad). For example, good may be assigned the binary value of 0, while bad may be assigned a binary value of 1, or vice versa.”), ¶ 27 (“During the second phase, a second, multiclass ML model (or a multiclass classifier) is trained to further classify the lab results of the oil samples classified as defective during the first phase. In the second phase, the defective oil samples are classified according to defect type. For each defective oil sample, the dataset includes the all or a subset of the plurality of features mentioned above with respect to the first model. In the second phase, there are four classes available for the classification of the defective oil samples by the second ML model: contamination, oil mixing, dissolved gasses, and degradation.”), ¶ 29 (“During the third phase, a third, multiclass ML model (or a multiclass classifier) is trained to predict corrective actions for each defect type. In the third phase, the input data includes the defective oil samples of each type of defect. In some instances, a model is trained for each defect type. Each defect type may have a different number or set of corrective actions. The number of corrective actions may vary from one defect type to another. For example, one defect type may have 5 corrective actions, while another may have 7 corrective actions.”)).

Al-Dabbagh does not expressly disclose testing each trained MLM using testing terminal data; (but see Wegerich ¶ 32 (“Turning to FIG. 4, a method for automatically selecting a model for deployment from a set of generated candidate models is shown. In step 410, the reference data is filtered and cleaned. In step 415, a model is generated from the data. Models can vary based on tuning parameters, the type of model technology, which variables are selected to be grouped into a model, or the data snapshots used to train the model, or a combination.”)) calculating a respective score for each trained MLM based on the testing; (but see Wegerich ¶ 32 (“In step 420, the model metrics described herein are computed for the model.”)) selecting a particular trained MLM having a highest score among the scores calculated for the plurality of types of MLMs; and (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)) deploying the particular trained MLM to a production environment, wherein the deployed MLM is configured to provide current failure predictions for the terminals during a current period of time using current terminal data (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich to deploy a model after ranking their performance accuracy, at least because doing so would enable comparing alternative models without significant human intervention. See Wegerich ¶ 5.

Al-Dabbagh teaches predicting equipment defects using machine learning models but does not expressly describe predicting failure of terminals (but see Bi ¶ 14 (“FIG. 1 illustrates a system 100 for improving hardware and software diagnostic technology associated with failure predictions, in accordance with embodiments of the present invention. System 100 is enabled to execute a machine learning framework to predict and classify hardware or software failures based on retrieved sensor data, usage data, prior failure data, and specified machine configurations for providing predictive maintenance solutions for hardware devices (e.g., an automated teller machine (ATM)).”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Bi to predict ATM failure using machine learning models and retrieved sensor data, at least because doing so would improve hardware device failure predictions. See Bi ¶ 1.
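For orientation, the train / score / select / deploy pipeline that claim 1 recites, and onto which the Wegerich citations are mapped, can be sketched as below. This is a minimal illustration only: the two toy "model types," the accuracy metric, and the data shapes are assumptions for exposition, not anything disclosed in the references or the application.

```python
# Sketch of the claimed flow: train several MLM types on training terminal
# data, score each on testing terminal data, and deploy the top scorer.
# Toy stand-in "models"; illustrative only.

def train_threshold_model(train_rows):
    """Predict failure when error_count meets the mean error_count
    observed over failing training rows (a stand-in for a real MLM)."""
    fails = [r["error_count"] for r in train_rows if r["failed"]]
    cutoff = sum(fails) / len(fails) if fails else float("inf")
    return lambda row: row["error_count"] >= cutoff

def train_majority_model(train_rows):
    """Baseline that always predicts the majority class."""
    majority = 2 * sum(r["failed"] for r in train_rows) > len(train_rows)
    return lambda row: majority

def accuracy(model, test_rows):
    """Fraction of held-out rows the trained model classifies correctly."""
    return sum(model(r) == r["failed"] for r in test_rows) / len(test_rows)

def select_and_deploy(train_rows, test_rows, trainers):
    """Train each MLM type, score it on the test split, and return the
    name and model with the highest score, plus all scores."""
    trained = {name: fit(train_rows) for name, fit in trainers.items()}
    scores = {name: accuracy(m, test_rows) for name, m in trained.items()}
    best = max(scores, key=scores.get)
    return best, trained[best], scores
```

The point of the sketch is the selection step: every candidate is scored on the same held-out data, and only the top-ranked model is placed in production, matching the "selecting ... having a highest score" limitation.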
Al-Dabbagh ¶ 6 teaches “The first multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm if the laboratory analysis results are classified as “defective.” The first multiclass classification model is trained to output a predicted defect type [a projected type associated with the corresponding terminal failure] for the defect in equipment. The system includes a second multiclass classification model. The second multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm and the predicted defect type. The second multiclass classification model is trained to output a predicted corrective action [projected remedial action to perform on the corresponding terminal to resolve the corresponding terminal failure before the corresponding terminal experiences an actual failure] pertaining to the equipment based on the predicted defect type for the defect in equipment.”

Yet, Al-Dabbagh does not expressly disclose wherein the deployed MLM is further configured to provide each current failure prediction as a data structure comprising a terminal identifier for a corresponding terminal, a projected date for a corresponding terminal failure, a projected type associated with the corresponding terminal failure, and a projected remedial action to perform on the corresponding terminal to resolve the corresponding terminal failure before the corresponding terminal experiences an actual failure (but see Wang ¶ 194 (“In some embodiments, the report and alert generation module 518 may generate a report [data structure] indicating any number of potential failures, the probability of such failure, and the justification or reasoning based on the model and the fit of previously identified states associated with future failure of components. The report may be a maintenance plan or schedule to correct the predicted fault (e.g., preferably before failure and a minimum of power disruption).”); ¶ 198 (“FIG. 21 depicts a prospective component failure forecasting risk score and action urgency depiction in some embodiments. The prospective component failure forecasting risk score and action urgency depiction may include the predictions of failure for any number of components. For those components where predicted risk is above a trigger threshold, information may be highlighted or otherwise emphasized.”); ¶ 199 (“In FIG. 21, the prospective component failure forecasting risk score and action urgency depiction includes an asset identifier [terminal identifier], component name, update time (e.g., time of the prediction), risk score of failure, forecast lead time [projected date], and indicator (e.g., a classification indicating a degree of danger of fault or performance health). In this example, the generator of asset identifier 303056 has an 83% risk of failure. The generator 303060 has a 60% risk of failure. Assuming that the risk of failure is greater than a trigger threshold for generators, the prospective component failure forecasting risk score and action urgency depiction may highlight or otherwise emphasize information regarding the two generators that are at risk. Further, the failure risk score may provide information for a scheduled plan and prioritization.”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich and Wang to generate a report indicating an asset identifier and a forecast lead time indicating a degree of danger of fault or performance health of a piece of equipment, at least because doing so would increase lead time before failure and improve accuracy. See Wang ¶ 1.
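In data-structure terms, the four-field prediction record recited in the claim (terminal identifier, projected failure date, projected failure type, projected remedial action) is simply a typed record. A minimal sketch follows; the class name, field names, and example values are illustrative assumptions, not drawn from the application or the cited references.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class FailurePrediction:
    """One current failure prediction per the claim language: a terminal
    identifier, a projected failure date, a projected failure type, and
    a projected remedial action to perform before the failure actually
    occurs. All names here are hypothetical."""
    terminal_id: str
    projected_date: date
    projected_type: str
    remedial_action: str

# A deployed MLM would emit one such record per predicted failure, e.g.:
prediction = FailurePrediction(
    terminal_id="ATM-0042",           # hypothetical terminal identifier
    projected_date=date(2026, 5, 1),  # hypothetical projected date
    projected_type="card-reader fault",
    remedial_action="replace card-reader assembly before the projected date",
)
```

Framing the output this way makes the claim's "data structure comprising" language concrete: each prediction is one immutable record, not a loose collection of values.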
Al-Dabbagh does not expressly disclose wherein the terminals comprise transaction terminals selected from self-service terminals (SSTs) or point-of-sale (POS) terminals, wherein transaction managers execute on the terminals to generate event data during transaction processing for transactions, (but see Gill 1:15-48 (“Automated banking machines have been developed which perform functions such as dispensing cash, receiving deposits, checking the status of accounts and other functions. Automated banking machines used by consumers are referred to as automated teller machines or "ATMs". There are several manufacturers of automated teller machines. Many types of automated banking machines include internal systems [e.g., transaction managers] which monitor their operation. These internal systems often operate to check the available quantities of items which are required for proper operation of the machine. This may include the amount of cash available in the machine for dispensing to customers or an operator. Other systems may monitor the availability of supplies such as blank receipt forms or deposit envelopes. Such systems operate to provide a signal when the quantities of such items reach levels indicative of a need for replenishment. It is also common to provide further signals when such items are depleted. The signals [e.g., event data] generated by the machine are indicative of the condition which has occurred. Automated banking machines often include systems for providing signals [e.g., event data] indicative of malfunctions or the existence of other conditions which impede the operation of the machine. For example, machines which accept deposits may reach a condition where the depository is filled and cannot accept further deposits. When this occurs the machine loses all or a portion of its functional capabilities. Other malfunctions may include failures of currency dispensing mechanisms, customer card readers, receipt printers, journal printers or other components of the machine. In each case, upon sensing a failure condition, the machine is operative to generate signals indicative of the condition.”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Gill to generate signals indicative of malfunctions or the existence of other conditions which impede the operation of an Automated Teller Machine, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable “monitoring fault conditions at automated banking machines and for automatically notifying a servicer or other entity of fault conditions requiring attention.” Gill 1:9-12.

Al-Dabbagh does not expressly disclose and wherein the training terminal data comprises the event data generated by the terminals, maintenance intervention data from a maintenance system, and support data from a support system (but see Cheong Abstract (“As part of its overall effort to maintain good customer service while managing operational efficiency and reducing cost, a bank in Singapore has embarked on using data and decision analytics methodologies to perform better ad-hoc ATM failure forecasting and plan the field service engineers to repair the machines. We propose using a combined Data and Decision Analytics Framework which helps the analyst to first understand the business problem by collecting, preparing, and exploring data to gain business insights, before proposing what objectives and solutions can and should be done to solve the problem. This paper reports the work in analyzing past daily ad-hoc ATM failures, forecasting ad-hoc ATM failures and then using the forecasted results to optimize the number of field service engineers to deploy in each geographical zone, to minimize the number of daily unattended ad-hoc ATM failures.”), Section IV (“6 months of daily ad-hoc ATM failure data [e.g., maintenance intervention data and/or support data], from October 2013 to March 2014, denoted as OCT_2013 to MAR_2014, were collected and the fields are: ATM ID; Date and Time of Failure; Ticket ID [e.g., support data]; Dispatch ID; Problem Category [e.g., support data]. The ATM Location data file contains the following fields: ATM ID; Location - Latitude; Location - Longitude; ATM Zone; Location Type. Using the ATM ID as the key, the tables were merged into a single ATM_Failure_Master_Table as shown in Fig. 2, which contains a total of 73,753 records. The fields include: ATM ID; Location - Latitude; Location - Longitude; ATM Zone; Location Type; Date and Time of Failure; Ticket ID; Dispatch ID; Problem Category.”), Section VI (“We used three forecasting methods to forecast the number of ad-hoc failures for the month of March 2014, using 5 months of data from October 2013 to February 2014. The three methods used are Stepwise Autoregressive, Exponential Smoothing and Holt-Winters Additive model. These 3 methods are selected because they are easy to implement and understand, and do not require excessive amounts of past data. Moving average model is not used as it cannot cater to trend component in time series, and Holt-Winters Multiplicative model is not used as there is no multiplicative seasonality effect observed. ARIMA forecasting technique is not used as it requires excessive amounts of data, which is not available.”)).
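Of the three forecasting methods Cheong names, simple exponential smoothing is the easiest to show concretely. Below is a minimal sketch over a hypothetical series of daily failure counts; the smoothing factor and the data are assumptions for illustration, not values from the paper.

```python
def exponential_smoothing_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the smoothed level is a weighted
    blend of each new observation and the previous level, and the
    one-step-ahead forecast is the final level. alpha=0.3 is an
    illustrative choice, not a value taken from Cheong."""
    level = series[0]
    for observation in series[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level

daily_failures = [12, 15, 11, 14, 13, 16, 12]  # hypothetical daily counts
next_day_forecast = exponential_smoothing_forecast(daily_failures)
```

Cheong notes the appeal of such methods is exactly what the sketch shows: they are easy to implement and need little historical data, unlike ARIMA-class models.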
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Al-Dabbagh to incorporate the teachings of Cheong to incorporate ATM failure data collected by engineers during troubleshooting events, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable better ad-hoc ATM failure forecasting and ATM field service optimization. Cheong Section 1.

Regarding claim 2, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 1 as discussed above. Al-Dabbagh further discloses providing current terminal data as input to the deployed MLM during the current period of time; and (¶ 32 (“The diagnostic and correction system 104 accesses oil analysis data 102 from a data repository, and uses the oil analysis data 102 as input for one or more machine learning models that identify and classify equipment defects and identify corresponding corrective actions based on the identified equipment defects.”)) reporting each current failure prediction generated as output by the deployed MLM to a retailer interface, a retailer system, or a retailer service associated with the terminals (¶ 36 (“The diagnostic and correction system 104 is configured to generate a recommendation 106 or perform the corrective action with respect to the equipment associated with the oil sample. The diagnostic and correction system 104 may include a computer system that is similar to the computer systems 700 and 714 described with regard to FIGS. 7A and 7B, respectively, and the accompanying descriptions.”)).

Regarding claim 7, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 1 as discussed above. Al-Dabbagh further discloses wherein training further includes sampling and resampling raw terminal data over a historical period of time to obtain a balanced data set between the failures and the non-failures (¶ 26 (“In some example embodiments, the binary classifier using the XGBoost algorithm is trained on a first percentage (e.g., 80%) of the data [historical training data], after being oversampled using Synthetic Minority Oversampling Technique (SMOTE) using a support vector machine (SVM) algorithm to detect the sample used for generating new synthetic samples in order to balance the two classes with respect to the data distribution. A second percentage (e.g., 20%) of the oversampled data is used to test the prediction performance of the binary classifier.”)).

Regarding claim 8, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 7 as discussed above. Al-Dabbagh further discloses portioning the balanced data set into the training terminal data and testing terminal data (¶ 26 (“In some example embodiments, the binary classifier using the XGBoost algorithm is trained on a first percentage (e.g., 80%) of the data, after being oversampled using Synthetic Minority Oversampling Technique (SMOTE) using a support vector machine (SVM) algorithm to detect the sample used for generating new synthetic samples in order to balance the two classes with respect to the data distribution. A second percentage (e.g., 20%) of the oversampled data is used to test the prediction performance of the binary classifier.”)).

Regarding claim 9, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 8 as discussed above. Al-Dabbagh further discloses labeling features identified within the training terminal data and the testing terminal data; and labeling the failures and the non-failures within the training terminal data (¶ 25 (“In some example embodiments, when an analyst tests an oil sample, the oil sample is marked as a good sample or a defective sample. A dataset may be generated from the records of a plurality (e.g., tens of thousands) of oil samples for the categorization into the two classes during the first phase of prediction. For each oil sample, the dataset includes a plurality of features. The features correspond to attributes (e.g., parameters or characteristics) identified during the testing of the oil samples. Examples of features are “Aluminum,” “Antimony,” “Appear,” “Barium,” “Boron,” “Base Sediment & Water,” “Cadmium,” “Calcium,” “Chromium,” “Color,” “Copper,” “FERL,” “FERS,” “Filter,” “Flash Point,” “Foam,” “Fuel Dilution,” “Iron,” “Lead,” “Magnesium,” “Moisture,” “Molybdenum,” “Nickel,” “pH,” “Phosphorous,” “Rotating Pressure Vessel Oxidation,” “Silicon,” “Silver,” “Sodium,” “Solids,” “Total Acid Number,” “Total Base Number,” “Tin,” “Titanium,” “Viscosity at 100 degrees Celsius (C),” “Viscosity at 40 degrees C.,” “Water,” “Zinc,” “Oil Type,” “Oil Sump Capacity,” and “Equipment Type.” FERS indicates the direct ferrography test result for smaller particles. FERL indicates the direct ferrography test result for large particles. The features are used as predictor values (also “predictors”), while the target value (also “target”) is one of the binary values of “0” for good or “1” for defective. The predictor variables are used to predict (e.g., determine) the target variable. In some instances, fewer features may be selected to improve the model.”)).

Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh, Wegerich, Wang, Bi, Gill, and Cheong as applied to claim 2 above, and further in view of Lin (CA 2,834,959 A1; published Nov. 8, 2012).
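The balance-then-partition steps that claims 7 and 8 are mapped onto (oversample the minority failure class, then split the balanced set into training and testing portions) can be sketched as below. The interpolation is a deliberately simplified one-dimensional version of the SMOTE idea quoted from Al-Dabbagh ¶ 26; the helper names and data are illustrative assumptions, and the 80/20 split follows the percentages given in that paragraph.

```python
import random

def oversample_minority(minority, majority, rng):
    """Grow the minority (failure) class to the majority's size by
    interpolating between randomly chosen pairs of minority rows.
    This is a 1-D simplification of SMOTE, for illustration only."""
    balanced = list(minority)
    while len(balanced) < len(majority):
        a, b = rng.sample(minority, 2)
        balanced.append(a + rng.random() * (b - a))
    return balanced

def split_train_test(rows, rng, train_fraction=0.8):
    """Partition the balanced data set into training and testing
    portions (80/20, per the split described in Al-Dabbagh ¶ 26)."""
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(train_fraction * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

rng = random.Random(0)
failure_rows = [9.1, 8.7, 9.5]                      # hypothetical minority class
non_failure_rows = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3]   # hypothetical majority class
balanced_failures = oversample_minority(failure_rows, non_failure_rows, rng)
train, test = split_train_test(balanced_failures + non_failure_rows, rng)
```

Real SMOTE interpolates toward k-nearest minority neighbors in feature space; the random-pair interpolation above keeps only the core idea that synthetic samples lie between existing minority samples.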
Regarding claim 3, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 2 as discussed above. Al-Dabbagh does not expressly disclose providing the current terminal data to one or more of the plurality of types of MLMs that are not currently deployed within the production environment during the current period of time; and (but see Lin ¶ 110 (“The first condition that can trigger an update of updateable trained predictive models can be selected to accommodate various considerations. Some example first conditions were already described above in reference to FIG. 5. That is, receiving new training data in and of itself can satisfy the first condition and trigger the update.”)) recording each candidate terminal failure prediction received as output from a corresponding one of the plurality of types of MLMs that are not currently deployed within the production environment during the current period of time (but see Lin ¶ 111 (“Before the updateable trained predictive models that are stored in the repository 215 are "updated" with the training data stored in the training data queue 213, each trained predictive model in the repository 215 can be rescored for accuracy. That is, new accuracy scores of the trained models in the repository are determined based on the received training data sets stored in the training data queue 213 (Box 608).”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451.

Regarding claim 4, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, Cheong, and Lin, discloses the invention of claim 3 as discussed above. Al-Dabbagh further discloses determining actual terminal failures from the current terminal data at an end of the current period of time; (¶ 34 (“The lab records of the oil samples in the training dataset and associated labels (e.g., “good” or “defective”) assigned by experts based on the quality of the oil samples are utilized as input for training the first ML model.”)). Al-Dabbagh does not expressly disclose calculating current scores for the deployed MLM and each of the plurality of types of MLMs not currently deployed within the production environment over the current period of time based on the actual terminal failures, the current failure predictions, and corresponding candidate terminal failure predictions; and (but see Lin ¶ 111 (“Before the updateable trained predictive models that are stored in the repository 215 are "updated" with the training data stored in the training data queue 213, each trained predictive model in the repository 215 can be rescored for accuracy. That is, new accuracy scores of the trained models in the repository are determined based on the received training data sets stored in the training data queue 213 (Box 608). The new accuracy scores are determined using test data. The test data can include the data in the training data queue 213 in addition to previously received training data that is stored in the training data repository 214. The techniques described above in reference to FIG. 5 to determine what to include in the test data and how to calculate the new accuracy scores can be employed here to determine the new accuracy scores.”); see ¶ 110 (describing time intervals for updates to training data)) selecting a next MLM for deployment in the production environment for a next period of time based on the current scores (but see Lin ¶ 113 (“A trained predictive model is selected from the multiple trained predictive models based on their respective new accuracy scores. That is, the new accuracy scores of the trained predictive models stored in the repository 215 can be compared and the most accurate model, i.e., a first trained predictive model, selected.”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451.

Regarding claim 5, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, Cheong, and Lin, discloses the invention of claim 4 as discussed above. Al-Dabbagh does not expressly disclose training the next MLM for the next period of time using most-recent terminal data maintained over a most recent training period of time; and (but see Lin ¶ 112 (“The updateable trained predictive models that are stored in the repository 215 are "updated" with the training data stored in the training data queue 213. That is, retrained predictive models are generated (Box 610) using: the training data queue 213; the updateable trained predictive models obtained from the repository 215; and the corresponding training functions that were initially used to train the updateable trained predictive models, which training functions are obtained from the training function repository 216.”)) deploying the next MLM to the production environment as the deployed MLM to provide failure predictions for the terminals during the next period of time using next period terminal data for the next period of time (but see Lin ¶ 113 (“Access is provided to the first trained predictive model to the client computing system 202 (Box 612).”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451.

Regarding claim 6, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, Cheong, and Lin, discloses the invention of claim 5 as discussed above. Al-Dabbagh does not expressly disclose continuously iterating the providing of current terminal data as input to the deployed MLM during the current period of time, wherein the next period of time transitions to become the current period of time and the next period terminal data transitions to become the current terminal data (but see Lin ¶ 110 (“The first condition that can trigger an update of updateable trained predictive models can be selected to accommodate various considerations. Some example first conditions were already described above in reference to FIG. 5.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh, Wegerich, Wang, Bi, Gill, and Cheong as applied to claim 1 above, and further in view of C. Rudin et al., "Machine Learning for the New York City Power Grid," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 2, pp. 328-345, Feb. 2012 (“Rudin”).
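The Lin-based challenger loop read onto claims 3-6 (run non-deployed candidate MLMs alongside the deployed one, rescore everything against the period's actual outcomes, then pick the deployment for the next period) reduces to a small selection step. A sketch under assumed model and scoring shapes; nothing here is taken from Lin or the application.

```python
def fraction_correct(model, rows):
    """Score: fraction of (input, actual-outcome) pairs predicted correctly."""
    return sum(model(x) == actual for x, actual in rows) / len(rows)

def rescore_and_select(deployed_name, models, period_rows, score):
    """At the end of a period, score the deployed model and every
    non-deployed candidate against the period's actual outcomes, and
    return the model to deploy for the next period. Ties favor the
    currently deployed model. Callable shapes are illustrative."""
    scores = {name: score(m, period_rows) for name, m in models.items()}
    next_name = max(scores, key=lambda n: (scores[n], n == deployed_name))
    return next_name, scores
```

Keeping the current model on ties avoids churning deployments when a challenger merely matches, rather than beats, the incumbent; that tie-break is a design choice of this sketch, not something the references require.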
Regarding claim 10, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 1 as discussed above. Al-Dabbagh does not expressly disclose wherein training further includes training the MLMs to provide each prediction associated with any terminal failure as a data structure comprising, a terminal identifier for the corresponding terminal, a projected date for the corresponding terminal failure, (but see Rudin 7.1 Contingency Analysis Program (“CAP is a tool designed by Con Edison and used at their main control centers. It brings together information relevant to the outage of a primary feeder cable. When a contingency occurs, Con Edison already has applications in use (integrated into the CAP tool) that preemptively model the network for the possibility of additional feeders failing. These applications determine the failures that could have the worst consequences for the system. Columbia’s key contribution to the CAP tool is a feeder susceptibility indicator (described in Section 5.1) that gives the operators a new important piece of information: an indicator of which feeders are most likely to fail next. Operators can use this information to help determine the allocation of effort and resources toward preventing a cascade. The “worst consequences” feeder may not be the same as the “most likely to fail” feeder, so the operator can choose to allocate resources to feeders that are both likely to fail and for which a failure could lead to more serious consequences. Fig. 19 shows a snapshot of the CAP tool interface.”); see also 7.3 Manhole Event Structure Profiling Tool and Visualization Tool (describing manhole location identifiers for potential failure warning)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Rudin to provide an identifier and timing of equipment failure, at least because doing so would assist with prioritizing repairs, inspections, and corrections. See Rudin 1 Introduction. Al-Dabbagh further discloses a projected type associated with the corresponding terminal failure, (¶ 41 (“The sample analysis results 204 may be used to train the ML models 206 (e.g., a binary classification model, a first multiclass classification model, or a second multiclass classification model) to identify equipment defects and generate recommendations for corrective actions associated with the equipment defects.”)) and a projected remedial action to perform on the corresponding terminal to resolve the corresponding terminal failure before the corresponding terminal experiences an actual failure (¶ 41 (“The sample analysis results 204 may be used to train the ML models 206 (e.g., a binary classification model, a first multiclass classification model, or a second multiclass classification model) to identify equipment defects and generate recommendations for corrective actions associated with the equipment defects.”)).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh, Wegerich, Wang, Bi, Gill, and Cheong as applied to claim 1 above, and further in view of Poornaki (US 2020/0210824 A1; published Jul. 2, 2020).

Regarding claim 11, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 1 as discussed above.
Al-Dabbagh does not expressly disclose wherein training further includes training three types of MLMs comprising a first type associated with a Long Short-Term Memory (LSTM) recurrent neural network MLM, (but see Poornaki ¶ 133 (“In step 1610, the model training module 512 may utilize a long short-term memory (LSTM) network (e.g., as a recurrent network). LSTM networks are well-suited to classifying, processing, and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. The output size of the LSTM network in this example is batch*45.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Poornaki to utilize an LSTM network as a recurrent network to predict failure of equipment, at least because LSTM networks are well suited to classifying, processing, and making predictions based on time series data. Poornaki ¶ 133. Al-Dabbagh further discloses a second type associated with a decision-tree-based ensemble MLM that uses a gradient boosting framework, (¶ 23 (“A supervised ML algorithm (e.g., a gradient boosted decision tree or a gradient boosting algorithm) is applied to laboratory analysis data of oil samples during the three phases.”)) and a third type associated with a gradient boosting framework MLM that uses tree-based learning (¶ 23 (“A supervised ML algorithm (e.g., a gradient boosted decision tree or a gradient boosting algorithm) is applied to laboratory analysis data of oil samples during the three phases.”)). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh, Wegerich, Wang, Bi, Gill, and Cheong as applied to claim 1 above, and further in view of Abadi (US 2023/0029777A1; published Feb. 2, 2023). 
Regarding claim 12, Al-Dabbagh, in view of Wegerich, Wang, Bi, Gill, and Cheong, discloses the invention of claim 1 as discussed above. Al-Dabbagh teaches that the nodes in the network may be configured to provide services for a client device as part of a cloud computing system, see ¶ 99, but Al-Dabbagh does not expressly disclose wherein deploying further includes providing the deployed MLM within the production environment as a Software-as-a-Service (SaaS) to a retailer interface, a retailer system, or a retailer service associated with the terminals (but see Abadi ¶ 64 (“In an embodiment, at 250, the item sequence fraud assessor is provided and processed as a Software-as-a-Service (SaaS) to a retailer associated with the transaction terminal 130 and the fraud detection system 124.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Abadi to provide the equipment failure models as a Software-as-a-Service to a retailer associated with the equipment, at least because doing so would enable providing the service as part of a cloud computing system.

Claims 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Al-Dabbagh in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong.

Regarding claim 13, Al-Dabbagh discloses [a] method, comprising: training machine learning models (MLMs) to predict terminal failures of terminals; (¶ 24 (“During the first phase, a first, binary ML model (or binary ML classifier) is built (or trained) to classify input laboratory analysis results (also “lab analysis data,” “lab results,” or “lab records”) of a plurality of oil samples as good or defective. In this phase, there are two classes available for the classification of the lab results by the first, binary ML model: good and defective (or bad).
For example, good may be assigned the binary value of 0, while bad may be assigned a binary value of 1, or vice versa.”), ¶ 27 (“During the second phase, a second, multiclass ML model (or a multiclass classifier) is trained to further classify the lab results of the oil samples classified as defective during the first phase. In the second phase, the defective oil samples are classified according to defect type. For each defective oil sample, the dataset includes the all or a subset of the plurality of features mentioned above with respect to the first model. In the second phase, there are four classes available for the classification of the defective oil samples by the second ML model: contamination, oil mixing, dissolved gasses, and degradation.”), ¶ 29 (“During the third phase, a third, multiclass ML model (or a multiclass classifier) is trained to predict corrective actions for each defect type. In the third phase, the input data includes the defective oil samples of each type of defect. In some instances, a model is trained for each defect type. Each defect type may have a different number or set of corrective actions. The number of corrective actions may vary from one defect type to another. For example, one defect type may have 5 corrective actions, while another may have 7 corrective actions.”)). Al-Dabbagh does not expressly disclose testing a capability of each MLM to provide correct predictions; (but see Wegerich ¶ 32 (“Turning to FIG. 4, a method for automatically selecting a model for deployment from a set of generated candidate models is shown. In step 410, the reference data is filtered and cleaned. In step 415, a model is generated from the data. 
Models can vary based on tuning parameters, the type of model technology, which variables are selected to be grouped into a model, or the data snapshots used to train the model, or a combination.”)) calculating respective prediction scores for the MLMs based on the testing; (but see Wegerich ¶ 32 (“In step 420, the model metrics described herein are computed for the model.”)) selecting an MLM having a highest prediction score from the calculated prediction scores; (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)) providing the selected MLM to a deployment environment to provide predictions of failures of the terminals during a current period of time, (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich to deploy a model after ranking their performance accuracy, at least because doing so would enable comparing alternative models without significant human intervention. See Wegerich ¶ 5. 
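The Wegerich-style generate/score/filter/rank loop quoted above (steps 415-435) can be illustrated with a minimal sketch; every name below (select_model, score, min_score) is a hypothetical placeholder for illustration, not drawn from Wegerich or the claims:

```python
# Sketch of Wegerich's candidate-model selection (FIG. 4, steps 420-435):
# compute a metric for each candidate, weed out those below a minimum
# criterion, rank the survivors, and return the top-ranked model.

def select_model(candidates, test_data, min_score=0.5):
    """Rank candidate models by prediction score and return the best one."""
    scored = []
    for model in candidates:
        score = model.score(test_data)   # model metric, e.g. held-out accuracy
        if score >= min_score:           # filter out models below minimum criteria
            scored.append((score, model))
    if not scored:
        raise ValueError("no candidate model met the minimum criteria")
    scored.sort(key=lambda pair: pair[0], reverse=True)  # rank by metric
    return scored[0][1]                  # top-ranked model for deployment
```

The filtering step mirrors Wegerich's step 430 ("weed out those that do not meet minimum criteria") and the sort mirrors the ranking in step 435.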
Al-Dabbagh ¶ 6 teaches “The first multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm if the laboratory analysis results are classified as “defective.” The first multiclass classification model is trained to output a predicted defect type [a projected type associated with the corresponding terminal failure] for the defect in equipment. The system includes a second multiclass classification model. The second multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm and the predicted defect type. The second multiclass classification model is trained to output a predicted corrective action [projected remedial action to perform on the corresponding terminal to resolve the corresponding terminal failure before the corresponding terminal experiences an actual failure] pertaining to the equipment based on the predicted defect type for the defect in equipment.” Yet, Al-Dabbagh does not expressly disclose wherein each prediction comprises a data structure including a terminal identifier, a projected failure date, a projected failure type, and a projected remedial action; (but see Wang ¶ 194 (“In some embodiments, the report and alert generation module 518 may generate a report [data structure] indicating any number of potential failures, the probability of such failure, and the justification or reasoning based on the model and the fit of previously identified states associated with future failure of components. The report may be a maintenance plan or schedule to correct the predicted fault (e.g., preferably before failure and a minimum of power disruption).”); ¶ 198 (“FIG. 21 depicts a prospective component failure forecasting risk score and action urgency depiction in some embodiments. 
The prospective component failure forecasting risk score and action urgency depiction may include the predictions of failure for any number of components. For those components where predicted risk is above a trigger threshold, information may be highlighted or otherwise emphasized.”); ¶ 199 (“In FIG. 21, the prospective component failure forecasting risk score and action urgency depiction includes an asset identifier [terminal identifier], component name, update time (e.g., time of the prediction), risk score of failure, forecast lead time [projected date], and indicator (e.g., a classification indicating a degree of danger of fault or performance health). In this example, the generator of asset identifier 303056 has an 83% risk of failure. The generator 303060 has a 60% risk of failure. Assuming that the risk of failure is greater than a trigger threshold for generators, the prospective component failure forecasting risk score and action urgency depiction may highlight or otherwise emphasize information regarding the two generators that are at risk. Further, the failure risk score may provide information for a scheduled plan and prioritization.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich and Wang to generate a report indicating an asset identifier and a forecast lead time indicating a degree of danger of fault or performance health of a piece of equipment, at least because doing so would increase lead time before failure and improve accuracy. See Wang ¶ 1. 
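The claimed prediction data structure, as mapped above to Wang's report fields (asset identifier, forecast lead time, risk score), can be sketched as a simple record; the class and field names below are illustrative assumptions only:

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of the claimed per-prediction data structure.
# Field names are hypothetical; the risk_score field echoes Wang's
# failure risk score rather than any claim limitation.

@dataclass
class FailurePrediction:
    terminal_id: str         # identifier of the at-risk terminal
    projected_date: date     # projected date of the terminal failure
    failure_type: str        # projected type of the failure
    remedial_action: str     # action to resolve the failure before it occurs
    risk_score: float = 0.0  # optional, cf. Wang's risk/urgency depiction
```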
Al-Dabbagh does not expressly disclose obtaining candidate predictions of failures of the terminals from non-selected MLMs during the current period of time; calculating new prediction scores for the selected MLM and the non-selected MLMs; (but see Lin ¶ 71 (“In other implementations, determining the updated accuracy score for a particular trained predictive model includes: summing a number of correct predictive outputs included in the generated predictive output data as determined from the comparison;”), ¶ 111 (“Before the updateable trained predictive models that are stored in the repository 215 are "updated" with the training data stored in the training data queue 213, each trained predictive model in the repository 215 can be rescored for accuracy.”)) selecting a next MLM for a next period of time based on the new prediction scores; and (but see Lin ¶ 113 (“A trained predictive model is selected from the multiple trained predictive models based on their respective new accuracy scores. That is, the new accuracy scores of the trained predictive models stored in the repository 215 can be compared and the most accurate model, i.e., a first trained predictive model, selected.”)) providing the selected next MLM to the deployment environment to provide predictions of failures from the terminals during the next period of time (but see Lin ¶ 113 (“Access is provided to the first trained predictive model to the client computing system 202 (Box 612).”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. 
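Lin's rescore-and-reselect cycle cited above (rescoring both the selected and non-selected models on new data, then selecting the most accurate for the next period) can be sketched as follows; the function and method names are assumptions for illustration, not Lin's:

```python
# Sketch of Lin's periodic rescoring (¶¶ 71, 111, 113): at the end of a
# period, every stored model is rescored on the newest labeled data by
# summing correct predictive outputs, and the most accurate model is
# selected for the next period.

def rescore_and_reselect(models, new_labeled_data):
    """Return the model with the highest fraction of correct predictions."""
    def accuracy(model):
        correct = sum(1 for x, y in new_labeled_data if model.predict(x) == y)
        return correct / len(new_labeled_data)
    return max(models, key=accuracy)
```

A deployment loop would call this at each period boundary, so the "next" model transitions to become the "current" deployed model.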
Al-Dabbagh teaches predicting equipment defects using machine learning models but does not expressly describe predicting failure of terminals (but see Bi ¶ 14 (“FIG. 1 illustrates a system 100 for improving hardware and software diagnostic technology associated with failure predictions, in accordance with embodiments of the present invention. System 100 is enabled to execute a machine learning framework to predict and classify hardware or software failures based on retrieved sensor data, usage data, prior failure data, and specified machine configurations for providing predictive maintenance solutions for hardware devices (e.g., an automated teller machine (ATM)).”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Bi to predict ATM failure using machine learning models and retrieved sensor data, at least because doing so would improve hardware device failure predictions. See Bi ¶ 1. Al-Dabbagh does not expressly disclose wherein the terminals comprise transaction terminals selected from self-service terminals (SSTs) or point-of-sale (POS) terminals, wherein transaction managers execute on the terminals to generate event data during transaction processing for transactions, (but see Gill 1:15-48 (“Automated banking machines have been developed which perform functions such as dispensing cash, receiving deposits, checking the status of accounts and other functions. Automated banking machines used by consumers are referred to as automated teller machines or "ATMs". There are several manufacturers of automated teller machines. Many types of automated banking machines include internal systems [e.g., transaction managers] which monitor their operation. These internal systems often operate to check the available quantities of items which are required for proper operation of the machine. 
This may include the amount of cash available in the machine for dispensing to customers or an operator. Other systems may monitor the availability of supplies such as blank receipt forms or deposit envelopes. Such systems operate to provide a signal when the quantities of such items reach levels indicative of a need for replenishment. It is also common to provide further signals when such items are depleted. The signals [e.g., event data] generated by the machine are indicative of the condition which has occurred. Automated banking machines often include systems for providing signals [e.g., event data] indicative of malfunctions or the existence of other conditions which impede the operation of the machine. For example, machines which accept deposits may reach a condition where the depository is filled and cannot accept further deposits. When this occurs the machine loses all or a portion of its functional capabilities. Other malfunctions may include failures of currency dispensing mechanisms, customer card readers, receipt printers, journal printers or other components of the machine. In each case, upon sensing a failure condition, the machine is operative to generate signals indicative of the condition.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Gill to generate signals indicative of malfunctions or the existence of other conditions which impede the operation of an ATM, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable “monitoring fault conditions at automated banking machines and for automatically notifying a servicer or other entity of fault conditions requiring attention.” Gill 1:9-12. 
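The Gill-style monitoring signals discussed above (replenishment warnings and malfunction faults emitted by a terminal's internal systems) can be sketched as event records; all names and the threshold value are hypothetical, not taken from Gill:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of event data a transaction manager might emit
# when an internal monitor detects a condition such as low cash
# (replenishment warning) or a component malfunction (fault).

@dataclass
class TerminalEvent:
    terminal_id: str
    code: str        # e.g. "CASH_LOW", "DEPOSITORY_FULL", "PRINTER_FAULT"
    severity: str    # "warning" for replenishment, "fault" for malfunction
    timestamp: datetime = field(default_factory=datetime.now)

def check_cash_level(terminal_id, cash_remaining, threshold=1000):
    """Emit a replenishment event when cash falls below the threshold."""
    if cash_remaining < threshold:
        return TerminalEvent(terminal_id, "CASH_LOW", "warning")
    return None
```

Events of this kind would form the "event data" portion of the terminal data consumed by the MLMs.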
Al-Dabbagh does not expressly disclose and wherein terminal data used by the MLMs comprises the event data generated by the terminals, maintenance intervention data from a maintenance system indicating service interventions on the terminals, and support data from a support system indicating service tickets for the terminals (but see Cheong Abstract (“As part of its overall effort to maintain good customer service while managing operational efficiency and reducing cost, a bank in Singapore has embarked on using data and decision analytics methodologies to perform better ad-hoc ATM failure forecasting and plan the field service engineers to repair the machines. We propose using a combined Data and Decision Analytics Framework which helps the analyst to first understand the business problem by collecting, preparing, and exploring data to gain business insights, before proposing what objectives and solutions can and should be done to solve the problem. This paper reports the work in analyzing past daily ad-hoc ATM failures, forecasting ad-hoc ATM failures and then using the forecasted results to optimize the number of field service engineers to deploy in each geographical zone, to minimize the number of daily unattended ad-hoc ATM failures.”), Section IV (“6 months of daily ad-hoc ATM failure data [e.g., maintenance intervention data and/or support data], from October 2013 to March 2014, denoted as OCT_2013 to MAR_2014, were collected and the fields are: ATM ID Date and Time of Failure Ticket ID [e.g., support data] Dispatch ID Problem Category [e.g., support data] The ATM Location data file contains the following fields: ATM ID Location - Latitude Location - Longitude ATM Zone Location Type Using the ATM ID as the key, the tables were merged into a single ATM_Failure_Master_Table as shown in Fig. 2, which contains a total of 73,753 records. 
The fields include: ATM ID Location - Latitude Location - Longitude ATM Zone Location Type Date and Time of Failure Ticket ID Dispatch ID Problem Category”), Section VI (“We used three forecasting methods to forecast the number of ad-hoc failures for the month of March 2014, using 5 months of data from October 2013 to February 2014. The three methods used are Stepwise Autoregressive, Exponential Smoothing and Holt-Winters Additive model. These 3 methods are selected because they are easy to implement and understand, and do not require excessive amounts of past data. Moving average model is not used as it cannot cater to trend component in time series, and Holt-Winters Multiplicative model is not used as there is no multiplicative seasonality effect observed. ARIMA forecasting technique is not used as it requires excessive amounts of data, which is not available.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Al-Dabbagh to incorporate the teachings of Cheong to incorporate ATM failure data collected by engineers during troubleshooting events, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable better ad-hoc ATM failure forecasting and ATM field service optimization. Cheong Section 1.

Regarding claim 14, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 13 as discussed above.
Al-Dabbagh does not expressly disclose continuously iterating to the obtaining at an end of each next period of time and providing the selected next MLM as an optimal MLM for the predictions for each next period of time (but see Lin ¶ 110 (“The first condition that can trigger an update of updateable trained predictive models can be selected to accommodate various considerations. Some example first conditions were already described above in reference to FIG. 5.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. Regarding claim 15, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 13 as discussed above. Al-Dabbagh does not expressly disclose maintaining metrics on terminal data associated with the terminals; and (but see Wegerich ¶ 3 (“According to one of the new techniques described in U.S. Pat. No. 5,764,509 to Wegerich et al., sensor data from equipment to be monitored is accumulated and used to train an empirical model of the equipment. The training includes determining a matrix of learned observations of sets of sensor values inclusive of sensor minimums and maximums. The model is then used online to monitor equipment health, by generating estimates of sensor signals in response to measurement of actual sensor signals from the equipment. The actual measured values and the estimated values are differenced to produce residuals. 
The residuals can be tested using a statistical hypothesis test to determine with great sensitivity when the residuals become anomalous, indicative of incipient equipment failure.”)) sending an alert when a threshold deviation in the terminal data is detected based on the metrics (but see Wegerich ¶ 13 (“An equipment health monitoring system according to the invention is shown in FIG. 1 to comprise an estimation engine 105 at its core, which generates estimates based on a model comprising a learned reference library 110 of observations, in response to receiving a new input observation (comprising readings from multiple sensors) via real-time input module 115. An anomaly-testing module 120 compares the inputs to the estimates from estimation engine 105, and is preferably disposed to perform statistical hypothesis tests on the series of such comparisons to detect anomalies between the model prediction and the actual sensor values from a monitored piece of equipment. A diagnostic rules library 125 is provided to interpret the anomaly patterns, and both the anomaly-testing module 120 and the diagnostic rules library 125 provide informational output to a monitoring graphical user interface (GUI) 130, which alerts humans to developing equipment problems.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich to deploy a model after ranking their performance accuracy, at least because doing so would enable comparing alternative models without significant human intervention. See Wegerich ¶ 5. Regarding claim 16, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 13 as discussed above. 
Al-Dabbagh does not expressly disclose wherein providing the selected MLM to the deployment environment further includes providing daily terminal data associated with the terminals to the selected MLM as input, (but see Lin ¶ 81 (“In another example, the client computing system 202 may upload a new training data set according to a particular schedule, e.g., at the end of each day.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. Al-Dabbagh does not expressly disclose and providing the predictions produced as output by the selected MLM to a retail interface, a retail system, or a retail service associated with the terminals before any projected terminal failure occurs on any particular terminal based on any given prediction (but see Wegerich ¶ 13 (“An equipment health monitoring system according to the invention is shown in FIG. 1 to comprise an estimation engine 105 at its core, which generates estimates based on a model comprising a learned reference library 110 of observations, in response to receiving a new input observation (comprising readings from multiple sensors) via real-time input module 115. An anomaly-testing module 120 compares the inputs to the estimates from estimation engine 105, and is preferably disposed to perform statistical hypothesis tests on the series of such comparisons to detect anomalies between the model prediction and the actual sensor values from a monitored piece of equipment. 
A diagnostic rules library 125 is provided to interpret the anomaly patterns, and both the anomaly-testing module 120 and the diagnostic rules library 125 provide informational output to a monitoring graphical user interface (GUI) 130, which alerts humans to developing equipment problems.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich to deploy a model after ranking their performance accuracy, at least because doing so would enable comparing alternative models without significant human intervention. See Wegerich ¶ 5. Regarding claim 17, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 13 as discussed above. Al-Dabbagh does not expressly disclose wherein selecting the next MLM further includes training the next MLM on most-recent training data over a most-recent training period of time (but see Lin ¶ 112 (“The updateable trained predictive models that are stored in the repository 215 are "updated" with the training data stored in the training data queue 213.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. Regarding claim 18, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 17 as discussed above. 
Al-Dabbagh does not expressly disclose continuously updating the most-recent training data for the most-recent training period of time (but see Lin ¶ 110 (“That is, receiving new training data in and of itself can satisfy the first condition and trigger the update.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. Regarding claim 19, Al-Dabbagh discloses [a] system, comprising: a service comprising at least one processor and a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium comprising executable instructions that when provided to or obtained by the at least one processor from the non-transitory computer-readable storage medium cause the at least one processor to perform operations, comprising: (¶ 92 (“For example, as shown in FIG. 
7A, the computing system 700 may include one or more computer processors 702, non-persistent storage 704 (e.g., volatile memory, such as random access memory (RAM) or cache memory)”)) obtaining historical training data associated with terminal failures and terminal non-failures of terminals; (¶ 23 (“In some example embodiments, the diagnostic and correction system trains one or more machine learning (hereinafter also “ML”) models to classify oil samples as good or defective using the data derived from the analysis of various oil samples and historical expert decision data made based on the sample analysis.”)) sampling and resampling the historical training data to generate a balanced data set of failures and non-failures; (¶ 26 (“In some example embodiments, the binary classifier using the XGBoost algorithm is trained on a first percentage (e.g., 80%) of the data [historical training data], after being oversampled using Synthetic Minority Oversampling Technique (SMOTE) using a support vector machine (SVM) algorithm to detect the sample used for generating new synthetic samples in order to balance the two classes with respect to the data distribution. A second percentage (e.g., 20%) of the oversampled data is used to test the prediction performance of the binary classifier.”)) labeling features within the balanced data set of failures and non-failures, the features comprising terminal identifiers for the terminals, the terminal failures, the terminal non-failures, dates for corresponding failures, and remediation actions taken for the corresponding failures; (¶ 25 (“In some example embodiments, when an analyst tests an oil sample, the oil sample is marked as a good sample or a defective sample. A dataset may be generated from the records of a plurality (e.g., tens of thousands) of oil samples for the categorization into the two classes during the first phase of prediction. For each oil sample, the dataset includes a plurality of features. 
The features correspond to attributes (e.g., parameters or characteristics) identified during the testing of the oil samples. Examples of features are “Aluminum,” “Antimony,” “Appear,” “Barium,” “Boron,” “Base Sediment & Water,” “Cadmium,” “Calcium,” “Chromium,” “Color,” “Copper,” “FERL,” “FERS,” “Filter,” “Flash Point,” “Foam,” “Fuel Dilution,” “Iron,” “Lead,” “Magnesium,” “Moisture,” “Molybdenum,” “Nickel,” “pH,” “Phosphorous,” “Rotating Pressure Vessel Oxidation,” “Silicon,” “Silver,” “Sodium,” “Solids,” “Total Acid Number,” “Total Base Number,” “Tin,” “Titanium,” “Viscosity at 100 degrees Celsius (C),” “Viscosity at 40 degrees C.,” “Water,” “Zinc,” “Oil Type,” “Oil Sump Capacity,” and “Equipment Type.” FERS indicates the direct ferrography test result for smaller particles. FERL indicates the direct ferrography test result for large particles. The features are used as predictor values (also “predictors”), while the target value (also “target”) is one of the binary values of “0” for good or “1” for defective. The predictor variables are used to predict (e.g., determine) the target variable. In some instances, fewer features may be selected to improve the model.”)) portioning the balanced data set of failures and non-failures into training data and testing data; (¶ 26 (“In some example embodiments, the binary classifier using the XGBoost algorithm is trained on a first percentage (e.g., 80%) of the data, after being oversampled using Synthetic Minority Oversampling Technique (SMOTE) using a support vector machine (SVM) algorithm to detect the sample used for generating new synthetic samples in order to balance the two classes with respect to the data distribution. 
A second percentage (e.g., 20%) of the oversampled data is used to test the prediction performance of the binary classifier.”)) training a plurality of machine learning models (MLMs) based on the training data, the plurality of MLMs including two or more types of MLMs; (¶ 24 (“During the first phase, a first, binary ML model (or binary ML classifier) is built (or trained) to classify input laboratory analysis results (also “lab analysis data,” “lab results,” or “lab records”) of a plurality of oil samples as good or defective. In this phase, there are two classes available for the classification of the lab results by the first, binary ML model: good and defective (or bad). For example, good may be assigned the binary value of 0, while bad may be assigned a binary value of 1, or vice versa.”), ¶ 27 (“During the second phase, a second, multiclass ML model (or a multiclass classifier) is trained to further classify the lab results of the oil samples classified as defective during the first phase. In the second phase, the defective oil samples are classified according to defect type. For each defective oil sample, the dataset includes the all or a subset of the plurality of features mentioned above with respect to the first model. In the second phase, there are four classes available for the classification of the defective oil samples by the second ML model: contamination, oil mixing, dissolved gasses, and degradation.”), ¶ 29 (“During the third phase, a third, multiclass ML model (or a multiclass classifier) is trained to predict corrective actions for each defect type. In the third phase, the input data includes the defective oil samples of each type of defect. In some instances, a model is trained for each defect type. Each defect type may have a different number or set of corrective actions. The number of corrective actions may vary from one defect type to another. 
For example, one defect type may have 5 corrective actions, while another may have 7 corrective actions.”)). Al-Dabbagh does not expressly disclose testing the two or more types of MLMs based on the testing data; (but see Wegerich ¶ 32 (“Turning to FIG. 4, a method for automatically selecting a model for deployment from a set of generated candidate models is shown. In step 410, the reference data is filtered and cleaned. In step 415, a model is generated from the data. Models can vary based on tuning parameters, the type of model technology, which variables are selected to be grouped into a model, or the data snapshots used to train the model, or a combination.”)) calculating a respective score for each type of MLM based on the testing; (but see Wegerich ¶ 32 (“In step 420, the model metrics described herein are computed for the model.”)) selecting an MLM having a highest score; (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)) deploying the selected MLM into a production environment of a retailer as a deployed MLM to provide daily predictions on potential terminal failures over a current period of time, (but see Wegerich ¶ 32 (“In step 425, if more models are to be generated, the method steps back to step 415, otherwise at step 430 the models are filtered according to their model metrics to weed out those that do not meet minimum criteria, as described below. In step 435, the remaining models are ranked according to their metrics, and a top rank model is selected for deployment in the equipment health monitoring system.”)). 
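The SMOTE-balancing and 80/20 portioning workflow cited from Al-Dabbagh ¶ 26 can be sketched in miniature. The following Python sketch is illustrative only and is not code from any cited reference: naive random duplication stands in for SMOTE (which synthesizes interpolated minority samples rather than copying rows), and the `failed` label key is an assumed name.

```python
import random

def balance_by_oversampling(rows, label_key="failed", seed=0):
    """Oversample the minority class until the two classes are equal.

    Naive stand-in for SMOTE: real SMOTE interpolates between minority
    neighbors to synthesize new samples instead of duplicating rows.
    """
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

def portion(rows, train_fraction=0.8):
    """Portion the balanced data set into training and testing subsets."""
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

# 3 failures vs. 7 non-failures -> balanced to 7/7, then an 80/20 split.
data = [{"failed": True}] * 3 + [{"failed": False}] * 7
balanced = balance_by_oversampling(data)
train, test_set = portion(balanced)
```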
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich to deploy a model after ranking candidate models by performance accuracy, at least because doing so would enable comparing alternative models without significant human intervention. See Wegerich ¶ 5. Al-Dabbagh ¶ 6 teaches “The first multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm if the laboratory analysis results are classified as “defective.” The first multiclass classification model is trained to output a predicted defect type [a projected type associated with the corresponding terminal failure] for the defect in equipment. The system includes a second multiclass classification model. The second multiclass classification model is trained to classify the laboratory analysis results of the oil sample according to the gradient boosting algorithm and the predicted defect type. The second multiclass classification model is trained to output a predicted corrective action [projected remedial action to perform on the corresponding terminal to resolve the corresponding terminal failure before the corresponding terminal experiences an actual failure] pertaining to the equipment based on the predicted defect type for the defect in equipment.” Yet, Al-Dabbagh does not expressly disclose wherein each prediction comprises a data structure including a terminal identifier, a projected failure date, a projected failure type, and a projected remedial action; (but see Wang ¶ 194 (“In some embodiments, the report and alert generation module 518 may generate a report [data structure] indicating any number of potential failures, the probability of such failure, and the justification or reasoning based on the model and the fit of previously identified states associated with future failure of components.
The report may be a maintenance plan or schedule to correct the predicted fault (e.g., preferably before failure and a minimum of power disruption).”); ¶ 198 (“FIG. 21 depicts a prospective component failure forecasting risk score and action urgency depiction in some embodiments. The prospective component failure forecasting risk score and action urgency depiction may include the predictions of failure for any number of components. For those components where predicted risk is above a trigger threshold, information may be highlighted or otherwise emphasized.”); ¶ 199 (“In FIG. 21, the prospective component failure forecasting risk score and action urgency depiction includes an asset identifier [terminal identifier], component name, update time (e.g., time of the prediction), risk score of failure, forecast lead time [projected date], and indicator (e.g., a classification indicating a degree of danger of fault or performance health). In this example, the generator of asset identifier 303056 has an 83% risk of failure. The generator 303060 has a 60% risk of failure. Assuming that the risk of failure is greater than a trigger threshold for generators, the prospective component failure forecasting risk score and action urgency depiction may highlight or otherwise emphasize information regarding the two generators that are at risk. Further, the failure risk score may provide information for a scheduled plan and prioritization.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Wegerich and Wang to generate a report indicating an asset identifier and a forecast lead time indicating a degree of danger of fault or performance health of a piece of equipment, at least because doing so would increase lead time before failure and improve accuracy. See Wang ¶ 1. 
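Wegerich's filter-rank-select step and Wang's per-terminal prediction record, as mapped to the claim language above, might look like the following Python sketch. All field and function names here are illustrative assumptions, not taken from the references.

```python
from dataclasses import dataclass

@dataclass
class FailurePrediction:
    """The claimed per-prediction data structure: terminal identifier,
    projected failure date, projected failure type, remedial action."""
    terminal_id: str
    projected_failure_date: str
    projected_failure_type: str
    projected_remedial_action: str
    risk_score: float  # cf. Wang's failure risk score

def select_model(candidates, min_score=0.5):
    """Filter out candidates that miss the minimum criteria, rank the
    rest by score, and return the top-ranked model for deployment."""
    viable = [c for c in candidates if c["score"] >= min_score]
    if not viable:
        raise ValueError("no candidate model met the minimum criteria")
    return max(viable, key=lambda c: c["score"])
```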
Al-Dabbagh does not expressly disclose calculating new scores for the deployed MLM and non-deployed MLMs not selected for operation within the production environment at the end of the current period of time; (but see Lin ¶ 111 (“Before the updateable trained predictive models that are stored in the repository 215 are “updated” with the training data stored in the training data queue 213, each trained predictive model in the repository 215 can be rescored for accuracy. That is, new accuracy scores of the trained models in the repository are determined based on the received training data sets stored in the training data queue 213 (Box 608). The new accuracy scores are determined using test data. The test data can include the data in the training data queue 213 in addition to previously received training data that is stored in the training data repository 214. The techniques described above in reference to FIG. 5 to determine what to include in the test data and how to calculate the new accuracy scores can be employed here to determine the new accuracy scores.”); see ¶ 110 (describing time intervals for updates to training data)) selecting a next MLM having a highest score for a next period of time; (but see Lin ¶ 113 (“A trained predictive model is selected from the multiple trained predictive models based on their respective new accuracy scores. That is, the new accuracy scores of the trained predictive models stored in the repository 215 can be compared and the most accurate model, i.e., a first trained predictive model, selected.”)) training the next selected MLM on most-recent training data over a most-recent training period of time; (but see Lin ¶ 112 (“The updateable trained predictive models that are stored in the repository 215 are “updated” with the training data stored in the training data queue 213.
That is, retrained predictive models are generated (Box 610) using: the training data queue 213; the updateable trained predictive models obtained from the repository 215; and the corresponding training functions that were initially used to train the updateable trained predictive models, which training functions are obtained from the training function repository 216.”)) deploying the next MLM into the production environment as the deployed MLM to provide the daily predictions during the next period of time; and (but see Lin ¶ 113 (“Access is provided to the first trained predictive model to the client computing system 202 (Box 612).”)) continuously iterating to recalculate the scores at the end of each next period of time (but see Lin ¶ 110 (“The first condition that can trigger an update of updateable trained predictive models can be selected to accommodate various considerations. Some example first conditions were already described above in reference to FIG. 5. That is, receiving new training data in and of itself can satisfy the first condition and trigger the update. Receiving an update request from the client computing system 202 can satisfy the first condition. Other examples of first condition include a threshold size of the training data queue 213. That is, once the volume of data in the training data queue 213 reaches a threshold size, the first condition can be satisfied and an update can occur. The threshold size can be defined as a predetermined value, e.g., a certain number of kilobytes of data, or can be defined as a fraction of the training data included in the training data repository 214. That is, once the amount of data in the training data queue is equal to or exceeds x% of the data used to initially train the trained predictive model 218 or x% of the data in the training data repository 214 (which may be the same, but could be different), the threshold size is reached.
In another example, once a predetermined time period has expired, the first condition is satisfied. For example, an update can be scheduled to occur once a day, once a week or otherwise.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Lin to periodically rescore and retrain trained predictive models, at least because doing so would provide access to a trained predictive model that has been trained with training data reflective of changes to the data. See Lin ¶ 451. Al-Dabbagh teaches predicting equipment defects using machine learning models but does not expressly describe predicting failure of terminals (but see Bi ¶ 14 (“FIG. 1 illustrates a system 100 for improving hardware and software diagnostic technology associated with failure predictions, in accordance with embodiments of the present invention. System 100 is enabled to execute a machine learning framework to predict and classify hardware or software failures based on retrieved sensor data, usage data, prior failure data, and specified machine configurations for providing predictive maintenance solutions for hardware devices (e.g., an automated teller machine (ATM)).”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Bi to predict ATM failure using machine learning models and retrieved sensor data, at least because doing so would improve hardware device failure predictions. See Bi ¶ 1. 
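Lin ¶ 110's example “first conditions” for triggering a rescore/retrain cycle (an explicit client request, the training-data queue reaching a fraction of the repository, or a scheduled interval elapsing) reduce to a simple predicate. The Python sketch below is a hedged illustration; the 10% threshold and one-day interval are assumed example values, since Lin leaves both configurable.

```python
def update_due(queue_bytes, repo_bytes, seconds_since_update,
               threshold_fraction=0.10, max_interval=86_400,
               update_requested=False):
    """Return True when any of Lin's example first conditions is met.

    threshold_fraction and max_interval (one day) are illustrative;
    Lin describes these as x% of the repository and a once-a-day or
    once-a-week schedule, respectively.
    """
    if update_requested:                         # explicit client request
        return True
    if repo_bytes and queue_bytes >= threshold_fraction * repo_bytes:
        return True                              # queue reached x% of repo
    return seconds_since_update >= max_interval  # scheduled update elapsed
```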
Al-Dabbagh does not expressly disclose wherein the terminals comprise transaction terminals selected from self-service terminals (SSTs) or point-of-sale (POS) terminals, wherein transaction managers execute on the terminals to generate event data during transaction processing for transactions, (but see Gill 1:15-48 (“Automated banking machines have been developed which perform functions such as dispensing cash, receiving deposits, checking the status of accounts and other functions. Automated banking machines used by consumers are referred to as automated teller machines or "ATMs". There are several manufacturers of automated teller machines. Many types of automated banking machines include internal systems [e.g., transaction managers] which monitor their operation. These internal systems often operate to check the available quantities of items which are required for proper operation of the machine. This may include the amount of cash available in the machine for dispensing to customers or an operator. Other systems may monitor the availability of supplies such as blank receipt forms or deposit envelopes. Such systems operate to provide a signal when the quantities of such items reach levels indicative of a need for replenishment. It is also common to provide further signals when such items are depleted. The signals [e.g., event data] generated by the machine are indicative of the condition which has occurred. Automated banking machines often include systems for providing signals [e.g., event data] indicative of malfunctions or the existence of other conditions which impede the operation of the machine. For example, machines which accept deposits may reach a condition where the depository is filled and cannot accept further deposits. When this occurs the machine loses all or a portion of its functional capabilities.
Other malfunctions may include failures of currency dispensing mechanisms, customer card readers, receipt printers, journal printers or other components of the machine. In each case, upon sensing a failure condition, the machine is operative to generate signals indicative of the condition.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Gill to generate signals indicative of malfunctions or the existence of other conditions which impede the operation of an ATM, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable “monitoring fault conditions at automated banking machines and for automatically notifying a servicer or other entity of fault conditions requiring attention.” Gill 1:9-12. Al-Dabbagh does not expressly disclose and wherein the historical training data comprises the event data generated by the terminals, maintenance intervention data from a maintenance system indicating service interventions required for the terminals, and support data from a support system indicating service tickets entered for the terminals (but see Cheong Abstract (“As part of its overall effort to maintain good customer service while managing operational efficiency and reducing cost, a bank in Singapore has embarked on using data and decision analytics methodologies to perform better ad-hoc ATM failure forecasting and plan the field service engineers to repair the machines. 
We propose using a combined Data and Decision Analytics Framework which helps the analyst to first understand the business problem by collecting, preparing, and exploring data to gain business insights, before proposing what objectives and solutions can and should be done to solve the problem. This paper reports the work in analyzing past daily ad-hoc ATM failures, forecasting ad-hoc ATM failures and then using the forecasted results to optimize the number of field service engineers to deploy in each geographical zone, to minimize the number of daily unattended ad-hoc ATM failures.”), Section IV (“6 months of daily ad-hoc ATM failure data [e.g., maintenance intervention data and/or support data], from October 2013 to March 2014, denoted as OCT_2013 to MAR_2014, were collected and the fields are: ATM ID Date and Time of Failure Ticket ID [e.g., support data] Dispatch ID Problem Category [e.g., support data] The ATM Location data file contains the following fields: ATM ID Location - Latitude Location - Longitude ATM Zone Location Type Using the ATM ID as the key, the tables were merged into a single ATM_Failure_Master_Table as shown in Fig. 2, which contains a total of 73,753 records. The fields include: ATM ID Location - Latitude Location - Longitude ATM Zone Location Type Date and Time of Failure Ticket ID Dispatch ID Problem Category”), Section VI (“We used three forecasting methods to forecast the number of ad-hoc failures for the month of March 2014, using 5 months of data from October 2013 to February 2014. The three methods used are Stepwise Autoregressive, Exponential Smoothing and Holt-Winters Additive model. These 3 methods are selected because they are easy to implement and understand, and do not require excessive amounts of past data. Moving average model is not used as it cannot cater to trend component in time series, and Holt-Winters Multiplicative model is not used as there is no multiplicative seasonality effect observed.
ARIMA forecasting technique is not used as it requires excessive amounts of data, which is not available.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Al-Dabbagh to incorporate the teachings of Cheong to incorporate ATM failure data collected by engineers during troubleshooting events, at least because Al-Dabbagh teaches “Generally, as part of industrial operations, engineers or technicians monitor the condition of equipment utilized in the industrial operations in order to prevent equipment failure and resulting interruptions,” and thus doing so would enable better ad-hoc ATM failure forecasting and ATM field service optimization. Cheong Section 1. Regarding claim 20, Al-Dabbagh, in view of Wegerich, Wang, Lin, Bi, Gill, and Cheong, discloses the invention of claim 19 as discussed above. Al-Dabbagh does not expressly disclose wherein each terminal is a self-service terminal, an automated teller machine, a point-of-sale terminal, or a kiosk (but see Bi ¶ 14 (“FIG. 1 illustrates a system 100 for improving hardware and software diagnostic technology associated with failure predictions, in accordance with embodiments of the present invention. System 100 is enabled to execute a machine learning framework to predict and classify hardware or software failures based on retrieved sensor data, usage data, prior failure data, and specified machine configurations for providing predictive maintenance solutions for hardware devices (e.g., an automated teller machine (ATM)).”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Al-Dabbagh to incorporate the teachings of Bi to predict ATM failure using machine learning models and retrieved sensor data, at least because doing so would improve hardware device failure predictions. See Bi ¶ 1. 
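Of the three forecasting methods cited from Cheong, exponential smoothing is the simplest to illustrate: each smoothed level is a weighted blend of the newest observation and the prior level, and the final level serves as the next-period forecast. A minimal Python sketch follows; the smoothing constant 0.3 is an assumed example value, not a parameter from Cheong.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    level_t = alpha * x_t + (1 - alpha) * level_{t-1}; the final level
    is the forecast for the next period (e.g., the next day's count of
    ad-hoc ATM failures).
    """
    if not series:
        raise ValueError("series must be non-empty")
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```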
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID KHAN whose telephone number is (571)270-0419. The examiner can normally be reached M-F, 9-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571)272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHAHID K KHAN/Primary Examiner, Art Unit 2146

Prosecution Timeline

Jun 22, 2022
Application Filed
May 21, 2025
Non-Final Rejection — §103
Aug 27, 2025
Response Filed
Nov 29, 2025
Final Rejection — §103
Feb 03, 2026
Response after Non-Final Action
Mar 02, 2026
Request for Continued Examination
Mar 11, 2026
Response after Non-Final Action
Apr 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591768
DEEP LEARNING ACCELERATION WITH MIXED PRECISION
2y 5m to grant Granted Mar 31, 2026
Patent 12579516
System and Method for Organizing and Designing Comment
2y 5m to grant Granted Mar 17, 2026
Patent 12566813
SYSTEMS AND METHODS FOR RENDERING INTERACTIVE WEB PAGES
2y 5m to grant Granted Mar 03, 2026
Patent 12547298
Display Method and Electronic Device
2y 5m to grant Granted Feb 10, 2026
Patent 12530916
MULTIMODAL MULTITASK MACHINE LEARNING SYSTEM FOR DOCUMENT INTELLIGENCE TASKS
2y 5m to grant Granted Jan 20, 2026
Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
90%
With Interview (+15.7%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 389 resolved cases by this examiner. Grant probability derived from career allow rate.
