Prosecution Insights
Last updated: April 19, 2026
Application No. 18/447,266

MONITORING AND ALERTING SYSTEM AND METHOD

Final Rejection: §101, §103
Filed: Aug 09, 2023
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Bread Financial Payments Inc.
OA Round: 4 (Final)
Grant Probability: 35% (At Risk)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 4y 8m
Grant Probability with Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% for resolved cases with an interview vs. without
Typical Timeline: 4y 8m average prosecution; 57 applications currently pending
Career History: 422 total applications across all art units
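The headline numbers in this panel follow from simple arithmetic on the examiner's career data. A quick sanity check, using only figures from this report; note that treating the +36.9% lift as the with-interview grant probability (72%) measured against the overall 35% career rate is an assumption about how the panel defines "lift":

```python
# Sanity check of the examiner statistics shown above (figures from this report).
granted, resolved = 128, 365

allow_rate = granted / resolved                 # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")   # matches the 35% shown

# Assumption: the "+36.9% interview lift" is the with-interview grant
# probability (72%) minus the overall career allowance rate.
with_interview = 0.72
lift = with_interview - allow_rate
print(f"Interview lift: {lift:+.1%}")           # matches the +36.9% shown
```

The two derived values (35.1% and +36.9%) line up with the panel, which suggests the lift is indeed computed against the overall career rate rather than a separate no-interview cohort.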

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 365 resolved cases.
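Each row above pairs the examiner's per-statute rate with a delta against the Tech Center average. Back-solving the implied TC average (rate minus delta) shows the panel appears to use a single baseline of roughly 40% for every statute; a quick check, with values taken from this report:

```python
# Recover the implied Tech Center average from each statute row above.
rows = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (27.7, -12.3),
    "103": (40.4, +0.4),
    "102": (3.4, -36.6),
    "112": (20.7, -19.3),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta       # implied Tech Center average
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Every implied average comes out at 40.0%, consistent with the note that the Tech Center averages are a single estimated baseline rather than per-statute figures.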

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Final Office Action responsive to Applicant’s communication of 1/28/26 in response to the Non-Final rejection of 12/10/25. Claims 1-20 are pending and have been rejected below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Step One - First, pursuant to step 1 in MPEP 2106.03, claim 1 is directed to a method, which is a statutory category.

Step 2A, Prong One - MPEP 2106.04 - Claim 1 recites: “selecting at least one process; identifying a plurality of metrics for said at least one process; automatically calculating a performance envelope for at least one of said plurality of metrics; continually monitoring, in real-time, data related to said at least one process for any data related to said plurality of metrics, and automatically and dynamically updating said performance envelope for at least one of said plurality of metrics [Examiner notes that no computer is required to perform this step in claim 1; should a computer be used for each step, as in claim 8 and claim 15, see the discussion below in claim 8 at Step 2A, prong two and Step 2B for the “automatically and dynamically” language]; automatically storing any of said data related to said plurality of metrics in an alert …; automatically analyzing said stored data for any data points outside of said performance envelope for said at least one of said plurality of metrics; 
wherein said at least one of said plurality of metrics are further monitored by a client utilizing said at least one process (Applicant’s specification [0019] as published states “The term “client” refers to a retailer, brand, business, IT department, IT manager, customer, manager, combination thereof, or the like, that would utilize the monitoring and alerting system for one or more of their own processes. [0064] as published states “In one embodiment, direct alert 210 is a real-time alert that will provide an (automated and/or curated) email, text, message, or other pop-up type alert directly to a client or a designated user or users (e.g., process manager, owner, designated IT personnel, or the like); automatically generating an alert for one or more identified data points; receiving a response for each of said identified data points, said response comprising: a description of a remediation performed in a user digestible response; and updating, in said alert …, said identified data points with said response.” As drafted, this is, under its broadest reasonable interpretation, within the Abstract idea grouping of “certain methods of organizing human activity” (e.g. commercial interactions or managing interactions between people (following rules or instructions) or “mitigating risk,” where the client (such as a business or manager) is monitoring the metrics (e.g. 
related to a loan approval) as well as “mathematical relationships” as here we are performing a series of mathematical operations- first, identifying metrics, calculating a performance envelope ([0023] as published – any process metrics or business KPIs; [0028] approval rate; [0031] bureau pass rate), analyzing data for performance outside an envelope, generating an alert where a response for the identified data points has a description of a remediation ([0021] as published - In general, the tool will analyze data related to the one or more processes and their underlying business metrics and provide real-time or near real-time user digestible information based on the results of the analysis. [0073] – personnel to respond to and remediate an alert). Accordingly, claim 1 is directed to an abstract idea because it is for commercial interactions for businesses in a series of metrics for a business process (e.g. a loan, along with approval rate metrics in some examples; FIG. 5, [0027-0030] as published - the various metrics relate to financial processes – accounts, “make offer rate”, “send rate”, approvals) and doing a series of mathematical calculations and analysis steps to calculate KPI (key performance indicators) along with analyzing data points outside of a threshold/envelope. Step 2A, Prong Two - MPEP 2106.04 - This judicial exception is not integrated into a practical application. Examiner notes that claim 1 is devoid of a computer performing the steps. As an initial step, Examiner recommends amending claim 1 to recite a computer performing each step. 
In particular, claim 1 recites additional elements that are: continually monitoring, in real-time, data related to said at least one process for any data related to said plurality of metrics, and automatically and dynamically updating said performance envelope for at least one of said plurality of metrics; automatically storing any of said data related to said plurality of metrics in an alert database; updating, in said alert database, said identified data points with said response. (MPEP 2106.05(f) applies: the claim involves a computer (as likely to be amended), and is considered “apply it [the abstract idea – math relationships] on a computer”; it merely uses a computer as a tool to perform an abstract idea; the “storing” of data and a “database” are viewed as “field of use” (MPEP 2106.05(h)).) Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.

Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is treated under MPEP 2106.05(f) (Mere Instructions to Apply an Exception: “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The limitation of “storing” data in a “database” is also a conventional computer function – see MPEP 2106.05(d)(II)(iv), Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334. The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.

Independent claim 8 is a statutory category at step one (article of manufacture). 
Claim 8 further recites: “A non-transitory computer-readable medium for storing instructions, said instructions comprising: one or more instructions which, when executed by one or more processors, cause one or more processors”, which are additional elements treated as “apply it [abstract idea] on a computer” (MPEP 2106.05(f)) at step 2A, prong two and step 2B. The remaining limitations are similar to claim 1 and are rejected for the same reasons at step 2A, prong one; step 2A, prong two; and step 2B. Examiner notes that in claim 8 this means each step is performed by a “computer,” and the recitations of “automatically” and the new “automatically and dynamically” are viewed as just “apply it [abstract idea] on a computer.” Examiner reminds Applicant that relying on the speed of the computer itself for business processes, like the loan approvals and financial metrics here, is viewed as similar to MPEP 2106.05(a)(1)(I) Examples… not sufficient to show an improvement in computer functionality: “ii. Accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer,” FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095.

Independent claim 15 further recites: “A computer system comprising: a memory; a display; and at least one processor, said at least one processor configured to”, which are additional elements treated as “apply it [abstract idea] on a computer” (MPEP 2106.05(f)) at step 2A, prong two and step 2B. The remaining limitations are similar to claim 1 and claim 8 and are rejected for the same reasons at step 2A, prong one; step 2A, prong two; and step 2B.

Claims 2, 9, 16 narrow the abstract idea by naming the metrics used, e.g. KPI metrics. Claims 3, 10, 17 narrow the abstract idea by having further mathematical relationships including normal behavior of attributes, and thresholds. 
Claims 4, 11, 18 narrow the abstract idea by having further mathematical relationships including normal behavior of attributes, and value ranges. Claims 5, 12 recite the limitations of claims 3 and 4 and are rejected for the same reasons. Claims 6, 13 narrow the abstract idea by having further analysis and giving information to the user if no remediation is needed for the alert. Claims 7, 14 narrow the abstract idea by having further analysis and giving information to the user if remediation was performed, including a description of the remediation. Claim 19 narrows the abstract idea by giving an alert of the analysis to a user. It also recites additional elements to the extent the client is a second computer; at step 2A, prong two and step 2B, this is treated as “apply it [abstract idea] on a computer” (MPEP 2106.05(f)). Claim 20 is rejected for similar reasons as claim 19, and further recites a software destination on the computer, where the alert is given to an “interactive dashboard application.” At this time, this only gives the destination to which the alert goes, and is also treated as “apply it [abstract idea] on a computer” (MPEP 2106.05(f)) and field of use (MPEP 2106.05(h)). Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information on 101 rejections, see MPEP 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 7-9, 14-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinathan (US 2015/0142713) and Merrill (US 2019/0340518). 
Concerning claim 1, Gopinathan discloses: A method (Gopinathan see par 16 - method are disclosed that provide an adaptive, automated neural network setup for model creation and deployment, which continuously learns from its output and re-creates itself whenever required; his platform can be applied to any type of business that uses data from any event requiring a decision as an output, including (but not limited to) consumer credit, business credit… and many others) comprising: selecting at least one process (Gopinathan see par 16 - method are disclosed that provide an adaptive, automated neural network setup for model creation and deployment, which continuously learns from its output and re-creates itself whenever required This platform amalgamates various processes that can essentially build the decision making system of a process or an entire collection of processes (such as required to operate an entire business), including but not limited to variable generation, model building, optimization and fine-tuning, portfolio performance evaluation, report generation and alert triggering mechanism); identifying a plurality of metrics for said at least one process (Gopinathan – see par 50 - Since the data is always fetched from the Master Data Manager whenever a new model is build or an existing one is rebuilt, or the performance of a certain business metric has to be evaluated, this data needs to be synced into the Data Manager. The Master Data Manager 110 may be a master repository of all system data. The system also may have an Automated Report Generation Module 103 that takes data from Master Data Manager 110, and applies trigger alerts and distribution shift alarms along with reports on the performance of key business metrics. 
In a lending environment, for example, there is some expected minimum number of applications for loans, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc that should be satisfied.); automatically calculating a performance envelope for at least one of said plurality of metrics (Gopinathan – see par 50 - The system also may have an Automated Report Generation Module 103 that takes data from Master Data Manager 110, and applies trigger alerts and distribution shift alarms along with reports on the performance of key business metrics. In a lending environment, for example, there is some expected minimum number of applications for loans, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc that should be satisfied. If any of these are not met, an alert is triggered by this system to notify for human intervention. See par 52 - If the performance of any of the deployed models or business metrics falls below a predefined threshold, it sends a request to the Model Building Module 105 to rebuild the underperforming model(s) or to a Decision Logic Module 108 to improve the corresponding decision logic; The decision logic might depend upon the underperforming models or be affected directly by the new data. The decision logic may be modified to make the threshold fit into the expected values.); continually monitoring, in real-time, data related to said at least one process for any data related to said plurality of metrics and automatically and dynamically updating said performance envelope for at least one of said plurality of metrics ([0023] as published –“metrics” is any process metrics or business KPIs; [0028] approval rate; [0031] bureau pass rate Gopinathan discloses the limitations – see par 52 - The Model Management Module 106 may constantly monitor the performance levels of the models and business metrics with each new transaction. 
If the performance of any of the deployed models or business metrics falls below a predefined threshold, it sends a request to the Model Building Module 105 to rebuild the underperforming model(s) or to a Decision Logic Module 108 to improve the corresponding decision logic. For example, in a lending business, a lending decision may be made by making use of a set of models. The repayment behavior of the borrowers from latest vintages may be compared with the predictions of the models and if the performance parameters show a drop from estimated values, then the Model Management Module may trigger a request to the Model Building Module to rebuild models that are underperforming; The decision logic might depend upon the underperforming models or be affected directly by the new data. The decision logic may be modified to make the threshold fit into the expected values; see par 53- In the above lending example, where the profit model was rebuilt, the evaluation of performance of the models and trigger for the rebuild is given by the Model Management Module. The decision logic may also be revisited to check for any scope of optimization. With a modification of the models and other business rules, the system may generate a completely different score combination and decision logic); automatically storing any of said data related to said plurality of metrics in an alert database (Gopinathan – see par 53 - The decision logic (comprising of the profit model score, a third party credit score and a fraud rule) is altered accordingly and the new thresholds are computed. Both the Model Building Module 105 and Model Management Module 106 use data from the Master Data Manager 110. See par 62, FIG. 
3 - In the Near Real Time Processor 102, offline decision outcomes data 304, and data from other additional external sources 302, enters a Data Pre-Processing module 242, where complex variable computations, which include (but are not limited to) dynamic risk computations, etc, are carried out. External sources may include (but are not limited to) data collected from social networks, etc. The data then may be synched with the information received from the Variable Generation Module 104, which contains dependent variables (DVs) and independent variables (IDVs); see par 63, FIG. 1, 3 – Near Real Time Processor 102 enables generation of reports in a Reports Data Mart 306); automatically analyzing said stored data for any data points outside of said performance envelope for said at least one of said plurality of metrics (Gopinathan – see par 52 - The Model Management Module 106 may constantly monitor the performance levels of the models and business metrics with each new transaction. A (new) transaction or event may be a request or an exchange or transfer of a tangible product or service for an asset/money/payment/promise of payment between one or more parties, for example a loan application from a customer to a lender, loan acceptance by lender. Another example may be a credit/debit card transaction for purchase of goods, acceptance of customer details by the retailer and delivery of the purchased entity. If the performance of any of the deployed models or business metrics falls below a predefined threshold; For example, in a lending business, a lending decision may be made by making use of a set of models. 
The repayment behavior of the borrowers from latest vintages may be compared with the predictions of the models and if the performance parameters show a drop from estimated values, then the Model Management Module may trigger a request to the Model Building Module to rebuild models that are underperforming); wherein said at least one of said plurality of metrics are further monitored by a client utilizing said at least one process (Applicant’s specification [0019] as published states “The term “client” refers to a retailer, brand, business, IT department, IT manager, customer, manager, combination thereof, or the like, that would utilize the monitoring and alerting system for one or more of their own processes. [0064] as published states “In one embodiment, direct alert 210 is a real-time alert that will provide an (automated and/or curated) email, text, message, or other pop-up type alert directly to a client or a designated user or users (e.g., process manager, owner, designated IT personnel, or the like).” Gopinathan discloses the limitations based on broadest reasonable interpretation in light of the specification – a person/party/business/manager can receive the metrics by alert to manually monitor a process with metrics - see par 52 - The Model Management Module 106 may constantly monitor the performance levels of the models and business metrics with each new transaction. Another example may be a credit/debit card transaction ; see par 63 - The Near Real Time Processor 102 may also enable the generation of various types of reports on a continuous basis in a Reports Data Mart 306. 
The Master data Manager 110 provides data to the Automated Reports Generation Module 103 which computes various business performance metrics concerning, but not limited to profit, bad debt, acquisition, model performance, champion/challenger performance; In the system, alarms and alerts are triggered based on the comparative analysis against data from a previous day, previous month, previous year, previous market conditions or any such conditions specified. This module (103 report generation) constantly measures… and monitors other such diagnostic metrics. A case is also flagged for human intervention. The human operator may interfere whenever he/she so desires); automatically generating an alert for one or more identified data points (Gopinathan – see FIG. 6, par 76 - the Model Management Module 106 may also synchronize the model score data, for offline validations, with the Modeling Data Mart and sends updated specifications with the new models and variables to the Automated Reports Generation Module 103 (part of the Near Real Time Module 102), which returns the metrics for all existing variables/models. The Model Management Module 106 may also continuously monitor the performance of the business strategy currently deployed and if the performance falls below a pre-specified threshold, automatically triggers rebuilds of the predictive models involved in the underperforming strategy. The Model Management Module 106 also may be able to identify the part of the strategy (model/set of models) that is under-performing and triggers an alert to the Model Building Module 105 to start a rebuilding process for the corresponding models; see par 78 - module 107 also may evaluate how the predictive models perform in the different challenger segments, and generates rebuild triggers for the model management module if the performance falls below expected or specified thresholds. 
After a sufficient number of transactions (varies by business product and specific challenger being tested), a successful challenger strategy becomes a part of the current business strategy in production; else it is either modified further or flagged for human intervention. See par 79 - In a lending environment, for example, if one wants to control credit risk and sets an upper bound for the acceptable range, then this module would take into account all the models, business rules, test segments, etc, to generate an optimized strategy which would approve only those application whose predicted cumulative credit risk falls below this upper bound. For example, in the case of John's loan application, the decision logic comprises of a profit model score, a third party credit score and a fraud alert and a threshold may be determined based on these three values. The Decision Logic Module 108 may also trigger a rebuild (giving few constraints) of certain models that are a part of the strategy. The final decision logic (including model, IDV, and DecisionLogic specs) arrived at will be reviewed and presented in a common format (across the system) that will enable automatic deployment into the Real Time Decision Engine 101. See par 80 - In an embodiment of the AMP system used for consumer credit, the Decision Logic Module 108 creates a business strategy based on constrained optimization where the objective function is (to maximize) Total Profit from the portfolio. Constraints typically are in the form of available lending capital and reserve requirements and specific conditions on metrics related to bad debt and unit profit (i.e. individual loan or customer level profit), and the inputs are the different predictive models built for each aspect of the portfolio (risk, conversion, profitability, income, etc). The business strategy typically ends up as a set of mathematical rules derived from the models that are applied to each transaction. 
The derived business strategy, along with the underlying predictive models and variables is then passed onto the Pre-Production Testing 109, to be eventually deployed into the Real Time Decision Engine 101.); receiving a response for each of said identified data points (Gopinathan – see par 63 - The Near Real Time Processor 102 may also enable the generation of various types of reports on a continuous basis in a Reports Data Mart 306. The Master data Manager 110 provides data to the Automated Reports Generation Module 103 which computes various business performance metrics concerning, but not limited to profit, bad debt, acquisition, model performance, champion/challenger performance. This module constantly measures correlations between different independent variables, correlation between independent variables and dependent variables, applies the Kolmogorov-Smirnov statistical distribution test across all variables to identify significant distributional shifts over various time intervals, and monitors other such diagnostic metrics. If an anomaly is found, this module automatically triggers corrective action invoking the Model Management Module 106 (which then triggers model rebuilds for affected variables); See par 76 - the models that are deployed in the Real Time Decision Engine 101 can be set to be rebuilt every day (or even every hour/minute, depending on the business) and adapt dynamically to the ever changing environment. In a lending environment, the models in place may predict risk involved in a transaction. Marketing associated with the incoming traffic might change, resulting in a different behavior of the applicants from the one predicted by the risk model. 
If this model is built again using data from recent transactions, it should take into account the bias caused by the marketing campaign and predict risk more accurately henceforth.), said response comprising: a description of a remediation performed in a user digestible format (First, Examiner notes that the content of the “description” here is not entitled to patentable weight – see MPEP 2111.05 - , where the claim as a whole is directed to conveying a message or meaning to a human reader independent of the intended computer system, and/or the computer-readable medium merely serves as a support for information or data, no functional relationship exists. For example, a claim to a memory stick containing tables of batting averages, or tracks of recorded music, utilizes the intended computer system merely as a support for the information. Such claims are directed toward conveying meaning to the human reader rather than towards establishing a functional relationship between recorded data and the computer. Nonetheless, for compact prosecution, art still applied: For claim interpretation - Applicant specification [0021] as published states “In general, the tool will analyze data related to the one or more processes and their underlying business metrics and provide real-time or near real-time user digestible information based on the results of the analysis.”; [0076-0077] as published states “the self-service RCA path 431 and the managed RCA path 432 (which could also therefore include the hybrid path) include a research 436 feature used to research any data that has successfully passed through the prioritize/filter 435 component. 
In one embodiment, the report includes all the data that has successfully passed through the prioritize/filter 435 component as well as any research found by the research 436 feature Gopinathan discloses the limitations based on broadest reasonable interpretation in light of the specification – See par 76 - The Model Management Module 106 may also continuously monitor the performance of the business strategy currently deployed and if the performance falls below a pre-specified threshold, automatically triggers rebuilds of the predictive models involved in the underperforming strategy. The Model Management Module 106 also may be able to identify the part of the strategy (model/set of models) that is under-performing and triggers an alert to the Model Building Module 105 to start a rebuilding process for the corresponding models; see par 78 - module 107 also may evaluate how the predictive models perform in the different challenger segments, and generates rebuild triggers for the model management module if the performance falls below expected or specified thresholds. After a sufficient number of transactions (varies by business product and specific challenger being tested), a successful challenger strategy becomes a part of the current business strategy in production; else it is either modified further or flagged for human intervention.) It is unclear if Gopinathan discloses a “response” as best understood, and a successful strategy for getting performance above a threshold can be “flagged for human intervention” (See par 78). 
Merrill is also applied here as it discloses: receiving a “response” for each of said identified data points, said “response comprising: a description of a remediation performed in a user digestible format” (Merrill – See par 28 - embodiments disclosed herein provide libraries and tools that allow an operator to perform modeling and analysis tasks that record data in a knowledge graph, including tasks such as allowing an operator to flag unreliable variables and features, for instance: flagging of features that lead to disparities in approval rate between protected classes and a baseline;… including comparison of distributions of each feature in the population that would be newly approved by the model with the distributions of the same feature within the population of applicants that would have been approved by both the old model and the new model; See par 74 - In one embodiment, the disparate impact analysis includes approval rates among population segments such as protected classes and a baseline, determination of which features contribute to the approval rate disparity, and an analysis of each feature's economic impact. In one embodiment, the disparate impact analysis includes the disposition of a variable's suitability for inclusion in the model (included or suppressed) and the reason for the disposition (including a business justification, quantification of harm, and natural language representing the reasoning behind the disposition, and attribution of the judgment to an identified responsible person). See par 172 - These machine-readable relations enable new and useful automated analysis and reporting: a query to the knowledge graph may now answer if a detected anomaly matters (is this feature important to the model and what is the economic impact of the anomaly) and how to fix it). 
Gopinathan and Merrill disclose: updating, in said alert database, said identified data points with said response (Gopinathan – See par 53 - In the above lending example, where the profit model was rebuilt, the evaluation of performance of the models and trigger for the rebuild is given by the Model Management Module. The decision logic may also be revisited to check for any scope of optimization. With a modification of the models and other business rules, the system may generate a completely different score combination and decision logic. In this case, since only one model is considered, for the sake of simplicity let model M2 replaces model M1. The decision logic (comprising of the profit model score, a third party credit score and a fraud rule) is altered accordingly and the new thresholds are computed. see par 54 - In a lending business, from time to time, new models may be built, improved strategies and algorithms may be developed, third party services may get updated and improved versions may have to be put into use for better results and more accurate estimates. See par 76 – newly built profit model with new variables; The Model Management Module 106 may also synchronize the model score data, for offline validations, with the Modeling Data Mart and sends updated specifications with the new models and variables to the Automated Reports Generation Module 103 (part of the Near Real Time Module 102), which returns the metrics for all existing variables/models. Merrill - see par 44 - In one embodiment, a dashboard of anomalous model inputs and scores is presented to an operator so they can inspect the outliers and take action to retrain the model or ignore the alerts and add the anomalous data to the baseline data for the modeling system. In one embodiment, each addition to baseline data is tracked in a knowledge graph so that the history of modifications may be preserved and rolled back, should the operator choose to undo past modifications. 
In this way an intelligent monitoring system may be constructed that learns from human judgment as alerts are cleared or actioned; See par 172 - These machine-readable relations enable new and useful automated analysis and reporting: a query to the knowledge graph may now answer if a detected anomaly matters (is this feature important to the model and what is the economic impact of the anomaly) and how to fix it (it came from this feature which is comprised of these other inputs that come from these data stores using this function to combine them.) Both Gopinathan and Merrill are analogous art as they are directed to analyzing data/metrics for loan applications (see Gopinathan Abstract, par 50; Merrill Abstract, par 35 – fair lending, par 163 - features related to loans). Gopinathan discloses triggering corrective actions of model rebuilds when anomalies are found (See par 63), rebuilding a risk model based on more recent transactions, and identifying underperforming strategies as well as flagging strategies for human intervention (See par 52, 76, 78). Merrill improves upon Gopinathan by disclosing flagging of features leading to disparities in approval rates, then the use of a new model to check different distributions of features (See par 28), the ability to add anomalous data to the baseline data for modeling/training, determining features contributing to an “approval rate disparity” and a reason for inclusion in a model (See par 74), and providing reporting on “how to fix” an anomaly (See par 44, 172). One of ordinary skill in the art would have been motivated to further include detection of features relative to approval rates that can result in modifying the model, as well as detection of anomalous data and how to fix it or change the models/training data, to efficiently improve upon the rebuilding of models based on recent transactions in Gopinathan. 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the rebuilding of models in the lending environment in Gopinathan (See par 50, 63, 76), to further include fixing/modifying models/training related to loan approvals as disclosed in Merrill, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning independent claim 8, Gopinathan and Merrill disclose: A non-transitory computer-readable medium for storing instructions, said instructions comprising: one or more instructions which, when executed by one or more processors, cause one or more processors to (Gopinathan – See par 37- In a software implementation of the AMP 100, each component shown in FIG. 1 may be implemented as a plurality lines of computer code that may be stored on a computer readable medium, such as a CD, DVD, flash memory, persistent storage device, cloud computing storage and then may be executed by a processor.). The remaining limitations are similar to claim 1 above and are rejected for the same reasons over Gopinathan and Merrill. Concerning independent claim 15, Gopinathan and Merrill disclose: A computer system (Gopinathan – see par 37 – computer system) comprising: a memory (Gopinathan – See par 37- In a software implementation of the AMP 100, each component shown in FIG. 1 may be implemented as a plurality lines of computer code that may be stored on a computer readable medium, such as a CD, DVD, flash memory, persistent storage device, cloud computing storage and then may be executed by a processor). Gopinathan discloses having reports (See par 50, 63). Merrill discloses: a display (Merrill – par 188 – FIG. 
3 – device with display device and user input device; see par 31 – displaying output to operator). Gopinathan discloses: at least one processor, said at least one processor configured to: (Gopinathan – See par 37- In a software implementation of the AMP 100, each component shown in FIG. 1 may be implemented as a plurality lines of computer code that may be stored on a computer readable medium, such as a CD, DVD, flash memory, persistent storage device, cloud computing storage and then may be executed by a processor.). The remaining limitations are similar to claim 1 above and are rejected for the same reasons over Gopinathan and Merrill. In addition, Merrill improves upon the computer with reporting in Gopinathan by explicitly disclosing a “display”. Concerning claims 2, 9, and 16, Gopinathan discloses: The method of claim 1 wherein said plurality of metrics are selected from a group consisting of: system performance metrics, key performance indicators (KPI) metrics, and a combination of said system performance metrics and said KPI metrics (Gopinathan see par 63 - The Master data Manager 110 provides data to the Automated Reports Generation Module 103 which computes various business performance metrics concerning, but not limited to profit, bad debt, acquisition, model performance, champion/challenger performance.) 
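The alert-and-response cycle recited in claim 1 and mapped above can be pictured with a minimal sketch. Every class, field, and metric name below is a hypothetical illustration for the reader, not language drawn from the claims or the cited references:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Alert:
    """One data point that fell outside the performance envelope."""
    metric: str
    value: float
    response: Optional[str] = None  # later updated with the remediation description

@dataclass
class AlertDatabase:
    """Toy stand-in for the claimed alert database (hypothetical)."""
    alerts: List[Alert] = field(default_factory=list)

    def store_if_outside(self, metric: str, value: float,
                         envelope: Tuple[float, float]) -> None:
        # store only data points outside the performance envelope
        lo, hi = envelope
        if not (lo <= value <= hi):
            self.alerts.append(Alert(metric, value))

    def record_response(self, metric: str, response: str) -> None:
        # update the identified data points with the received response
        for alert in self.alerts:
            if alert.metric == metric and alert.response is None:
                alert.response = response

db = AlertDatabase()
db.store_if_outside("approval_rate", 0.42, envelope=(0.50, 0.70))  # outside -> stored
db.store_if_outside("approval_rate", 0.55, envelope=(0.50, 0.70))  # inside  -> ignored
db.record_response("approval_rate", "model rebuilt on recent transactions")
```

Here the out-of-envelope value 0.42 is stored as an alert and later updated with a remediation description, while the in-envelope value 0.55 never produces an alert.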
Concerning claims 7 and 14, Gopinathan and Merrill disclose: The method of claim 1 further comprising: analyzing said alert for one of said identified data points (Gopinathan – see par 76 - The Model Management Module 106 also may be able to identify the part of the strategy (model/set of models) that is under-performing and triggers an alert to the Model Building Module 105 to start a rebuilding process for the corresponding models; See also Merrill); determining if said remediation is needed for said alert based on said analysis of said alert (Gopinathan – See par 50 - In a lending environment, for example, there is some expected minimum number of applications for loans, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc that should be satisfied. If any of these are not met, an alert is triggered by this system to notify for human intervention. See also Merrill - see par 44 - In one embodiment, a dashboard of anomalous model inputs and scores is presented to an operator so they can inspect the outliers and take action to retrain the model or ignore the alerts and add the anomalous data to the baseline data for the modeling system); performing said remediation (Gopinathan – see par 63 - This module constantly measures correlations between different independent variables, correlation between independent variables and dependent variables, applies the Kolmogorov-Smirnov statistical distribution test across all variables to identify significant distributional shifts over various time intervals, and monitors other such diagnostic metrics. If an anomaly is found, this module automatically triggers corrective action invoking the Model Management Module 106 (which then triggers model rebuilds for affected variables); see par 78 - module 107 also may evaluate how the predictive models perform in the different challenger segments, and generates rebuild triggers for the model management module if the performance falls below expected or specified thresholds. 
After a sufficient number of transactions (varies by business product and specific challenger being tested), a successful challenger strategy becomes a part of the current business strategy in production; else it is either modified further or flagged for human intervention; see also Merrill - see par 44 - In one embodiment, the alerting system sends emails, SMS messages, and other forms of alerts to operators based on the numeric score, in order to alert the operator that the model may be encountering data or behaving in ways that are unsafe. In one embodiment, a dashboard of anomalous model inputs and scores is presented to an operator so they can inspect the outliers and take action to retrain the model or ignore the alerts and add the anomalous data to the baseline data for the modeling system); and providing, as said response, said description of said remediation performed in said user digestible format (Merrill – See par 74 - In one embodiment, the disparate impact analysis includes approval rates among population segments such as protected classes and a baseline, determination of which features contribute to the approval rate disparity, and an analysis of each feature's economic impact. In one embodiment, the disparate impact analysis includes the disposition of a variable's suitability for inclusion in the model (included or suppressed) and the reason for the disposition (including a business justification, quantification of harm, and natural language representing the reasoning behind the disposition; Gopinathan – see FIG. 6, par 76 - the Model Management Module 106 may also synchronize the model score data, for offline validations, with the Modeling Data Mart and sends updated specifications with the new models and variables to the Automated Reports Generation Module 103 (part of the Near Real Time Module 102), which returns the metrics for all existing variables/models. 
The Model Management Module 106 may also continuously monitor the performance of the business strategy currently deployed and if the performance falls below a pre-specified threshold, automatically triggers rebuilds of the predictive models involved in the underperforming strategy. The Model Management Module 106 also may be able to identify the part of the strategy (model/set of models) that is under-performing and triggers an alert to the Model Building Module 105 to start a rebuilding process for the corresponding models; see par 78 - module 107 also may evaluate how the predictive models perform in the different challenger segments, and generates rebuild triggers for the model management module if the performance falls below expected or specified thresholds. After a sufficient number of transactions (varies by business product and specific challenger being tested), a successful challenger strategy becomes a part of the current business strategy in production; Merrill – see par 69 - In some embodiments, the system 101 includes an MRM module for automated generation of a data dictionary that includes a table and an associated narrative. See par 74 - In one embodiment, the disparate impact analysis includes approval rates among population segments such as protected classes and a baseline, determination of which features contribute to the approval rate disparity, and an analysis of each feature's economic impact. 
In one embodiment, the disparate impact analysis includes the disposition of a variable's suitability for inclusion in the model (included or suppressed) and the reason for the disposition (including a business justification, quantification of harm, and natural language representing the reasoning behind the disposition; See par 172 - These machine-readable relations enable new and useful automated analysis and reporting: a query to the knowledge graph may now answer if a detected anomaly matters (is this feature important to the model and what is the economic impact of the anomaly)). It would have been obvious to combine Gopinathan and Merrill for the same reasons as claim 1 above. Concerning claim 19, Gopinathan discloses: The computer system of claim 15, wherein said alert comprises: a direct alert automatically provided directly to said client, the direct alert selected from a group consisting of: an email, a text, a message, and a pop-up type alert (Gopinathan – see par 63 - The Near Real Time Processor 102 may also enable the generation of various types of reports on a continuous basis in a Reports Data Mart 306. 
The Master data Manager 110 provides data to the Automated Reports Generation Module 103 which computes various business performance metrics concerning, but not limited to profit, bad debt, acquisition, model performance, champion/challenger performance (“report” discloses “message”); Merrill – see par 28 - Some embodiments disclosed herein provide libraries and tools that allow an operator to perform modeling and analysis tasks that record data in a knowledge graph, including tasks such as allowing an operator to flag unreliable variables and features, for instance: features that are largely missing, extremely low variance, or that have large shifts in distribution over time; flagging of features that lead to disparities in approval rate between protected classes and a baseline… , comparison of approval rates between protected classes, determination of which features contribute to disparity, and a quantification of a feature's contribution to the approval rate disparity as well as their contribution to the economics of the model; and other analysis; see par 44, 178-179 – model monitoring module… stored in the knowledge graph to generate an alert; alert is email or text message to operators). It would have been obvious to combine Gopinathan and Merrill for the same reasons as claim 1 above. Concerning claim 20, Gopinathan and Merrill disclose: The computer system of claim 15, wherein said alert comprises: a summarizing alert automatically provided to an interactive dashboard application (Merrill – see par 44 - In one embodiment, the alerting system sends emails, SMS messages, and other forms of alerts to operators based on the numeric score, in order to alert the operator that the model may be encountering data or behaving in ways that are unsafe. 
In one embodiment, a dashboard of anomalous model inputs and scores is presented to an operator so they can inspect the outliers and take action to retrain the model or ignore the alerts and add the anomalous data to the baseline data for the modeling system). It would have been obvious to combine Gopinathan and Merrill for the same reasons as claim 1 above. Merrill improves upon the alerts and reports of Gopinathan by explicitly disclosing a “dashboard” with a plurality of information (e.g. model inputs, scores, outliers). Claims 3-5, 10-12, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinathan (US 2015/0142713) and Merrill (US 2019/0340518), as applied above to claims 1-2, 7-9, 14-16, and 19-20, and further in view of Way (US 2019/0114704). Concerning claims 3, 10, and 17, Gopinathan discloses that in a lending environment, there is an “expected” minimum number of applications for loans, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc. that should be satisfied (See par 50), that when a model is underperforming, loan decision logic may be modified to make the threshold fit into expected values (See par 52), and that “The decision logic (comprising of the profit model score, a third party credit score and a fraud rule) is altered accordingly and the new thresholds are computed” (See par 53). Way discloses: The method of claim 1 wherein said performance envelope comprises: one or more automatically and dynamically developed thresholds indicative of the end of a range of normal behavior for one or more process attributes (Way See par 18 - In various embodiments, the statistical loan engine 102 may apply a distribution function, such as a Standard Normal Cumulative Distribution Function (CDF), to the intermediate borrower score to calculate the probability. 
see also par 26-27 – looking at historical borrower data and the Kolmogorov-Smirnov (K-S) statistic; see par 29 - Accordingly, the K-S statistic may provide model coefficients for relationship attributes that are useful for calculating the intermediate borrower score of borrowers; See par 37-40 - Since the statistical model is constructed by the model generation module 214 from a standard normal distribution, i.e., mean of zero and standard deviation of one, each borrower score calculated by the borrower score module 216 is a normalized score. See par 42 - In some embodiments, the administrator may use the user interface module 224 to cause the loan approval module 220 to set or adjust cutoff thresholds for loan approvals. The administrator may monitor portfolio metrics, such as charge offs, to achieve a desired balance between portfolio risk and return. Raising the cutoff threshold leads to lower loan defaults but also lower loan volume. Lowering the cutoff will have the opposite effect, meaning that the loan default rate is expected to rise but loan volume is expected to increase. Accordingly, the administrator may initially choose a cutoff threshold that maximizes the K-S statistic, and then modify the cutoff threshold based on the actual portfolio metrics) selected from a group consisting of: activities, usage, results, and queries (Way – see par 40 – disclosing activities or usage - The ROC curve, in turn, measures the difference between the “True Positive Rate” (TPR) and the “False Positive Rate” (FPR), K-S=(TPR-FPR). TPR is the percent of good credit scored as good credit and FPR is the percent of bad credit mistaken for good credit. The calculation of K-S requires knowledge of the loans not approved that will not enter the predetermined (e.g., 30 or more days) of delinquency). Gopinathan, Merrill, and Way are analogous art as they are directed to analyzing data/metrics for loan applications (see Gopinathan Abstract, par 50; Merrill Abstract; Way Abstract). 
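Way's quoted relationship K-S = TPR - FPR can be illustrated numerically. The sketch below uses hypothetical scores and labels (not data from Way), sweeping candidate cutoffs and keeping the one that maximizes the K-S statistic, mirroring the administrator's initial choice described in par 42:

```python
def ks_best_cutoff(scores, labels, cutoffs):
    """Pick the approval cutoff maximizing K-S = TPR - FPR.

    scores: model scores, higher = better expected credit (hypothetical data)
    labels: 1 = good credit, 0 = bad credit
    """
    goods = sum(labels)              # number of good-credit borrowers
    bads = len(labels) - goods       # number of bad-credit borrowers
    best_cutoff, best_ks = None, float("-inf")
    for c in cutoffs:
        approved = [lab for s, lab in zip(scores, labels) if s >= c]
        tpr = sum(approved) / goods                   # good credit scored as good
        fpr = (len(approved) - sum(approved)) / bads  # bad credit mistaken for good
        if tpr - fpr > best_ks:
            best_cutoff, best_ks = c, tpr - fpr
    return best_cutoff, best_ks

cutoff, ks = ks_best_cutoff(
    scores=[0.2, 0.4, 0.5, 0.6, 0.8, 0.9],
    labels=[0, 0, 1, 0, 1, 1],
    cutoffs=[0.3, 0.5, 0.7],
)
```

Raising the cutoff past 0.5 in this toy data stops admitting bad credit (FPR falls) but eventually also rejects good credit (TPR falls), which is the risk/volume trade-off the quoted passage describes.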
Way improves upon Gopinathan and Merrill by disclosing data having a normal distribution/range (see par 37-40) and further adjusting thresholds, such as cutoff thresholds for loan approvals (See par 43), based on knowledge of loans not approved. One of ordinary skill in the art would have been motivated to further include normalized distribution data, as well as knowledge of loans not approved to adjust a cutoff threshold, to efficiently improve upon the “new thresholds” being computed for profit model purposes in Gopinathan and the fixing of data related to detected anomalies in Merrill. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the rebuilding of models in the lending environment in Gopinathan (See par 50, 63, 76), to further include fixing/modifying models/training related to loan approvals as disclosed in Merrill, and to further include data indicating normal distributions used for determining thresholds in lending as disclosed in Way, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. 
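For readers unfamiliar with the Excel function Way cites, NORM.S.DIST(x, TRUE) is simply the standard normal cumulative distribution function, which can be reproduced with the standard erf identity:

```python
import math

def norm_s_dist(x: float) -> float:
    """Standard normal CDF, equivalent to Excel's NORM.S.DIST(x, TRUE):
    Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# A normalized borrower score of 0 (the standard-normal mean) maps to
# probability 0.5; a score one standard deviation above maps to ~0.8413.
p_mean = norm_s_dist(0.0)
p_one_sigma = norm_s_dist(1.0)
```

This matches Way's framing that a borrower score normalized to mean zero and standard deviation one can be converted to a probability by evaluating the distribution function.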
Concerning claims 4, 11, and 18, Gopinathan, Merrill, and Way disclose: The method of claim 1 wherein said performance envelope comprises: one or more automatically and dynamically developed value ranges indicative of a normal behavior for one or more process attributes (Way See par 18 - In various embodiments, the statistical loan engine 102 may apply a distribution function, such as a Standard Normal Cumulative Distribution Function (CDF), to the intermediate borrower score to calculate the probability. see also par 26-27 – looking at historical borrower data and the Kolmogorov-Smirnov (K-S) statistic; see par 29 - Accordingly, the K-S statistic may provide model coefficients for relationship attributes that are useful for calculating the intermediate borrower score of borrowers; See par 37-40 - Since the statistical model is constructed by the model generation module 214 from a standard normal distribution, i.e., mean of zero and standard deviation of one, each borrower score calculated by the borrower score module 216 is a normalized score. See par 53-54 - FIG. 6 is a flow diagram of an example process 600 for using a distribution function to calculate a probability value that determines whether a borrower is qualified for a loan. The process 600 further describes block 406 of the process 400. At block 606, the statistical loan engine 102 evaluate the distribution function with respect to the intermediate borrower score to generate a numerical approximation of a probability value for the borrower. In one instance, the statistical loan engine 102 may use the Excel function NORM.S.DIST(x,TRUE) to calculate the probability.) selected from a group consisting of: activities, usage, results, and queries (Way – see par 40 – disclosing activities or usage - The ROC curve, in turn, measures the difference between the “True Positive Rate” (TPR) and the “False Positive Rate” (FPR), K-S=(TPR-FPR). 
TPR is the percent of good credit scored as good credit and FPR is the percent of bad credit mistaken for good credit. The calculation of K-S requires knowledge of the loans not approved that will not enter the predetermined (e.g., 30 or more days) of delinquency). It would have been obvious to combine Gopinathan, Merrill, and Way for the same reasons as claim 3 above, where the distribution of “normal” is used to develop value ranges, in this case the probability of 406 of FIG. 4 of not being charged off on a loan. Concerning claims 5 and 12, Gopinathan, Merrill, and Way disclose: The method of claim 1 wherein said performance envelope comprises: one or more automatically and dynamically developed thresholds for one or more process attributes (Way See par 18 - In various embodiments, the statistical loan engine 102 may apply a distribution function, such as a Standard Normal Cumulative Distribution Function (CDF), to the intermediate borrower score to calculate the probability. see also par 26-27 – looking at historical borrower data and the Kolmogorov-Smirnov (K-S) statistic; see par 29 - Accordingly, the K-S statistic may provide model coefficients for relationship attributes that are useful for calculating the intermediate borrower score of borrowers; See par 37-40 - Since the statistical model is constructed by the model generation module 214 from a standard normal distribution, i.e., mean of zero and standard deviation of one, each borrower score calculated by the borrower score module 216 is a normalized score. See par 42 - In some embodiments, the administrator may use the user interface module 224 to cause the loan approval module 220 to set or adjust cutoff thresholds for loan approvals. The administrator may monitor portfolio metrics, such as charge offs, to achieve a desired balance between portfolio risk and return. Raising the cutoff threshold leads to lower loan defaults but also lower loan volume. 
Lowering the cutoff will have the opposite effect, meaning that the loan default rate is expected to rise but loan volume is expected to increase. Accordingly, the administrator may initially choose a cutoff threshold that maximizes the K-S statistic, and then modify the cutoff threshold based on the actual portfolio metrics) selected from a group consisting of: activities, usage, results, and queries; and one or more automatically and dynamically developed value ranges for one or more process attributes selected from said group consisting of: activities, usage, results, and queries (Way – see par 40 – disclosing activities or usage - The ROC curve, in turn, measures the difference between the “True Positive Rate” (TPR) and the “False Positive Rate” (FPR), K-S=(TPR-FPR). TPR is the percent of good credit scored as good credit and FPR is the percent of bad credit mistaken for good credit. The calculation of K-S requires knowledge of the loans not approved that will not enter the predetermined (e.g., 30 or more days) of delinquency; See par 53-54 - FIG. 6 is a flow diagram of an example process 600 for using a distribution function to calculate a probability value that determines whether a borrower is qualified for a loan. The process 600 further describes block 406 of the process 400. At block 606, the statistical loan engine 102 evaluate the distribution function with respect to the intermediate borrower score to generate a numerical approximation of a probability value for the borrower. In one instance, the statistical loan engine 102 may use the Excel function NORM.S.DIST(x,TRUE) to calculate the probability). The same citations from claims 3 and 4 above apply and it would have been obvious to combine Gopinathan, Merrill, and Way for the same reasons as claims 3 and 4 above. Claims 6 and 13 are rejected under 35 U.S.C. 
103 as being unpatentable over Gopinathan (US 2015/0142713) and Merrill (US 2019/0340518), as applied above to claims 1-2, 7-9, 14-16, and 19-20, and further in view of Moore (US 2011/0106692). Concerning claims 6 and 13, Gopinathan and Merrill disclose: The method of claim 1 further comprising: analyzing said alert for one of said identified data points (Gopinathan – see par 76 - The Model Management Module 106 also may be able to identify the part of the strategy (model/set of models) that is under-performing and triggers an alert to the Model Building Module 105 to start a rebuilding process for the corresponding models; See also Merrill); determining if said remediation is needed for said alert based on said analysis of said alert (Gopinathan – See par 50 - In a lending environment, for example, there is some expected minimum number of applications for loans, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc that should be satisfied. If any of these are not met, an alert is triggered by this system to notify for human intervention; see par 78 - After a sufficient number of transactions (varies by business product and specific challenger being tested), a successful challenger strategy becomes a part of the current business strategy in production; else it is either modified further or flagged for human intervention; See also Merrill - see par 44 - In one embodiment, a dashboard of anomalous model inputs and scores is presented to an operator so they can inspect the outliers and take action to retrain the model or ignore the alerts and add the anomalous data to the baseline data for the modeling system; See par 172 - These machine-readable relations enable new and useful automated analysis and reporting: a query to the knowledge graph may now answer if a detected anomaly matters (is this feature important to the model and what is the economic impact of the anomaly) and how to fix it); and Gopinathan discloses that if rules are not satisfied, an 
alert is triggered for human intervention (See par 50) and that there is a pre-production testing engine to identify “any” issues (See par 56). Merrill discloses ignoring alerts and adding anomalous data to baseline data for modeling (See par 44). Moore discloses: providing, as said response, a no remediation performed message if no remediation is needed (Moore – see par 70 - The LPM system may generate recommended remediation solutions and alternative remediation solutions that conform to the contact management campaign. The recommended and the alternative remediation solutions each may comprise a success probability that identifies a probability of transforming the at risk loan into a performing loan. see par 101 - FIG. 19 illustrates example operations 1900 to identify at risk loans and generate remediation recommendations. The call center agent may use the portfolio analyzer and the borrower health analyzer to identify and/or generate a campaign for the borrower (1906). The LPM system 102 may evaluate whether each loan is at risk (1908). The call center agent may route a customer to an appropriate resource (1910) when the loan is not at risk. When the loan is at risk, agent may offer the borrower an optimal loan modification based on campaign recommendations (1912) generated by the LPM system 102. The agent may initiate debt forgiveness, and/or loan modification/workout (1914). When key attributes change (1918), the LPM system 102 may identify cases requiring special handling/exceptions (1920)). It would have been obvious to combine Gopinathan and Merrill for the same reasons as claim 1 above. In addition, Gopinathan, Merrill, and Moore are analogous art as they are directed to analyzing data/metrics for loan applications (see Gopinathan Abstract, par 50; Merrill Abstract; Moore Abstract, par 72-74, 83). 
Moore improves upon Gopinathan and Merrill by disclosing a call center agent able to route a customer to an appropriate resource when the loan is not at risk; when the loan is at risk, the agent is informed of an optimal loan modification; and when attributes change, the system may identify cases that need “special handling/exceptions” (See par 101). One of ordinary skill in the art would have been motivated to further include communications to call center agents when a loan is ready or when different modifications/handling/exceptions are needed, to efficiently improve upon the alert triggered for human intervention for issues in Gopinathan and the fixing of data related to detected anomalies in Merrill. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the rebuilding of models in the lending environment in Gopinathan (See par 50, 63, 76), to further include fixing/modifying models/training related to loan approvals as disclosed in Merrill, and to further include notifying a call center agent when a loan is not at risk and ready to be routed to another resource 1910, or when special handling/modifications are needed, as disclosed in Moore, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. 
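The claim-6/13 logic that Moore is cited for, namely providing a remediation description when remediation occurs and a no-remediation-performed message otherwise, reduces to a simple conditional. The sketch below is purely illustrative; the function name and message strings are hypothetical, not taken from Moore or the claims:

```python
from typing import Optional

def respond_to_alert(at_risk: bool, remediation: Optional[str] = None) -> str:
    """Build the claimed 'response' in a user-digestible format: either a
    description of the remediation performed, or an explicit
    no-remediation-performed message when none is needed."""
    if not at_risk:
        return "No remediation performed: metric within expected range"
    return f"Remediation performed: {remediation}"

msg_ok = respond_to_alert(at_risk=False)
msg_fix = respond_to_alert(at_risk=True, remediation="loan modification offered")
```

The two branches correspond to Moore's routing of a not-at-risk loan to an appropriate resource versus offering a loan modification for an at-risk loan.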
Response to Arguments

Applicant’s arguments filed 1/28/26 have been fully considered but they are not persuasive and/or are moot in view of the new rejections.

With regards to 101, Applicant argues that claim 1 is not directed to an abstract idea, while copying the rejection’s explanation of why it is an abstract idea. Remarks, pages 14-15. In response, Examiner respectfully disagrees. Examiner is not sure what Applicant’s specific argument is, as it appears to simply copy the rejection’s reasoning for why the claim is directed to an abstract idea. Accordingly, the argument is not persuasive. Applicant does, however, bold the new limitation, so it appears Applicant is arguing the new limitation “automatically and dynamically updating said performance envelope for at least one of said plurality of metrics.” The arguments are not persuasive. For claim 1, no computer is required. For claims 8 and 15, whether the limitations are viewed individually or in combination, merely adding “by a computer” does not make a claim eligible. See MPEP 2106.05(f) (apply it [the abstract idea] on a computer); MPEP 2106.05(a)(1)(I) (examples not sufficient to show an improvement in computer functionality: ii. accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095 (Fed. Cir. 2016)). The metrics here are, as published: [0023] any process metrics or business KPIs; [0028] approval rate; [0031] bureau pass rate for financial lending.
With regards to 103, Applicant argues that three different limitations are not disclosed by Gopinathan or Merrill: 1) “receiving a response for each of said identified data points, said response comprising: a description of a remediation performed in a user digestible format”; 2) “wherein said at least one of said plurality of metrics are further monitored by a client utilizing said at least one process”; and new limitation 3) “continually monitoring, in real-time, data related to said at least one process for any data related to said plurality of metrics, and automatically and dynamically updating said performance envelope for at least one of said plurality of metrics.” Remarks, pages 28-30. In response, Examiner respectfully disagrees. Applicant does not substantively address any of the citations in the 103 rejection for limitations 1) or 2); the arguments are only conclusions that do not address any portion of the references cited. The argument for limitation 3) is moot in view of the revised rejection necessitated by the amendments.

Applicant argues the limitation in claim 6 of “providing, as said response, a no remediation performed message if no remediation is needed” is not disclosed by Gopinathan or Merrill. Remarks, pages 29-30. In response, Examiner respectfully disagrees. Applicant does not substantively address any of the citations in the 103 rejection for this limitation; the argument is only a conclusion that does not address any portion of the reference cited.

Applicant argues the limitation in claim 7 of “providing, as said response, said description of said remediation performed in said user digestible format,” as well as the new limitation from claim 1, is not disclosed by Gopinathan or Merrill. Remarks, pages 29-30. In response, Examiner respectfully disagrees. Applicant does not substantively address any of the citations in the 103 rejection for this limitation; the argument is only a conclusion that does not address any portion of the reference cited.
Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG, whose telephone number is (571) 270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe, can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IVAN R GOLDBERG/
Primary Examiner, Art Unit 3619

Prosecution Timeline

Aug 09, 2023 — Application Filed
Apr 16, 2025 — Non-Final Rejection — §101, §103
Aug 21, 2025 — Response Filed
Sep 30, 2025 — Final Rejection — §101, §103
Oct 30, 2025 — Request for Continued Examination
Nov 08, 2025 — Response after Non-Final Action
Dec 05, 2025 — Non-Final Rejection — §101, §103
Jan 28, 2026 — Response Filed
Mar 20, 2026 — Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970 — SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591826 — SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586020 — DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579493 — SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555055 — CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES — Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 35%
With Interview: 72% (+36.9%)
Median Time to Grant: 4y 8m
PTA Risk: High

Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
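The projections above appear to follow from the examiner statistics quoted earlier in this report (128 granted of 365 resolved cases; a +36.9-point interview lift). A minimal sketch of that arithmetic, assuming the interview lift is additive in percentage points (an assumption; the report does not state its formula):

```python
# Rough reconstruction of the dashboard's headline numbers from the
# examiner's career statistics quoted in this report. Treating
# "+36.9% Interview Lift" as additive percentage points is an assumption.

granted, resolved = 128, 365        # "128 granted / 365 resolved"
interview_lift_pts = 36.9           # "+36.9% Interview Lift"

allow_rate = granted / resolved                      # career allow rate
with_interview = allow_rate * 100 + interview_lift_pts

print(f"Grant probability: {allow_rate:.0%}")        # -> 35%
print(f"With interview:    {with_interview:.0f}%")   # -> 72%
```

This reproduces the 35% and 72% figures shown, suggesting the "With Interview" number is simply the career allow rate plus the observed interview lift rather than a case-specific model.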
