DETAILED ACTION
The following is a Final Office action. In response to Examiner’s communication of 10/17/2025, Applicant, on 1/16/2026, amended claims 1-20. Claims 1-20 are now pending and have been rejected as indicated below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments are acknowledged.
A revised 35 USC 101 rejection of claims 1-20 in regard to abstract ideas has been applied in light of Applicant’s amendments and explanations.
Revised 35 USC 102 rejections of claims 1-20 have been applied in light of Applicant’s amendments and explanations.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under a broadest reasonable interpretation of the claimed invention, Examiner finds that Applicant invented a method and system for processing performance indicators. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:
Step 1: The claims are directed to a statutory category, namely a "method" (claims 1-7) and "system" (claims 8-20).
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1:
A method of improving alarm accuracy of performance indicators, comprising: acquiring historical data comprising a value of a first indicator associated with an application program when a plurality of second indicators associated with the application program have different values;
determining, based on the historical data, a plurality of target performance indicators in the plurality of second indicators and determining a regression coefficient of each of the plurality of target performance indicators wherein the regression coefficient of a particular one of the plurality of target performance indicators indicates a degree of influence that the particular target performance indicator has had historically on the first indicator;
determining, based on the regression coefficient of each of the plurality of target performance indicators and a minimum difference value of a significant change of the first indicator, a maximum acceptable deterioration value corresponding to each of the plurality of target performance indicator;
setting an alarm threshold for each of the plurality of target performance indicators based on the corresponding maximum acceptable deterioration value to improve an alarm accuracy for each of the plurality of target performance indicators and causing to output an alarm when a change value of any one of the plurality of target performance indicators exceeds the alarm threshold for that target performance indicator.
Independent claims 8 and 15 recite substantially similar claim language.
Dependent claims 2-7, 9-14, and 16-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 15 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.
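Note: purely for clarity of the record, the following non-limiting sketch (Python, with hypothetical data and names chosen by Examiner; not asserted to be Applicant's disclosed implementation) illustrates Examiner's understanding of the computations recited in claim 1:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical historical data: y is the first indicator, observed while
    # four second indicators (columns of X) took different values.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X @ np.array([2.0, 0.5, 0.0, -1.5]) + rng.normal(scale=0.1, size=200)

    # Regression coefficients: the degree of influence each target performance
    # indicator has had historically on the first indicator.
    coefs = LinearRegression().fit(X, y).coef_

    # Hypothetical minimum difference value of a significant change of y.
    min_diff = 0.8

    # Maximum acceptable deterioration per indicator: the change in that
    # indicator that would move y by min_diff.
    max_deterioration = min_diff / np.abs(coefs)

    # Set each alarm threshold from the deterioration value and alarm when an
    # indicator's change value exceeds its threshold.
    thresholds = max_deterioration
    changes = np.array([0.5, 2.0, 0.1, 0.3])   # hypothetical observed changes
    print(changes > thresholds)                # alarm flags per indicator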
The limitations in claims 1-20 identified above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the categories of:
"Certain methods of organizing human activity- fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to processing performance indicators and thus is a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or
"Mental processes - concepts performed in the human mind (including an observation, evaluation, judgement, opinion)" as the limitations identified above include mere data observations, evaluations, judgements, and/or opinions, e.g. including user observation and evaluation of performance indicators, which is capable of being performed mentally and/or using pen and paper.
"Mathematical concepts - mathematical relationships, mathematical formulas or equations, mathematical calculations" as the claims recite processing performance indicators through mathematical formulas, functions, and/or calculations, including determining regression coefficients, maximum acceptable deterioration values, and alarm thresholds through formulas and expressions.
Step 2A - Prong 2: Claims 1-20 are found to be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application. Specifically, the claims recite the additional elements of:
" A terminal device, comprising: a processor and a memory; wherein the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory, so that the processor performs acts for processing performance indicators, the acts comprising: / A non-transitory computer-readable storage medium, wherein computer--executable instructions are stored in the computer-readable storage medium, and a processor, when executing the computer-executable instructions, implements acts for processing performance indicators, the acts comprising:" (claims 1, 8, and 15) however the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.0S(f)) and thus fails to integrate the recited abstract idea into a practical application, furthermore the high-level recitation of receiving data from a generic "terminal device" is at most an attempt to limit the abstract to a particular field of use (MPEP 2106.0S(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie lndem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPE 2106.05(g)) and thus further fails to integrate the abstract idea into a practical application;
Step 2B: Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of processing performance indicators via a "terminal device", as explained above, and/or perform insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, similarly fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to processing performance indicators.
Claims 1-20 are accordingly rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis.
For further authority and guidance, see:
MPEP § 2106
https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication Number 2023/0099001 to Harutyunyan et al. (hereafter referred to as Harutyunyan).
As per claim 1, Harutyunyan teaches:
A method of improving alarm accuracy of performance indicators, comprising: acquiring historical data comprising a value of a first indicator associated with an application program when a plurality of second indicators associated with the application program have different values (Paragraph Number [0086] teaches the operations manager trains inference models for applications running in a distributed computing system. For selected applications, the operations manager collects metrics and KPIs associated with the selected application for a historical time window from a data-storage device. The duration of the historical time window may be preset to an hour, two hours, twelve hours, a day, a week, or a month or even longer. Paragraph Number [0142] teaches an example of k-fold cross validation applied to an example set of metrics and KPI for k=5. In FIG. 19A, line 1902 represents a historical time window. Block 1904 represents a set of p metrics X recorded in the historical time window 1902. Shaded block 1906 represents KPI values for a KPI recorded in the historical time window 1902. The metrics X and KPI Y have been normalized and synchronized as described above).
determining, based on the historical data, a plurality of target performance indicators in the plurality of second indicators and determining a regression coefficient of each of the plurality of target performance indicators (Paragraph Number [0086] teaches the operations manager trains inference models for applications running in a distributed computing system. For selected applications, the operations manager collects metrics and KPIs associated with the selected application for a historical time window from a data-storage device. Paragraph Number [0124] teaches rather than eliminating metrics based on hypothesis testing, the operations manager may use a backward stepwise selection process to train a parametric model that contains only relevant metrics. The backward stepwise process employs a step-by-step process of eliminating irrelevant metrics from the set of metrics and thereby produces a parametric model that has been trained with relevant metrics. The process begins by partitioning metrics and the KPI recorded in a historical time window into a training set and a validating set).
wherein the regression coefficient of a particular one of the plurality of target performance indicators indicates a degree of influence that the particular target performance indicator has had historically on the first indicator (Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. The null hypothesis for each estimated model coefficient is. The t-test is the test statistic based on the t-distribution. Paragraph Number [0144] teaches lasso regression may be used to compute estimated model coefficients Paragraph Number [0162] teaches importance scores of the metrics are determined based on magnitudes of estimated model coefficients of a parametric inference model. The magnitudes of the estimated model coefficients are given by |{circumflex over (β)}.sub.j|, where |⋅| denotes the absolute value and j=1, . . . , p. The operations manager computes the importance score for each metric by first determining the largest magnitude estimated model coefficient: An importance score is assigned to each corresponding metric. The metrics are rank ordered based on the corresponding importance scores to identify the highest ranked metrics that may affect the KPI using the condition in Equation (26)).
determining, based on the regression coefficient of each of the plurality of target performance indicators and a minimum difference value of a significant change of the first indicator, a maximum acceptable deterioration value corresponding to each of the plurality of target performance indicator. (Paragraph Number [0155] teaches an application service degradation or non-optimal performance of an application can originate from the infrastructure and/or the application itself and can be discovered in an application key performance indicator (“KPI”). For example, an application with a KPI that violates a performance threshold can be selected for troubleshooting. After an inference model has been trained for the application, the computer-implemented processes and systems described below use the trained inference model and rules to identify the performance problem and generate a recommendation for correcting the performance problem. Paragraph Number [0156] teaches an example graphical user interface (“GUI”) 2100 that displays KPIs associated with different applications running in a distributed computing system. The GUI 2100 includes a window 2102 that displays four entries 2104-2107 that list applications identified as Application 1, Application 2, Application 3, and Application 4 and show plots of curves 2108-2111 that represent corresponding KPIs plotted over the same recent run-time interval that ends at the current time denoted by t.sub.c. Horizontal dashed lines represent thresholds between normal and abnormal behavior of the applications. For example, KPI values of Applications 1, 2, and 4 are below a threshold 2112, which indicates the applications are performing normally as represented by normal icons, such as normal icon 2114. On the other hand, KPI values of the Application 3 exceed the threshold 2112, such as KPI value 2114, triggering a warning alert 2116. Threshold 2116 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 2118, which begins the automated computer-implemented process of troubleshooting Application 3 described below).
setting an alarm threshold for each of the plurality of target performance indicators based on the corresponding maximum acceptable deterioration value to improve an alarm accuracy for each of the plurality of target performance indicators (Paragraph Number [0156] teaches an example graphical user interface (“GUI”) 2100 that displays KPIs associated with different applications running in a distributed computing system. The GUI 2100 includes a window 2102 that displays four entries 2104-2107 that list applications identified as Application 1, Application 2, Application 3, and Application 4 and show plots of curves 2108-2111 that represent corresponding KPIs plotted over the same recent run-time interval that ends at the current time denoted by t.sub.c. Horizontal dashed lines represent thresholds between normal and abnormal behavior of the applications. For example, KPI values of Applications 1, 2, and 4 are below a threshold 2112, which indicates the applications are performing normally as represented by normal icons, such as normal icon 2114. On the other hand, KPI values of the Application 3 exceed the threshold 2112, such as KPI value 2114, triggering a warning alert 2116. Threshold 2116 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 2118, which begins the automated computer-implemented process of troubleshooting Application 3 described below. Paragraph Number [0165] teaches when the ranked order of metrics in the highest ranked metrics 2302 matches the ranked order of metrics in one of the lists of ranked metrics, the corresponding rule reports the performance problem in a GUI in the form of an alert that identifies the performance problem and a recommendation for correcting the performance problem).
and causing to output an alarm when a change value of any one of the plurality of target performance indicators exceeds the alarm threshold for that target performance indicator (Paragraph Number [0165] teaches when the ranked order of metrics in the highest ranked metrics 2302 matches the ranked order of metrics in one of the lists of ranked metrics, the corresponding rule reports the performance problem in a GUI in the form of an alert that identifies the performance problem and a recommendation for correcting the performance problem. Paragraph Number [0167] teaches the operations manager resolves a run-time performance problem by generating recommendations for resolving the performance problem based on the highest ranked metrics that identify the run-time performance problem, such as one or more highest ranked metrics that also trigger corresponding run-time alerts. Let {v.sub.j}.sub.j∈C denote a set of metrics that corresponds to a performance problem, where C denotes indices of metrics with run-time metric values that violate corresponding thresholds (See also Paragraph Number [0156])).
As per claim 8, Harutyunyan teaches:
A terminal device, comprising: a processor and a memory; wherein the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory, so that the processor performs acts for processing performance indicators, the acts comprising: (Paragraph Number [0049] teaches generalized hardware and software components of a general-purpose computer system, such as a general-purpose computer system having an architecture similar to that shown in FIG. 1. The computer system 400 is often considered to include three fundamental layers: (1) a hardware layer or level 402; (2) an operating-system layer or level 404; and (3) an application-program layer or level 406. The hardware layer 402 includes one or more processors 408, system memory 410, different types of input-output (“I/O”) devices 410 and 412, and mass-storage devices 414. Of course, the hardware level also includes many other components, including power supplies, internal communications links and busses, specialized integrated circuits, many different types of processor-controlled or microprocessor-controlled peripheral devices and controllers, and many other components. (See also Paragraph Number [0043] and Claim 15)).
The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claim 15, Harutyunyan teaches:
A non-transitory computer-readable storage medium, wherein computer executable instructions are stored in the computer-readable storage medium, and a processor, when executing the computer-executable instructions, implements acts for processing performance indicators, the acts comprising: (Paragraph Number [0049] teaches generalized hardware and software components of a general-purpose computer system, such as a general-purpose computer system having an architecture similar to that shown in FIG. 1. The computer system 400 is often considered to include three fundamental layers: (1) a hardware layer or level 402; (2) an operating-system layer or level 404; and (3) an application-program layer or level 406. The hardware layer 402 includes one or more processors 408, system memory 410, different types of input-output (“I/O”) devices 410 and 412, and mass-storage devices 414. Of course, the hardware level also includes many other components, including power supplies, internal communications links and busses, specialized integrated circuits, many different types of processor-controlled or microprocessor-controlled peripheral devices and controllers, and many other components. (See also Paragraph Number [0043] and Claim 15)).
The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claims 2, 9, and 16, Harutyunyan teaches each of the limitations of claims 1, 8, and 15 respectively.
In addition, Harutyunyan teaches:
determining, according to the historical data, a subset of the plurality of second indicators (Paragraph Number [0086] teaches the operations manager trains inference models for applications running in a distributed computing system. For selected applications, the operations manager collects metrics and KPIs associated with the selected application for a historical time window from a data-storage device. The duration of the historical time window may be preset to an hour, two hours, twelve hours, a day, a week, or a month or even longer. Paragraph Number [0142] teaches an example of k-fold cross validation applied to an example set of metrics and KPI for k=5. In FIG. 19A, line 1902 represents a historical time window. Block 1904 represents a set of p metrics X recorded in the historical time window 1902. Shaded block 1906 represents KPI values for a KPI recorded in the historical time window 1902. The metrics X and KPI Y have been normalized and synchronized as described above).
a problem of multicollinearity not existing in the subset of the plurality of second indicators (Paragraph Number [0145] teaches the parametric inference models described above are computed based on an assumed linear relationship between metrics and a KPI. However, in certain cases, the relationship between metrics and a KPI is not linear. A cross-validation error estimate, denoted by CV.sub.error, may be used to determine whether a parametric inference model is suitable or a non-parametric inference model should be used instead. Paragraph Number [0147] teaches in cases where there is no linear relationship between metrics and a KPI, the operations manager trains a non-parametric inference model based on K-nearest neighbor regression. K-nearest neighbor regression is performed by first determining an optimum positive integer number, K. of nearest neighbors for the metrics and the KPI. The optimum K is then used to predict, or forecast, a KPI value for prospective changes to metric values of the metrics and troubleshoot a root cause of an application performance problem).
processing, by a multiple linear regression method, the subset of the plurality of second indicators to acquire a P value of each of the subset of second indicators (Paragraph Number [0147] teaches in cases where there is no linear relationship between metrics and a KPI, the operations manager trains a non-parametric inference model based on K-nearest neighbor regression. K-nearest neighbor regression is performed by first determining an optimum positive integer number, K. of nearest neighbors for the metrics and the KPI. The optimum K is then used to predict, or forecast, a KPI value for prospective changes to metric values of the metrics and troubleshoot a root cause of an application performance problem. Paragraph Number [0148] teaches an example of determining a K-nearest neighbor regression model. FIG. 20A shows an example of p-tuples of p metrics represented by points in a p-dimensional space and a plot 2000 of corresponding KPI values of KPI. Each p-tuple of the p metrics is represented by a point in a p-dimensional space and has a corresponding KPI value in the plot 2000 at the same time stamp. For example, point 2002 comprises metrics values of p metrics and corresponds to KPI value 2004 at a time stamp t.sub.1. Point 2006 comprises metrics values of the p metrics and corresponds to KPI value 2008 at a time stamp t.sub.i. Point 2010 comprises metrics values of the p metrics and corresponds to KPI value 2012 at a time stamp t.sub.n).
the P value being used for indicating whether there is a significant difference between the regression coefficient of the second indicator and a preset value (Paragraph Number [0155] teaches an application service degradation or non-optimal performance of an application can originate from the infrastructure and/or the application itself and can be discovered in an application key performance indicator (“KPI”). For example, an application with a KPI that violates a performance threshold can be selected for troubleshooting. After an inference model has been trained for the application, the computer-implemented processes and systems described below use the trained inference model and rules to identify the performance problem and generate a recommendation for correcting the performance problem. Paragraph Number [0156] teaches an example graphical user interface (“GUI”) 2100 that displays KPIs associated with different applications running in a distributed computing system. The GUI 2100 includes a window 2102 that displays four entries 2104-2107 that list applications identified as Application 1, Application 2, Application 3, and Application 4 and show plots of curves 2108-2111 that represent corresponding KPIs plotted over the same recent run-time interval that ends at the current time denoted by t.sub.c. Horizontal dashed lines represent thresholds between normal and abnormal behavior of the applications. For example, KPI values of Applications 1, 2, and 4 are below a threshold 2112, which indicates the applications are performing normally as represented by normal icons, such as normal icon 2114. On the other hand, KPI values of the Application 3 exceed the threshold 2112, such as KPI value 2114, triggering a warning alert 2116. Threshold 2116 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 2118, which begins the automated computer-implemented process of troubleshooting Application 3 described below).
determining the second indicators from the subset of second indicators of which the P value is less than or equal to a preset threshold as the plurality of target performance indicators. (Paragraph Number [0155] teaches an application service degradation or non-optimal performance of an application can originate from the infrastructure and/or the application itself and can be discovered in an application key performance indicator (“KPI”). For example, an application with a KPI that violates a performance threshold can be selected for troubleshooting. After an inference model has been trained for the application, the computer-implemented processes and systems described below use the trained inference model and rules to identify the performance problem and generate a recommendation for correcting the performance problem. The processes and systems eliminate human errors in detecting application performance problems and significantly reduce the time for detecting the performance problem from days and weeks to minutes and seconds. The processes and systems provide immediate notification of a performance problem, provide a recommendation for correcting the performance problem, and enable rapid execution of remedial measures that correct the performance problem).
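Note: purely for illustration of Examiner's understanding of the limitations of claims 2, 9, and 16, a non-limiting sketch (hypothetical data and names; not asserted to be Applicant's or Harutyunyan's implementation) of obtaining a P value per candidate indicator by multiple linear regression, each P value indicating whether the corresponding regression coefficient differs significantly from a preset value of zero, and retaining those indicators whose P value is at or below a preset threshold:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))              # subset of second indicators
    y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.2, size=200)

    # Ordinary least squares yields a P value per coefficient, testing the
    # null hypothesis that the coefficient equals zero (the preset value).
    model = sm.OLS(y, sm.add_constant(X)).fit()
    p_values = model.pvalues[1:]               # drop the intercept's P value

    # Keep as target performance indicators only those with P <= threshold.
    preset_threshold = 0.05
    targets = np.where(p_values <= preset_threshold)[0]
    print(targets)                             # expected: indicators 0 and 1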
As per claims 3, 10, and 17, Harutyunyan teaches each of the limitations of claims 1 and 2, 8 and 9, and 15 and 16 respectively.
In addition, Harutyunyan teaches:
processing, by a lasso linear regression method, the historical data to acquire an influence degree curve of each of the plurality of second indicators on the first indicator (Paragraph Number [0144] teaches lasso regression may be used to compute estimated model coefficients. Computation of the estimated model coefficients {{circumflex over (β)}.sub.j.sup.L}.sub.j=1.sup.p is a quadratic programming problem with linear inequality constraints as described in “Regression Shrinkage and Selection via the Lasso,” by Robert Tibshirani, J. R. Statist. Soc. B (1996) vol. 58, no. 1, pp. 267-288. Paragraph Number [0145] teaches the parametric inference models described above are computed based on an assumed linear relationship between metrics and a KPI. However, in certain cases, the relationship between metrics and a KPI is not linear. A cross-validation error estimate, denoted by CV.sub.error, may be used to determine whether a parametric inference model is suitable or a non-parametric inference model should be used instead. When the cross-validation error estimate satisfies the condition CV.sub.error<Th.sub.error, where Th.sub.error is an error threshold (e.g., Th.sub.error=0.1 or 0.2), the parametric inference model is used. Otherwise, when the cross-validation error estimate satisfies the condition CV.sub.error≥Th.sub.error, a non-parametric inference model is computed as described below. For the k-fold cross validation, the CV.sub.error=CV.sub.k, described above with reference to Equation (17b). For the other parametric inference models described above, the CV.sub.error=MSE(Ŷ,Y.sup.V), where Ŷ is the estimated KPI computed for a validating set of metrics X.sup.V and validating KPI Y.sup.V).
sorting, according to a plurality of influence degree curves, the plurality of second indicators in an order in which the influence degree becomes a preset value, (Paragraph Numbers [0159]-[0161] teach the operations manager computes an MSE, MSE(Ŷ.sub.m.sup.r,Y.sup.r), for each of the expected run-time KPIs. Each MSE indicates the degrees to which the KPI depends on a metric. An omitted metric with a large associated MSE indicates that the KPI depends on the omitted metric more than an omitted metric with a smaller MSE. The operations manager computes an importance score for each metric based on the associated MSE. The importance score is a measure of how much the KPI depends on the metric. The operations manager computes the importance score for each metric by first determining the largest MSE of the p run-time metrics: The operations manager then computes an importance score for each j=1, . . . , p. A threshold for identifying the highest ranked metrics is given by the condition... where Th.sub.score is a user defined threshold. For example, the user-defined threshold may be set to 70%, 60%, 50% or 40%. The importance score computed in Equation (25) is assigned to each corresponding metric. The metrics are rank ordered based on the corresponding importance scores to identify the highest ranked metrics that directly impact the KPI. For example, the highest ranked metrics are metrics with importance scores above the user-defined threshold Th.sub.score. The combination of highest ranked metrics associated with a KPI that indicates a performance problem with an application identify the root cause of the performance problem with the application).
determining first N performance indicators in the sorted plurality of second indicators as a plurality of initial performance indicators, N being an integer greater than 1 (Paragraph Numbers [0159]-[0162] teach the importance score computed in Equation (25) is assigned to each corresponding metric. The metrics are rank ordered based on the corresponding importance scores to identify the highest ranked metrics that directly impact the KPI. For example, the highest ranked metrics are metrics with importance scores above the user-defined threshold Th.sub.score. The combination of highest ranked metrics associated with a KPI that indicates a performance problem with an application identify the root cause of the performance problem with the application. Importance scores of the metrics are determined based on magnitudes of estimated model coefficients of a parametric inference model. The magnitudes of the estimated model coefficients are given by |{circumflex over (β)}.sub.j|, where |⋅| denotes the absolute value and j=1, . . . , p. The operations manager computes the importance score for each metric by first determining the largest magnitude estimated model coefficient: An importance score is assigned to each corresponding metric. The metrics are rank ordered based on the corresponding importance scores to identify the highest ranked metrics that may affect the KPI using the condition in Equation (26)).
determining, according to relevancies between the plurality of initial performance indicators, the subset of the plurality of second indicators, wherein relevancies between the subset of the plurality of second indicators is less than a preset relevancy. (Paragraph Number [0121] teaches if it is determined that the null hypothesis for the estimated model coefficients is rejected, it may still be the case that one or more of the metrics are irrelevant and not associated with the KPI Y. Including irrelevant metrics in the computation of the estimate KPI Ŷ leads to unnecessary complexity in the final parametric model. The operations manager removes irrelevant metrics (i.e., setting corresponding estimated model coefficients to zero in the model) to obtain a model based on metrics that more accurately relate to the KPI Y. Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. Paragraph Number [0124] teaches rather than eliminating metrics based on hypothesis testing, the operations manager may use a backward stepwise selection process to train a parametric model that contains only relevant metrics. The backward stepwise process employs a step-by-step process of eliminating irrelevant metrics from the set of metrics and thereby produces a parametric model that has been trained with relevant metrics. The process begins by partitioning metrics and the KPI recorded in a historical time window into a training set and a validating set).
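Note: purely for illustration of the lasso-based selection recited in claims 3, 10, and 17, a non-limiting sketch (hypothetical data; the ranking rule shown is Examiner's assumption, not Applicant's disclosed method), in which the lasso coefficient path serves as an influence degree curve per indicator and the first N indicators in the resulting order are kept as initial performance indicators:

    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 5))                 # five second indicators
    y = 3.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.2, size=200)

    # One coefficient path ("influence degree curve") per indicator, traced
    # as the lasso regularization strength decreases from its maximum.
    alphas, coef_paths, _ = lasso_path(X, y)      # coef_paths: (5, n_alphas)

    # Rank indicators by how early their curve becomes nonzero along the
    # path (earlier entry = greater influence); indicators that never enter
    # receive a sentinel rank so they sort last.
    entered = coef_paths != 0
    first_entry = np.where(entered.any(axis=1),
                           np.argmax(entered, axis=1),
                           coef_paths.shape[1])
    N = 2
    initial_indicators = np.argsort(first_entry)[:N]
    print(initial_indicators)                     # expected: indicators 0 and 2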
As per claims 4, 11, and 18, Harutyunyan teaches each of the limitations of claims 1-3, 8-10, and 15-17 respectively.
In addition, Harutyunyan teaches:
determining the relevancy between every two initial performance indicators (Paragraph Number [0121] teaches if it is determined that the null hypothesis for the estimated model coefficients is rejected, it may still be the case that one or more of the metrics are irrelevant and not associated with the KPI Y. Including irrelevant metrics in the computation of the estimate KPI Ŷ leads to unnecessary complexity in the final parametric model. The operations manager removes irrelevant metrics (i.e., setting corresponding estimated model coefficients to zero in the model) to obtain a model based on metrics that more accurately relate to the KPI Y. Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. Paragraph Number [0124] teaches rather than eliminating metrics based on hypothesis testing, the operations manager may use a backward stepwise selection process to train a parametric model that contains only relevant metrics. The backward stepwise process employs a step-by-step process of eliminating irrelevant metrics from the set of metrics and thereby produces a parametric model that has been trained with relevant metrics. The process begins by partitioning metrics and the KPI recorded in a historical time window into a training set and a validating set).
performing, according to the relevancy between every two initial performance indicators, deduplication processing on the plurality of initial performance indicators to acquire the subset of the plurality of second indicators (Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. Paragraph Number [0123] teaches the metric X.sub.j is not related to the KPI Y (i.e., is irrelevant) and the estimated model coefficient {circumflex over (β)}.sub.j is set to zero in the parametric model. When one or more metrics have been identified as being unrelated to the KPI Y, the model coefficients may be recalculated according to Equation (13) with the irrelevant metrics omitted from the design matrix {tilde over (X)} and corresponding model coefficients omitted from the process. [0124] In another implementation, rather than eliminating metrics based on hypothesis testing, the operations manager may use a backward stepwise selection process to train a parametric model that contains only relevant metrics. The backward stepwise process employs a step-by-step process of eliminating irrelevant metrics from the set of metrics and thereby produces a parametric model that has been trained with relevant metrics. The process begins by partitioning metrics and the KPI recorded in a historical time window into a training set and a validating set).
the deduplication processing being used for deleting one of two initial performance indicators between which the relevancy is greater than or equal to the preset relevancy (Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. Paragraph Number [0123] teaches the metric X.sub.j is not related to the KPI Y (i.e., is irrelevant) and the estimated model coefficient {circumflex over (β)}.sub.j is set to zero in the parametric model. When one or more metrics have been identified as being unrelated to the KPI Y, the model coefficients may be recalculated according to Equation (13) with the irrelevant metrics omitted from the design matrix {tilde over (X)} and corresponding model coefficients omitted from the process. [0124] In another implementation, rather than eliminating metrics based on hypothesis testing, the operations manager may use a backward stepwise selection process to train a parametric model that contains only relevant metrics. The backward stepwise process employs a step-by-step process of eliminating irrelevant metrics from the set of metrics and thereby produces a parametric model that has been trained with relevant metrics. The process begins by partitioning metrics and the KPI recorded in a historical time window into a training set and a validating set).
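Note: purely for illustration of the deduplication recited in claims 4, 11, and 18, a non-limiting sketch (hypothetical data and names; taking relevancy as absolute correlation is Examiner's assumption) that deletes one of any two indicators whose pairwise relevancy meets or exceeds a preset relevancy:

    import numpy as np

    def deduplicate(indicators: np.ndarray, preset_relevancy: float) -> list:
        """Drop one of any two indicator series (columns) whose absolute
        correlation meets or exceeds the preset relevancy."""
        corr = np.abs(np.corrcoef(indicators, rowvar=False))
        keep = []
        for j in range(indicators.shape[1]):
            # Keep column j only if not too correlated with any kept column.
            if all(corr[j, k] < preset_relevancy for k in keep):
                keep.append(j)
        return keep

    rng = np.random.default_rng(3)
    a = rng.normal(size=200)
    data = np.column_stack([a, a + rng.normal(scale=0.01, size=200),
                            rng.normal(size=200)])
    print(deduplicate(data, preset_relevancy=0.9))   # expected: [0, 2]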
As per claims 5, 12, and 19, Harutyunyan teaches each of the limitations of claims 1, 8, and 15 respectively.
In addition, Harutyunyan teaches:
determining a ratio of the minimum difference value to the regression coefficient of the target performance indicator (Paragraph Number [0155] teaches an application service degradation or non-optimal performance of an application can originate from the infrastructure and/or the application itself and can be discovered in an application key performance indicator (“KPI”). For example, an application with a KPI that violates a performance threshold can be selected for troubleshooting. After an inference model has been trained for the application, the computer-implemented processes and systems described below use the trained inference model and rules to identify the performance problem and generate a recommendation for correcting the performance problem. Paragraph Number [0156] teaches an example graphical user interface (“GUI”) 2100 that displays KPIs associated with different applications running in a distributed computing system. The GUI 2100 includes a window 2102 that displays four entries 2104-2107 that list applications identified as Application 1, Application 2, Application 3, and Application 4 and show plots of curves 2108-2111 that represent corresponding KPIs plotted over the same recent run-time interval that ends at the current time denoted by t.sub.c. Horizontal dashed lines represent thresholds between normal and abnormal behavior of the applications. For example, KPI values of Applications 1, 2, and 4 are below a threshold 2112, which indicates the applications are performing normally as represented by normal icons, such as normal icon 2114. On the other hand, KPI values of the Application 3 exceed the threshold 2112, such as KPI value 2114, triggering a warning alert 2116. Threshold 2116 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 2118, which begins the automated computer-implemented process of troubleshooting Application 3 described below).
determining the ratio as the maximum acceptable deterioration value corresponding to the target performance indicator. (Paragraph Number [0155] teaches an application service degradation or non-optimal performance of an application can originate from the infrastructure and/or the application itself and can be discovered in an application key performance indicator (“KPI”). For example, an application with a KPI that violates a performance threshold can be selected for troubleshooting. After an inference model has been trained for the application, the computer-implemented processes and systems described below use the trained inference model and rules to identify the performance problem and generate a recommendation for correcting the performance problem. Paragraph Number [0156] teaches an example graphical user interface (“GUI”) 2100 that displays KPIs associated with different applications running in a distributed computing system. The GUI 2100 includes a window 2102 that displays four entries 2104-2107 that list applications identified as Application 1, Application 2, Application 3, and Application 4 and show plots of curves 2108-2111 that represent corresponding KPIs plotted over the same recent run-time interval that ends at the current time denoted by t.sub.c. Horizontal dashed lines represent thresholds between normal and abnormal behavior of the applications. For example, KPI values of Applications 1, 2, and 4 are below a threshold 2112, which indicates the applications are performing normally as represented by normal icons, such as normal icon 2114. On the other hand, KPI values of the Application 3 exceed the threshold 2112, such as KPI value 2114, triggering a warning alert 2116. Threshold 2116 indicates the application exhibits critical behavior that triggers a critical alert icon that is not shown. A user may select “run troubleshooting” by clicking on the button 2118, which begins the automated computer-implemented process of troubleshooting Application 3 described below).
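Note: as a purely illustrative, hypothetical example of the recited ratio, if the minimum difference value of a significant change of the first indicator were 0.8 and the regression coefficient of a given target performance indicator were 2.0, the maximum acceptable deterioration value corresponding to that indicator would be 0.8 / 2.0 = 0.4, and a change value in that indicator exceeding 0.4 would exceed the alarm threshold set in the manner recited in claim 1.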
As per claims 6, 13, and 20, Harutyunyan teaches each of the limitations of claims 1, 8, and 15 respectively.
In addition, Harutyunyan teaches:
determining a plurality of indicator values of the first indicator (Paragraph Number [0086] teaches the operations manager trains inference models for applications running in a distributed computing system. For selected applications, the operations manager collects metrics and KPIs associated with the selected application for a historical time window from a data-storage device. The duration of the historical time window may be preset to an hour, two hours, twelve hours, a day, a week, or a month or even longer. Paragraph Number [0142] teaches an example of k-fold cross validation applied to an example set of metrics and KPI for k=5. In FIG. 19A, line 1902 represents a historical time window. Block 1904 represents a set of p metrics X recorded in the historical time window 1902. Shaded block 1906 represents KPI values for a KPI recorded in the historical time window 1902. The metrics X and KPI Y have been normalized and synchronized as described above).
determining, according to the plurality of indicator values of the first indicator, an indicator average value and an indicator variance value corresponding to the first indicator (Paragraph Number [0099] teaches for the types of processing carried out by the currently disclosed processes and systems, the metric values of the metrics and the KPI are synchronized to a general set of uniformly spaced time stamps. Metric values may be synchronized by computing a run-time average of metric values in a sliding time window centered at each time stamp of the general set of uniformly spaced time stamps. In an alternative implementation, the metric values with time stamps in the sliding time window may be smoothed by computing a run-time median of metric values in the sliding time window centered at each time stamp of the general set of uniformly spaced time stamps. Processes and systems may also synchronize the metrics by deleting time stamps of missing metric values and/or interpolating missing metric data at time stamps of the general set of uniformly spaced time stamps using a running average, linear interpolation, quadratic interpolation, or spline interpolation. Paragraph Number [0131]-[0132] teach d is the number of metrics in the corresponding model {circumflex over (M)}.sup.(γ); [0133] {circumflex over (σ)}.sup.2 is the variance of the full model {circumflex over (M)}.sup.(0) given by Equation (14b); and [0134] J=1, . . . , p−Q+1. The C.sub.p-statistic for the full model {circumflex over (M)}.sup.(0) is given by SSR(Y.sup.V,Ŷ.sub.1.sup.(0)). The parametric model with the smallest corresponding C.sub.p-statistic is the resulting trained parametric model).
determining, according to the indicator average value and the indicator variance value corresponding to the first indicator, the minimum difference value of the significant change corresponding to the first indicator by means of T-test (Paragraph Number [0122] teaches when the operations manager has determined that at least one of the metrics is relevant, the operations manager separately assesses the significance of the estimated model coefficients in the parametric model based on hypothesis testing. The null hypothesis for each estimated model coefficient is. The t-test is the test statistic based on the t-distribution. Paragraph Number [0144] teaches lasso regression may be used to compute estimated model coefficients. Paragraph Number [0162] teaches importance scores of the metrics are determined based on magnitudes of estimated model coefficients of a parametric inference model. The magnitudes of the estimated model coefficients are given by |{circumflex over (β)}.sub.j|, where |⋅| denotes the absolute value and j=1, . . . , p. The operations manager computes the importance score for each metric by first determining the largest magnitude estimated model coefficient: An importance score is assigned to each corresponding metric. The metrics are rank ordered based on the corresponding importance scores to identify the highest ranked metrics that may affect the KPI using the condition in Equation (26)).
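Note: purely for illustration of the T-test computation recited in claims 6, 13, and 20, a non-limiting sketch (hypothetical data; the two-sided one-sample formulation is Examiner's assumption) deriving a minimum difference value of a significant change from the indicator average value and indicator variance value:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    values = rng.normal(loc=10.0, scale=2.0, size=100)   # first-indicator values

    mean = values.mean()
    var = values.var(ddof=1)                  # sample variance
    n = len(values)

    # Smallest shift from the historical mean that a two-sided one-sample
    # t-test would deem significant at level alpha: t_crit * sqrt(var / n).
    alpha = 0.05
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    min_diff = t_crit * np.sqrt(var / n)
    print(mean, min_diff)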
As per claims 7 and 14, Harutyunyan teaches each of the limitations of claims 1 and 8 respectively.
In addition, Harutyunyan teaches:
wherein the plurality of second indicators are a plurality of indicators of the first indicator in a first business line direction (Paragraph Number [0086] teaches the operations manager trains inference models for applications running in a distributed computing system. For selected applications, the operations manager collects metrics and KPIs associated with the selected application for a historical time window from a data-storage device. The duration of the historical time window may be preset to an hour, two hours, twelve hours, a day, a week, or a month or even longer. Paragraph Number [0108] teaches the operations manager uses the p metrics, {X.sub.j}.sub.j=1.sup.p, and the KPI, Y, to train an inference model for the application. The inference model can be a parametric inference model or a non-parametric inference model. The inference model is used to determine a root cause of a performance problem recorded in run-time KPI values of the application, predict the health of the application, and generate recommendations for optimizing performance of the application in a distributed computing system. Based on the recommendations, the operations manager executes remedial measures that correct the performance problem, which optimizes performance of the application).
determining a plurality of reference performance indicators of the first indicator in a second business line direction and a regression coefficient of each reference performance indicator (Paragraph Number [0163] teaches highest ranked metrics associated with different types of performance problems. FIG. 22A shows an example of metrics, importance scores and ranks of metrics with importance scores above 50. The combination of metrics with importance scores greater than 50 are associated with inadequate memory allocated to VMs of an application. FIG. 22B shows an example of metrics, importance scores and ranks of metrics with importance scores above 50 and are associated with inadequate CPU allocated to VMs of an application. FIG. 22C shows an example of metrics, importance scores and ranks of metrics with importance scores above 50 and are associated with inadequate data storage allocated to files and databases used by an application. FIG. 22D shows an example of metrics, importance scores and ranks of metrics with importance scores above 50 and are associated with inadequate network bandwidth allocated to an application. Other types of combinations of metrics (not shown) that are used to identify the root cause of a performance problem include metrics of the host used to run the VMs of an application and metrics of disk space of data-storage devices used to store files and data used and generated by an application).
determining, according to the plurality of target performance indicators, the regression coefficient of each target performance indicator, the plurality of reference performance indicators and the regression coefficient of each reference performance indicator, an exchange relationship between the reference performance indicators and the target performance indicators, the exchange relationship being used for indicating whether the reference performance indicators are adjustable (Paragraph Number [0108] teaches the operations manager uses the p metrics, {X.sub.j}.sub.j=1.sup.p, and the KPI, Y, to train an inference model for the application. The inference model can be a parametric inference model or a non-parametric inference model. The inference model is used to determine a root cause of a performance problem recorded in run-time KPI values of the application, predict the health of the application, and generate recommendations for optimizing performance of the application in a distributed computing system. Based on the recommendations, the operations manager executes remedial measures that correct the performance problem, which optimizes performance of the application. Paragraph Number [0147] teaches in cases where there is no linear relationship between metrics and a KPI, the operations manager trains a non-parametric inference model based on K-nearest neighbor regression. K-nearest neighbor regression is performed by first determining an optimum positive integer number, K. of nearest neighbors for the metrics and the KPI. The optimum K is then used to predict, or forecast, a KPI value for prospective changes to metric values of the metrics and troubleshoot a root cause of an application performance problem. (Applicant's specification Paragraph Number [0088] provides that the exchange relationship may be used for indicating whether the reference performance indicators are adjustable)).
Response to Arguments
Applicant’s arguments filed 1/16/2026 have been fully considered but they are not persuasive.
Applicant argues that the claims are eligible under 35 USC 101. (See Applicant's Remarks, 1/16/2026, pgs. 12-14). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity or mathematical concepts. Examiner notes that each of the limitations that encapsulate the abstract concepts is identified in the 35 USC 101 analysis above. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations and provide direction for either a person or machine to follow at some future time, or to make calculations that are mathematical operations. The claims do not recite any particular use for these determinations and directions that improves upon the underlying computer technology (in this instance the computer software, processor, and memory). Instead, Examiner asserts that the additional elements in the claim language are used only to implement the abstract concepts using technology. The concepts described in the limitations, when taken both as a whole and individually, are not meaningfully different from those found by the courts to be abstract ideas and are similarly considered to be certain methods of organizing human activity, such as managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or calculations that are mathematical operations. The steps are then encapsulated into a particular technological environment by executing these steps upon a computer processor and utilizing features such as a computer interface, sending and receiving data over a network, or displaying information via a computerized graphical user interface. However, sending and receiving of information over a network and execution of algorithms on a computer are utilized only to facilitate the abstract concepts (i.e. selecting data on an interface, publishing/displaying information, etc.). As such, Examiner asserts that the implementation of the abstract concepts recited by the claims utilizes computer technology in a way that is considered to be generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts.
Applicant argues that the previously cited reference does not teach the newly amended portions, including the new limitations recited by the independent claims. (See Applicant's Remarks, 1/16/2026, pgs. 14-17). Examiner respectfully disagrees. Examiner has added citations from the Harutyunyan reference in response to Applicant's amendments and provided additional citations as explanation for the claim language in general, as shown in the new 35 USC 102 rejection above. As such, Applicant's arguments directed towards the previous 35 USC 102 rejection are moot. In response to Applicant's arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 35 USC 102 rejection presented above. In response to Applicant's specific assertion that the Harutyunyan reference does not teach "determining, based on the regression coefficient of each of the plurality of target performance indicators and a minimum difference value of a significant change of the first indicator, a maximum acceptable deterioration value corresponding to each of the plurality of target performance indicator," Examiner notes that there are no value limitations in the claims, so under a broadest reasonable interpretation the terms can be broadly construed to encompass a variety of threshold and mathematical processes. As such, Examiner asserts that Paragraph Numbers [0155]-[0156] teach this limitation (Examiner asserts that the plotting of various KPIs with lines that represent normal thresholds constitutes a general regression as required by the claim limitations; additionally, these thresholds include maximum acceptance values (i.e. a threshold by which an alert is triggered) and minimum difference values (the specific distinction between normal and abnormal KPIs)). Accordingly, Examiner asserts that the Harutyunyan reference teaches the asserted limitations under a broadest reasonable interpretation of both the claims and the reference. Examiner is not persuaded by the distinctions Applicant is attempting to make.
Conclusion
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H. DIVELBISS whose telephone number is (571) 270-0166. The fax phone number is 571-483-7110. The examiner can normally be reached on M-Th, 7:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H.D/Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624