Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Final Office Action.
In response to the Examiner's communication of 9/4/2025, Applicant responded on 1/5/2026 and amended claim 17.
Claims 1-20 are pending in this application and have been examined.
Response to Amendment
Applicant's amendments to claim 17 are sufficient to overcome the 35 USC 112 rejections set forth in the previous action. The 35 USC 112 rejections are hereby withdrawn.
Applicant's amendments to claim 17 are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action.
Applicant's amendments to claim 17 are not sufficient to overcome the prior art rejections set forth in the previous action.
Response to Arguments – 35 USC § 101
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…These limitations operate on continuous sensor telemetry and vast collections of historical alerts and associated outcomes. Such operations cannot practically be performed in the human mind, nor with pen and paper. This is particularly so for performance as claimed in an ongoing surveillance mode. The Office's assertion that the entire claimed method "can include a human using their mind and pen and paper" is conclusory and unsupported, and reflects exactly the type of improper expansion of the mental process category that the August Memo prohibits.…This analysis fails to follow the August Memo's explicit instruction to distinguish claims that actually recite a judicial exception from claims that merely involve one. As the August Memo explains, a claim recites a mathematical concept only when it sets forth or describes a mathematical relationship, calculation, formula, or equation...Claims 1, 11, and 18 do not recite any mathematical formula, equation, algorithm, or calculation, by name or otherwise, as is plain on the face of the claims. Instead, the claims recite an applied prognostic surveillance workflow that uses an irrelevance filter to suppress false alarms based on historically observed device outcomes (failure versus non-failure)…the claims may involve statistical techniques as part of this technological monitoring process, but this does not rise to the level of actual recitation of a mathematical relationship, calculation, formula, or equation…The Office's description ignores a technical limitation of the claim: the irrelevance filter that removes anomaly alarms based on empirical outcome-based correlations and prior non-incident pattern matching….This irrelevance filter is not generic data gathering or output. It is a specific technical mechanism that alters how anomaly alarms are selected and directly affects how the RUL estimate is produced.
The Examiner's failure to meaningfully address this limitation violates the required claim-as-a-whole analysis…the Office's rejection is deficient because it dismissed the meaningful technical limitations of the irrelevance filter without explanation…The independent claims address a technological problem in prognostic surveillance systems: false and irrelevant anomaly alarms degrade the accuracy and usefulness of remaining useful life (RUL) estimates. As the specification puts it, "lack of historical failure data means that prognostic-surveillance techniques are likely to generate a high rate of false alarms, which leads to unnecessary maintenance operations, and may cause utility system assets to be prematurely replaced…The claimed irrelevance filter improves the operation of the prognostic surveillance system by: removing alarms that are not correlated with prior failures of similar devices, and removing alarms associated with patterns previously observed in devices that operated without incident…This reduction in false alarms directly improves the technical functioning of the surveillance and RUL estimation pipeline. The claim does not merely state a desired result; it recites a particular way in which the result is achieved through outcome-based alarm suppression…when properly evaluated as a whole, the independent claims are directed to an improvement in prognostic surveillance, and integrate any alleged abstract idea into a practical application under Step 2A, Prong Two….This reasoning improperly isolates individual steps and ignores their interaction with the irrelevance filter…the receiving of sensor signals, generation of alarms, and generation of the RUL notification are integral parts of the technical surveillance process whose operation is materially altered by the irrelevance filter….the December Memo (revising MPEP § 2106.05(f)) makes clear that claims reciting a technological solution to a technological problem are not mere "apply it" implementations.
The independent claims specify how alarms are filtered using prior outcome data, overriding naive alarm generation and improving prognostic accuracy. This is precisely the type of technical solution that the revised MPEP recognizes as more than "apply it"….these limitations further show integration with the practical application of prognostic surveillance at Step 2A, Prong Two, and show the ordered combination of the claims to be directed to significantly more than the alleged judicial exception at Step 2B...They do not admit that the claimed irrelevance filter or the outcome-based alarm suppression logic is well-understood, routine, or conventional. To the contrary, specification paragraph [0025] expressly describes the irrelevance filter as novel and inspired by biomimicry….the specification provides evidence that the claimed combination, including the irrelevance filter, is novel, and thus not well-understood, routine, or conventional….The independent claims recite prognostic surveillance in which there is an irrelevance filter that removes (1) anomaly alarms for anomalous signal patterns that are not correlated with previous failures and (2) anomaly alarms for anomalous signal patterns that match signal patterns that are observed for operation without incident. This combination of elements is not well-understood, routine, or conventional. Further, the Office has provided no evidentiary support, as required by Berkheimer, that these claim elements, alone or in combination, were conventional. Conclusory statements regarding generic computing functions are insufficient to sustain a § 101 rejection…” The Examiner respectfully disagrees.
Respectfully, [w]hen performing the analysis at Step 2A Prong One, it is sufficient for the examiner to provide a reasoned rationale that identifies the judicial exception recited in the claim and explains why it is considered a judicial exception (e.g., that the claim limitation(s) falls within one of the abstract idea groupings). Therefore, there is no requirement for the examiner to rely on evidence, such as publications or an affidavit or declaration under 37 CFR 1.104(d)(2), to find that a claim recites a judicial exception. Cf. Affinity Labs of Tex., LLC v. Amazon.com Inc., 838 F.3d 1266, 1271-72, 120 USPQ2d 1210, 1214-15 (Fed. Cir. 2016) (affirming district court decision that identified an abstract idea in the claims without relying on evidence); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1362-64, 115 USPQ2d 1090, 1092-94 (Fed. Cir. 2015) (same); Content Extraction & Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1347, 113 USPQ2d 1354, 1357-58 (Fed. Cir. 2014) (same).
At Step 2A Prong Two or Step 2B, there is no requirement for evidence to support a finding that the exception is not integrated into a practical application or that the additional elements do not amount to significantly more than the exception unless the examiner asserts that additional limitations are well-understood, routine, conventional activities in Step 2B. See MPEP 2106.07(a).
Analyzing the claims as a whole, under Step 2A, Prong 1:
The limitations regarding, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms.…., under the broadest reasonable interpretation, can include a human using their mind and using pen and paper to, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the 
irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms…; therefore, the claims are directed to a mental process.
Further, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms, are mathematical concepts.
Accordingly, the claims are directed to a mental process and mathematical concepts, and thus the claims are directed to an abstract idea under the first prong of Step 2A.
Analyzing the claims as a whole, under Step 2A, Prong 2:
This judicial exception is not integrated into a practical application under the second prong of Step 2A.
In particular, the claims recite additional elements beyond the recited abstract idea identified under Step 2A, Prong 1, such as:
Claims 1, 11, and 18: sensors in the electronic device; a non-transitory computer-readable storage medium storing instructions that when executed by a computing system comprising one or more computing devices, cause the computing system to; a system comprising: one or more computing devices comprising at least one processor and at least one associated memory; and a notification mechanism configured to execute on the at least one processor, wherein the notification mechanism is configured to.
Pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus the additional elements are no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer.
Additionally, with respect to “…receiving…,” “…generating a set of anomaly alarms…,” and “…generating a notification…,” these elements do not add meaningful limitations to integrate the abstract idea into a practical application because they are extra-solution activity (pre- and post-solution activity), i.e., data gathering (“…receiving…”) and data output (“…generating a set of anomaly alarms…,” “…generating a notification…”).
Analyzing the claims as a whole, under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.
As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., “apply it”).
Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by Applicant's own disclosure, as required by the Berkheimer Memo, in at least:
[0006]The disclosed embodiments provide systems and methods that estimate a remaining useful life (RUL) of an electronic device, which may be a utility system asset, an electro-mechanical device, or other type of electronic-based device. Although the present disclosure is described with reference to a utility system asset as an embodiment, the present systems and methods may be applied to other types of electronic devices. For example, utility system assets may include but are not limited to power transformers, switches, circuit breakers, power storage units (e.g., batteries, cells), power generating systems and/or components (e.g., power generators, solar panels, wind turbines, hydroelectric components), or other types of electronic devices. The present systems and methods may be applied in a similar manner to other electronic devices, for example, including but not limited to, vehicle components including engines, electric vehicle batteries, control systems, etc.; computing systems and computing components including smart devices, phones, laptops, servers, processors, data storage devices, displays/monitors, networking equipment, or other types of computing system-based components.
[0022]The following description is presented to enable any person skilled in the art to make and use the present embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present embodiments. Thus, the present embodiments are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
[0023]The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
[0024]The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium. Furthermore, the methods and processes described below can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
[0025]The disclosed embodiments make use of a novel “irrelevance filter,” which mimics the functionality of the human brain’s basal ganglia to facilitate improved RUL prognostics for large populations of high-cost utility grid assets, especially high-voltage transformers. Many industries are presently benefitting from a new science called "biomimicry" that analyzes nature’s best ideas and adapts them for engineering use cases. The invention disclosed herein provides an example of biomimicry.
[0026]Swedish researchers performing MRI studies on human brains discovered that the basal ganglia act as an “irrelevance filter,” which plays a crucial role in human memory and cognition. If the human brain tried to process and store all inputs coming in through the senses, the brain would be overwhelmed. The basal ganglia weeds out unnecessary information, thereby leaving only those details essential to form memories that contribute to survival of a species, such as memories associated with: acquisition of food; avoidance of danger; propagation of the species; and assurance that basic needs are met. It has been shown that humans with the best memories have highly active basal ganglia.
[0030]Hence, what is needed is an “irrelevance filter” that processes time-series signals for utility system assets that have been run to failure, and produces optimal weighting factors for an associated RUL methodology. Note that this is analogous to the functionality of a basal ganglia “filter” for a human brain, which receives large streams of neural “signals” associated with the five primary senses, and periodically “alerts” the human to patterns that have direct relevance to danger, subsistence, or propagation-of-species opportunities.
[0032]Our anomaly discovery process uses a systematic binary hypothesis technique called the “sequential probability ratio test” (SPRT) as an irrelevance filter for large volumes of time-series signals, and identifies small subsets of time-series signals that warrant further pattern-recognition analyses to facilitate anomaly detection. Hence, our new technique substantially reduces RUL-analysis costs by systematically and safely filtering anomaly alerts generated for individual utility system assets so that RUL-analysis operations are only performed for “relevant” signature patterns that are likely to be associated with incipient fault conditions.
[0034]FIG. 1 illustrates an exemplary prognostic-surveillance system 100 in accordance with the disclosed embodiments. As illustrated in FIG. 1, prognostic-surveillance system 100 operates on a set of time-series sensor signals 104 obtained from sensors in an electronic device. In one embodiment as described herein, the electronic device may be a utility system asset 102, such as a power transformer, but other electronic devices may be used. Note that time-series signals 104 can originate from any type of sensor, which can be located in a component in utility system asset 102, including: a voltage sensor; a current sensor; a pressure sensor; a rotational speed sensor; and a vibration sensor.
[0073] Referring to FIG. 1, NLNP regression model 108 and difference module 112 work together to remove (filter) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT module 116, which generates corresponding SPRT alarms 118.
[0082] Next, the system applies an irrelevance filter to the anomaly alarms (e.g., SPRT alarms) to produce a filtered anomaly alarms (e.g., SPRT alarms), wherein the irrelevance filter removes SPRT alarms for signals that are not correlated with previous failures of similar utility system assets (step 210).
[0083] The system then uses a logistic-regression model to compute an RUL-based risk index for the utility system asset based on tripping frequencies of the filtered SPRT alarms (step 212). If the risk index exceeds a risk-index threshold, the system generates a notification indicating that the electronic device has a limited remaining useful life (e.g., is near a predicted failing point) and should be replaced (step 214).
[0085] FIG. 4 presents a flow chart illustrating a process for training a logistic-regression model to predict an RUL for an asset and for configuring an associated irrelevance filter.
[0086] The irrelevance filter is also configured to remove SPRT alarms (e.g., anomaly alarms) that are not relevant (step 416). SPRT alarms that are not relevant include alarms that occur in time intervals that are not near a failure time of the asset/device (e.g., a time beyond/outside the time threshold).
[0087]Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[0088]The foregoing descriptions of embodiments have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present description to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present description. The scope of the present description is defined by the appended claims.
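For context, the surveillance pipeline described in the specification excerpts above (paragraphs [0073], [0082]-[0083], and [0086]) can be sketched in Python. This is an illustrative sketch only, not the disclosed implementation; the pattern sets, the logistic weight and bias, and the risk-index threshold are hypothetical placeholders, not values from the record.

```python
import math

def residuals(measured, predicted):
    """Difference module (cf. [0073], sketch): subtract the regression
    model's estimates so that, for a healthy device, the residual R(t)
    is a near-zero-mean stationary series whose statistical drift
    signals degradation."""
    return [x - xhat for x, xhat in zip(measured, predicted)]

def irrelevance_filter(alarms, failure_patterns, benign_patterns):
    """Cf. [0082] and the claims (sketch): keep an alarm only if its
    signal pattern is correlated with a previous failure of a similar
    device AND does not match a pattern previously observed in similar
    devices that operated without incident."""
    return [a for a in alarms
            if a["pattern"] in failure_patterns
            and a["pattern"] not in benign_patterns]

def rul_risk_index(filtered_alarms, window_hours, weight=0.8, bias=-2.0):
    """Cf. [0083] (sketch): a logistic-regression-style risk index
    driven by the tripping frequency of the filtered alarms; weight
    and bias stand in for trained model parameters."""
    freq = len(filtered_alarms) / window_hours  # alarms per hour
    return 1.0 / (1.0 + math.exp(-(weight * freq + bias)))

# Hypothetical run: three alarms, one survives the irrelevance filter.
alarms = [{"pattern": "p1"}, {"pattern": "p2"}, {"pattern": "p3"}]
kept = irrelevance_filter(alarms,
                          failure_patterns={"p1", "p3"},
                          benign_patterns={"p3"})
risk = rul_risk_index(kept, window_hours=1.0)
if risk > 0.5:  # hypothetical risk-index threshold (cf. step 214)
    print("limited remaining useful life; schedule replacement")
```

The two suppression criteria in `irrelevance_filter` mirror the two claimed removal conditions (no failure correlation; match to a no-incident pattern); everything else is scaffolding to make the sketch runnable.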
Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d).
The claims are directed to, …an applied prognostic surveillance workflow that uses an irrelevance filter to suppress false alarms based on historically observed device outcomes (failure versus non-failure)…irrelevance filter that removes anomaly alarms based on empirical outcome-based correlations and prior non-incident pattern matching…, which is a problem directed to a mental process (i.e., humans observing and evaluating electronic devices’ operation patterns, humans judging anomalous operating patterns with statistical analysis and filtering out anomalies and irrelevant alarms with the statistical SPRT technique, and humans notifying other humans of the remaining useful life of electronic devices after evaluating with statistical analysis) and mathematical concepts (i.e., humans judging anomalous operating patterns with statistical analysis and filtering out anomalies and irrelevant alarms with the statistical SPRT technique, and humans notifying other humans of the remaining useful life of electronic devices after evaluating with statistical analysis), as established in Step 2A, Prong 1. This problem does not specifically arise in the realm of computer technology; rather, this problem existed and was addressed long before the advent of computers. Thus, the claims do not recite a technical improvement to a technical problem or a solution necessarily rooted in computing technology. Pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus the additional elements are no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer, and perform extra-solution activities. Therefore, as a whole, the additional elements do not integrate the abstract ideas into a practical application in Step 2A, Prong 2, or amount to significantly more in Step 2B.
Applicant’s arguments hinge on the “irrelevance filter” element being an additional element beyond the identified abstract ideas. However, as claimed, under the broadest reasonable interpretation, the Examiner is interpreting the “irrelevance filter” to be an abstract element that is part of and directed to the identified abstract idea, and this element is addressed in Step 2A, Prong 1.
Further, as per the Berkheimer memo, according to Applicant’s own specification, the “irrelevance filter” element is indeed an abstract element directed to a mental process and mathematical concepts:
[0030]Hence, what is needed is an “irrelevance filter” that processes time-series signals for utility system assets that have been run to failure, and produces optimal weighting factors for an associated RUL methodology. Note that this is analogous to the functionality of a basal ganglia “filter” for a human brain, which receives large streams of neural “signals” associated with the five primary senses, and periodically “alerts” the human to patterns that have direct relevance to danger, subsistence, or propagation-of-species opportunities.
[0032]Our anomaly discovery process uses a systematic binary hypothesis technique called the “sequential probability ratio test” (SPRT) as an irrelevance filter for large volumes of time-series signals, and identifies small subsets of time-series signals that warrant further pattern-recognition analyses to facilitate anomaly detection. Hence, our new technique substantially reduces RUL-analysis costs by systematically and safely filtering anomaly alerts generated for individual utility system assets so that RUL-analysis operations are only performed for “relevant” signature patterns that are likely to be associated with incipient fault conditions.
[0073] Referring to FIG. 1, NLNP regression model 108 and difference module 112 work together to remove (filter) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT module 116, which generates corresponding SPRT alarms 118.
[0082] Next, the system applies an irrelevance filter to the anomaly alarms (e.g., SPRT alarms) to produce a filtered anomaly alarms (e.g., SPRT alarms), wherein the irrelevance filter removes SPRT alarms for signals that are not correlated with previous failures of similar utility system assets (step 210).
[0083] The system then uses a logistic-regression model to compute an RUL-based risk index for the utility system asset based on tripping frequencies of the filtered SPRT alarms (step 212). If the risk index exceeds a risk-index threshold, the system generates a notification indicating that the electronic device has a limited remaining useful life (e.g., is near a predicted failing point) and should be replaced (step 214).
[0085] FIG. 4 presents a flow chart illustrating a process for training a logistic-regression model to predict an RUL for an asset and for configuring an associated irrelevance filter.
[0086] The irrelevance filter is also configured to remove SPRT alarms (e.g., anomaly alarms) that are not relevant (step 416). SPRT alarms that are not relevant include alarms that occur in time intervals that are not near a failure time of the asset/device (e.g., a time beyond/outside the time threshold).
Examiner respectfully notes that, by Applicant’s own admission in Applicant’s specification:
[0030]Hence, what is needed is an “irrelevance filter” that processes time-series signals for utility system assets that have been run to failure, and produces optimal weighting factors for an associated RUL methodology. Note that this is analogous to the functionality of a basal ganglia “filter” for a human brain, which receives large streams of neural “signals” associated with the five primary senses, and periodically “alerts” the human to patterns that have direct relevance to danger, subsistence, or propagation-of-species opportunities. (i.e. mental process)
[0032]Our anomaly discovery process uses a systematic binary hypothesis technique called the “sequential probability ratio test” (SPRT) as an irrelevance filter for large volumes of time-series signals (i.e. mathematical concepts)
The “irrelevance filter” is indeed an abstract element directed to a mental process and mathematical concepts.
Furthermore, according to https://web.archive.org/web/20090802201425/http://en.wikipedia.org/wiki/Sequential_probability_ratio_test, 8/2/2009, “The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald.[1] Neyman and Pearson's 1933 result inspired Wald to reformulate it as a sequential analysis problem. The Neyman-Pearson lemma, by contrast, offers a rule of thumb for when all the data is collected (and its likelihood ratio known). While originally developed for use in quality control studies in the realm of manufacturing, SPRT has been formulated for use in the computerized testing of human examinees as a termination criterion.” Thus, the sequential probability ratio test (SPRT) is an abstract mathematical statistical method developed for the purpose of organizing human activities.
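For reference, Wald's SPRT as described in the quoted passage reduces to accumulating a log-likelihood ratio against two thresholds derived from the desired error rates. A minimal sketch follows, assuming the common Gaussian mean-shift formulation; the hypothesized means, variance, and error rates are illustrative assumptions, not values from the specification.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a shift in the mean
    of a Gaussian signal (H0: mean = mu0 vs. H1: mean = mu1).

    Returns ("accept H0" | "accept H1" | "continue", samples consumed).
    """
    # Wald's decision thresholds from the target error rates.
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0                              # cumulative log-likelihood ratio
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood ratio increment for one observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)
```

In the disclosed system's terms, "accept H1" corresponds to an SPRT alarm (a statistical change in the residual), while "accept H0" or "continue" corresponds to no alarm; the test consumes only as many samples as it needs to decide.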
Thus, Examiner’s interpretation and analysis of the argued “irrelevance filter” element as an abstract element is indeed correct and addressed in Step 2A, Prong 1.
The limitations are abstract elements that are part of and directed to the recited abstract idea as described above with respect to the first prong of Step 2A, i.e., a mental process and mathematical concepts, applied with generic computing components and generally linked to a technical environment (i.e., a computer). Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2 (Fed. Cir. May 15, 2018).
Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE Inc. v. Google, Inc., 765 F.3d 1350, 1355 (2014); see SAP America at p. 12. As discussed in SAP America, no matter how much of an advance the claims recite, when “the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm,” “[a]n advance of that nature is ineligible for patenting.” Id. at p. 3.
Additionally, examples the courts have indicated may not be sufficient to show an improvement to technology include:
i. A commonplace business method being applied on a general purpose computer, Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
ii. Using well-known standard laboratory techniques to detect enzyme levels in a bodily sample such as blood or plasma, Cleveland Clinic Foundation v. True Health Diagnostics, LLC, 859 F.3d 1352, 1355, 1362, 123 USPQ2d 1081, 1082-83, 1088 (Fed. Cir. 2017);
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
iv. Delivering broadcast content to a portable electronic device such as a cellular telephone, when claimed at a high level of generality, Affinity Labs of Tex. v. Amazon.com, 838 F.3d 1266, 1270, 120 USPQ2d 1210, 1213 (Fed. Cir. 2016); Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016);
v. Selecting one type of content (e.g., FM radio content) from within a range of existing broadcast content types, or selecting a particular generic function for computer hardware to perform (e.g., buffering content) from within a range of well-known, routine, conventional functions performed by the hardware, Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1264, 120 USPQ2d 1201, 1208 (Fed. Cir. 2016).
Even more, the courts have found additional elements to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process. Examples include:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
ii. A process for monitoring audit log data that is executed on a general-purpose computer where the increased speed in the process comes solely from the capabilities of the general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016);
iii. Requiring the use of software to tailor information and provide it to the user on a generic computer, Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015).
Response to Arguments – Prior Art
Applicant’s arguments with respect to the rejections have been fully considered, but they are not persuasive.
Applicant submits, “…The rationale for combination provided by the Officer does not demonstrate that a skilled artisan would be led to combine the references in the particular ways claimed by the invention. Moreover, "the proposed modification cannot render the prior art unsatisfactory for its intended purpose...Removing alerts would render Salunke unsatisfactory for the very intended purpose that was cited by the Office as the rationale for combination. "If a proposed modification would render the prior art invention being modified unsatisfactory for its intended purpose, there may be no suggestion or motivation to make the proposed modification…Thus, the rationale for combination applied in rejection of all claims under Section 103 would not prompt a skilled artisan to combine the references in the way claimed. No prima facie case of obviousness can be sustained based on a combination of references that lacks a reason that prompts combination of the references in the way that is claimed…an obviousness rejection requires an articulated reasoning with a rational underpinning explaining why a person of ordinary skill would have made the specific modification recited in the claims, not merely why improvement is desirable in the abstract. Absent an explanation tying the teachings of the cited references to the particular operations recited in the claims, the rejection relies on impermissible hindsight…Thus, the Office must articulate why one would have modified Gross to: (1) remove anomaly alarms specifically when patterns match prior non-failure behavior of similar devices; and (2) condition RUL estimation on historically outcome-validated alarm relevance, rather than on raw alarm frequency or short-term trending (as recited in the claims). Neither Gross nor Salunke is described as framing alarm relevance in terms of cross-device historical non-failure pattern matching used to exclude alarms from prognostic modeling. 
The rejection does not identify any disclosure in Salunke that would suggest using prior successful operation of similar devices as a basis for suppressing alarms that would otherwise influence RUL estimation in Gross...” Examiner respectfully disagrees.
Respectfully, Applicant’s argument requires that each of the features of the supporting references be bodily incorporated into the primary reference, and that each and every element be individually taught by a single reference. However, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one single reference or in all of the references. See id. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See id.; In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Under the broadest reasonable interpretation, Gross teaches: A method for estimating a remaining useful life, RUL, of an electronic device, wherein during a surveillance mode, the method comprises:
receiving a set of time-series signals gathered from sensors in the electronic device while the electronic device is operating; (in at least [0022] FIG. 1 illustrates real-time telemetry system 100 in accordance with an embodiment of the present invention. Real-time telemetry system 100 contains computer system 102. Computer system 102 can be any type of computer system, such as a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance. [0023] Real-time telemetry system 100 also contains telemetry device 104, which gathers telemetry signals 106 from the various sensors and monitoring tools within computer system 102, and directs telemetry signals 106 to a local or a remote location that contains remaining useful life (RUL) prediction tool 108. [0024] that telemetry signals 106 gathered by real-time telemetry system 104 can include signals associated with physical and/or software performance parameters measured through sensors within the computer system. The physical parameters can include, but are not limited to: distributed temperatures within the computer system, relative humidity, cumulative or differential vibrations within the computer system, fan speed, acoustic signals, currents, voltages, time-domain reflectometry (TDR) readings, and miscellaneous environmental variables. The software parameters can include, but are not limited to: load metrics, CPU utilization, idle time, memory utilization, disk activity, transaction latencies, system throughput, queue lengths, I/O traffic, bus saturation metrics, FIFO overflow statistics, and other performance metrics reported by the operating system.)
detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; (in at least [0031] The foregoing step takes away (or filters) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT mechanism 204. [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced by SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F_1, . . . , F_m], where F(t)=[F_1(t), . . . , F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t: F(t)=SPRT(R(t)).)
generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; (in at least [0031] The foregoing step takes away (or filters) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT mechanism 204. [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced by SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F_1, . . . , F_m], where F(t)=[F_1(t), . . . , F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t: F(t)=SPRT(R(t)). [0033] Logistic regression mechanism 206 records each instance of SPRT mechanism 204 tripping an alarm and uses these instances to determine the current alarm-tripping frequency of SPRT mechanism 204. Logistic regression mechanism 206 then calculates the RUL of the computer system in the following way. We denote the probability of system S to fail within the next T hours, given the current condition determined by the current SPRT alarm-tripping frequencies F, as p(T,F). The relationship between p and the current condition F is modeled using the linear logistic regression model: p(T,F)=1/(1+exp(−(a(T)+b_1(T)*F_1+b_2(T)*F_2+ . . . +b_m(T)*F_m))), where a(T) and b(T)=[b_1(T), . . . , b_m(T)] are estimated from historical or experimental failure data for the system. Note that the tripping frequencies are normalized to have values between 0 and 1 to simplify this calculation.)
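For clarity, the logistic-regression relationship quoted from Gross above can be illustrated numerically. The sketch below is illustrative only; the function name and the coefficient values a(T) and b(T) are hypothetical, not taken from Gross, and serve merely to show how normalized SPRT alarm-tripping frequencies map to a failure probability:

```python
import math

def failure_probability(freqs, a, b):
    """p(T,F) = 1 / (1 + exp(-(a(T) + b_1(T)*F_1 + ... + b_m(T)*F_m)))

    freqs: normalized SPRT alarm-tripping frequencies F_i, each in [0, 1]
    a, b:  coefficients a(T) and [b_1(T), ..., b_m(T)], which Gross
           describes as estimated from historical failure data
    """
    z = a + sum(bi * fi for bi, fi in zip(b, freqs))
    return 1.0 / (1.0 + math.exp(-z))
```

With hypothetical coefficients a = −4 and b = [3, 5], low tripping frequencies [0.1, 0.2] yield p ≈ 0.063, while high frequencies [0.9, 0.9] yield p ≈ 0.96, consistent with Gross's description that the failure probability grows as the alarm-tripping frequencies increase.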
… suspected false alarms, …; (in at least [0037] The graph in FIG. 3B presents a 2-dimensional failure model in accordance with an embodiment of the present invention. As shown in FIG. 3B, when the actual model is 1-dimensional, the addition of an extra parameter does not improve fit. However, the addition of an irrelevant prognostic parameter increases uncertainty in the RUL estimation.)
… associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and
generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the … anomaly alarms. (in at least [0047] The system then performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
Although implied, Gross does not expressly disclose the following limitations, which, however, are taught by Salunke:
applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device (in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. 
[0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. 
For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident (in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. 
[0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. 
For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s). [0127] If the features from the next time window match the OOM pattern, the OOM risk is increased (operation 912). For example, the “level” of the OOM risk may be incremented each time the features from a subsequent time window match the OOM pattern. If the features match the clear condition, the OOM risk is cleared (operation 914), and monitoring of OOM risks in the virtual machine is reset. For example, the OOM risk may be cleared if the features no longer match the OOM pattern and the amount of free memory in the virtual machine is trending upward. 
If the features match neither the OOM pattern nor the clear condition, the OOM risk is flattened (operation 916). For example, the “flattened” OOM risk may represent an unchanged level of OOM risk in the virtual machine. Each update to the OOM risk in operations 912-916 may be used to update the indication of the OOM risk for the virtual machine. For example, each change in the level of OOM risk for the virtual machine may result in the generation of a corresponding alert, while alerting of the flattened OOM risk may optionally be omitted. [0127] The OOM risk may continue to be analyzed (operation 918) while the OOM risk is present. If the OOM risk is to be analyzed, the set of features is estimated within subsequent time windows (operation 908), and the OOM risk is updated accordingly (operations 910-914) until the OOM risk is cleared.)
… filtered anomaly alarms…(in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. [0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. 
An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
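For illustration only, the kind of outcome-based alarm suppression attributed to the claimed “irrelevance filter” can be sketched as follows. The function, its data structures, and the pattern-matching predicate are hypothetical and are not drawn from Gross, Salunke, or the application:

```python
def irrelevance_filter(alarms, benign_patterns, match):
    """Remove anomaly alarms whose signal pattern matches a pattern
    previously observed on similar devices that operated without incident.

    alarms:          list of (alarm_id, pattern) pairs
    benign_patterns: patterns from similar devices with no failure outcome
    match:           predicate deciding whether two patterns are similar
    Returns the filtered alarms, with suspected false alarms removed.
    """
    return [
        (alarm_id, pattern)
        for alarm_id, pattern in alarms
        if not any(match(pattern, benign) for benign in benign_patterns)
    ]
```

For example, with a simple equality predicate, an alarm whose pattern equals a historically benign pattern is suppressed, while all other alarms pass through to the downstream RUL estimation.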
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, in the same field of endeavor, to have modified the teachings of Gross with those of Salunke, as set forth above, with a reasonable expectation of success in arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of Gross with the motivation of, “…improve the management of anomalous events by an administrator of computer system 200… to improve the performance of stateless detection of anomalous events in the virtual machine… businesses are increasingly relying on enterprise computing systems to process ever-larger volumes of electronic transactions. A failure in one of these enterprise computing systems can be disastrous, potentially resulting in millions of dollars of lost business. More importantly, a failure can seriously undermine consumer confidence in a business, making customers less likely to purchase goods and services from the business. Hence, it is important to ensure reliability and/or high availability in such enterprise computing systems… To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert…,” as recited in Salunke.
Additionally, see MPEP 2143. Examples of rationales that may support a conclusion of obviousness include:
(C) Use of known technique to improve similar devices (methods, or products) in the same way;
(D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results;
Furthermore, the motivation of, “…improve the management of anomalous events by an administrator of computer system 200… to improve the performance of stateless detection of anomalous events in the virtual machine… businesses are increasingly relying on enterprise computing systems to process ever-larger volumes of electronic transactions. A failure in one of these enterprise computing systems can be disastrous, potentially resulting in millions of dollars of lost business. More importantly, a failure can seriously undermine consumer confidence in a business, making customers less likely to purchase goods and services from the business. Hence, it is important to ensure reliability and/or high availability in such enterprise computing systems… To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert…,” as recited in Salunke, is not a general desire for improvement, but rather reflects the improvements expressly recognized by Salunke as the benefits of the elements taught by Salunke, which, in the same field of endeavor, an ordinary artisan would have been motivated to combine.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 (and similarly claims 11 and 18) recites, “A method for estimating a remaining useful life, RUL, of an electronic device, wherein during a surveillance mode, the method comprises:
receiving a set of time-series signals gathered from … while the electronic device is operating;
detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns;
generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns;
applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device;
wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and
generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms”
Analyzing under Step 2A, Prong 1:
The limitations regarding, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms.…., under the broadest reasonable interpretation, can include a human using their mind and using pen and paper to, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the 
irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms…; therefore, the claims are directed to a mental process.
Further, the limitations regarding, …estimating a remaining useful life, RUL, of an electronic device…receiving a set of time-series signals gathered from … while the electronic device is operating; detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device; wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the filtered anomaly alarms…, are mathematical concepts.
Accordingly, the claims are directed to a mental process and mathematical concepts; thus, the claims are directed to an abstract idea under the first prong of Step 2A.
Analyzing under Step 2A, Prong 2:
This judicial exception is not integrated into a practical application under the second prong of Step 2A.
In particular, the claims recite the additional elements beyond the recited abstract idea identified under Step 2A, Prong 1, such as:
Claims 1, 11, 18: sensors in the electronic device; a non-transitory computer-readable storage medium storing instructions that, when executed by a computing system comprising one or more computing devices, cause the computing system to; a system comprising: one or more computing devices comprising at least one processor and at least one associated memory; and a notification mechanism configured to execute on the at least one processor, wherein the notification mechanism is configured to
, and pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus, they are no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer.
Additionally, with respect to “…receiving…,” “…generating a set of anomaly alarms…,” and “…generating a notification…,” these elements do not add meaningful limitations that integrate the abstract idea into a practical application because they are insignificant extra-solution activity, i.e., pre- and post-solution activity: data gathering (“…receiving…”) and data output (“…generating a set of anomaly alarms…,” “…generating a notification…”).
Analyzing under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B.
As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., apply it).
Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by the applicant’s own disclosure, as required by the Berkheimer Memo, in at least:
[0006]The disclosed embodiments provide systems and methods that estimate a remaining useful life (RUL) of an electronic device, which may be a utility system asset, an electro-mechanical device, or other type of electronic-based device. Although the present disclosure is described with reference to a utility system asset as an embodiment, the present systems and methods may be applied to other types of electronic devices. For example, utility system assets may include but are not limited to power transformers, switches, circuit breakers, power storage units (e.g., batteries, cells), power generating systems and/or components (e.g., power generators, solar panels, wind turbines, hydroelectric components, or other type of electronic devices. The present systems and methods may be applied in a similar manner to other electronic devices, for example, including but not limited to, vehicle components including engines, electric vehicle batteries, control systems, etc.; computing systems and computing components including smart devices, phones, laptops, servers, processors, data storage devices, displays/monitors, networking equipment, or other types of computing system-based components.
[0022]The following description is presented to enable any person skilled in the art to make and use the present embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present embodiments. Thus, the present embodiments are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
[0023]The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
[0024]The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium. Furthermore, the methods and processes described below can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
[0025]The disclosed embodiments make use of a novel “irrelevance filter,” which mimics the functionality of the human brain’s basal ganglia to facilitate improved RUL prognostics for large populations of high-cost utility grid assets, especially high-voltage transformers. Many industries are presently benefitting from a new science called "biomimicry" that analyzes nature’s best ideas and adapts them for engineering use cases. The invention disclosed herein provides an example of biomimicry.
[0026]Swedish researchers performing MRI studies on human brains discovered that the basal ganglia act as an “irrelevance filter,” which plays a crucial role in human memory and cognition. If the human brain tried to process and store all inputs coming in through the senses, the brain would be overwhelmed. The basal ganglia weeds out unnecessary information, thereby leaving only those details essential to form memories that contribute to survival of a species, such as memories associated with: acquisition of food; avoidance of danger; propagation of the species; and assurance that basic needs are met. It has been shown that humans with the best memories have highly active basal ganglia.
[0030]Hence, what is needed is an “irrelevance filter” that processes time-series signals for utility system assets that have been run to failure, and produces optimal weighting factors for an associated RUL methodology. Note that this is analogous to the functionality of a basal ganglia “filter” for a human brain, which receives large streams of neural “signals” associated with the five primary senses, and periodically “alerts” the human to patterns that have direct relevance to danger, subsistence, or propagation-of-species opportunities.
[0032]Our anomaly discovery process uses a systematic binary hypothesis technique called the “sequential probability ratio test” (SPRT) as an irrelevance filter for large volumes of time-series signals, and identifies small subsets of time-series signals that warrant further pattern-recognition analyses to facilitate anomaly detection. Hence, our new technique substantially reduces RUL-analysis costs by systematically and safely filtering anomaly alerts generated for individual utility system assets so that RUL-analysis operations are only performed for “relevant” signature patterns that are likely to be associated with incipient fault conditions.
[0034]FIG. 1 illustrates an exemplary prognostic-surveillance system 100 in accordance with the disclosed embodiments. As illustrated in FIG. 1, prognostic-surveillance system 100 operates on a set of time-series sensor signals 104 obtained from sensors in an electronic device. In one embodiment as described herein, the electronic device may be a utility system asset 102, such as a power transformer, but other electronic devices may be used. Note that time-series signals 104 can originate from any type of sensor, which can be located in a component in utility system asset 102, including: a voltage sensor; a current sensor; a pressure sensor; a rotational speed sensor; and a vibration sensor.
[0073] Referring to FIG. 1, NLNP regression model 108 and difference module 112 work together to remove (filter) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT module 116, which generates corresponding SPRT alarms 118.
[0082] Next, the system applies an irrelevance filter to the anomaly alarms (e.g., SPRT alarms) to produce a filtered anomaly alarms (e.g., SPRT alarms), wherein the irrelevance filter removes SPRT alarms for signals that are not correlated with previous failures of similar utility system assets (step 210).
[0083] The system then uses a logistic-regression model to compute an RUL-based risk index for the utility system asset based on tripping frequencies of the filtered SPRT alarms (step 212). If the risk index exceeds a risk-index threshold, the system generates a notification indicating that the electronic device has a limited remaining useful life (e.g., is near a predicted failing point) and should be replaced (step 214).
[0085] FIG. 4 presents a flow chart illustrating a process for training a logistic-regression model to predict an RUL for an asset and for configuring an associated irrelevance filter.
[0086] The irrelevance filter is also configured to remove SPRT alarms (e.g., anomaly alarms) that are not relevant (step 416). SPRT alarms that are not relevant include alarms that occur in time intervals that are not near a failure time of the asset/device (e.g., a time beyond/outside the time threshold).
[0087]Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[0088]The foregoing descriptions of embodiments have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present description to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present description. The scope of the present description is defined by the appended claims.
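For illustration only, the workflow described in the paragraphs quoted above (SPRT-style anomaly alarms, an irrelevance filter keyed to historical failure times per [0086], and a logistic-regression risk index compared against a threshold per [0083]) can be sketched as follows. All function names, coefficients, and threshold values in this sketch are hypothetical and are not drawn from the record:

```python
# Hypothetical sketch only; names, coefficients, and thresholds are illustrative,
# not values from the application or the cited references.
import math

def irrelevance_filter(alarm_times, failure_times, window):
    """Keep only alarms that occurred within `window` time units of a known failure
    (cf. [0086]: alarms outside a time threshold of a failure are removed as irrelevant)."""
    return [t for t in alarm_times
            if any(abs(t - f) <= window for f in failure_times)]

def risk_index(tripping_frequency, a=-4.0, b=8.0):
    """Toy logistic-regression risk index from a normalized alarm-tripping frequency."""
    return 1.0 / (1.0 + math.exp(-(a + b * tripping_frequency)))

alarms = [10, 11, 12, 50, 51, 95, 96, 97, 98]   # hypothetical alarm timestamps
failures = [100]                                 # hypothetical historical failure time
relevant = irrelevance_filter(alarms, failures, window=10)
freq = len(relevant) / len(alarms)               # normalized tripping frequency
risk = risk_index(freq)
notify = risk > 0.3                              # hypothetical risk-index threshold (cf. step 214)
```

Under these illustrative inputs, only the alarm cluster near the failure time survives the filter, and the resulting risk index drives the notification decision.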
Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d).
Moreover, the remaining elements of the dependent claims do not transform the recited abstract idea into a patent-eligible invention because these remaining elements merely recite further abstract limitations that provide nothing more than a narrowing of the abstract idea recited in the independent claims.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components to “apply” the recited abstract idea, perform insignificant extra-solution activity, and generally link the abstract idea to a technical environment. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication US20080140362A1 to Gross et al. (hereinafter referred to as “Gross”) in view of US Patent Publication US20160371170A1 to Salunke et al. (hereinafter referred to as “Salunke”).
As per Claim 1, Gross teaches: A method for estimating a remaining useful life, RUL, of an electronic device, wherein during a surveillance mode, the method comprises:
receiving a set of time-series signals gathered from sensors in the electronic device while the electronic device is operating; (in at least [0022] FIG. 1 illustrates real-time telemetry system 100 in accordance with an embodiment of the present invention. Real-time telemetry system 100 contains computer system 102. Computer system 102 can be any type of computer system, such as a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance. [0023] Real-time telemetry system 100 also contains telemetry device 104, which gathers telemetry signals 106 from the various sensors and monitoring tools within computer system 102, and directs telemetry signals 106 to a local or a remote location that contains remaining useful life (RUL) prediction tool 108. [0024] that telemetry signals 106 gathered by real-time telemetry system 104 can include signals associated with physical and/or software performance parameters measured through sensors within the computer system. The physical parameters can include, but are not limited to: distributed temperatures within the computer system, relative humidity, cumulative or differential vibrations within the computer system, fan speed, acoustic signals, currents, voltages, time-domain reflectometry (TDR) readings, and miscellaneous environmental variables. The software parameters can include, but are not limited to: load metrics, CPU utilization, idle time, memory utilization, disk activity, transaction latencies, system throughput, queue lengths, I/O traffic, bus saturation metrics, FIFO overflow statistics, and other performance metrics reported by the operating system.)
detecting statistical changes in the set of time-series signals that are deemed as anomalous signal patterns; (in at least [0031] The foregoing step takes away (or filters) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT mechanism 204. [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced by SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F_1, . . . , F_m], where F(t)=[F_1(t), . . . , F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t: F(t)=SPRT(R(t)).)
generating a set of anomaly alarms, wherein an anomaly alarm is generated for each of the anomalous signal patterns; (in at least [0031] The foregoing step takes away (or filters) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT mechanism 204. [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced by SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F_1, . . . , F_m], where F(t)=[F_1(t), . . . , F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t: F(t)=SPRT(R(t)). [0033] Logistic regression mechanism 206 records each instance of SPRT mechanism 204 tripping an alarm and uses these instances to determine the current alarm-tripping frequency of SPRT mechanism 204. Logistic regression mechanism 206 then calculates the RUL of the computer system in the following way. We denote the probability of system S to fail within the next T hours, given the current condition determined by the current SPRT alarm-tripping frequencies F, as p(T,F). The relationship between p and the current condition F is modeled using the linear logistic regression model: p(T,F)=1/(1+exp(−(a(T)+b_1(T)*F_1+b_2(T)*F_2+ . . . +b_m(T)*F_m))), where a(T) and b(T)=[b_1(T), . . . , b_m(T)] are estimated from historical or experimental failure data for the system. Note that the tripping frequencies are normalized to have values between 0 and 1 to simplify this calculation.)
… suspected false alarms, …; (in at least [0037] The graph in FIG. 3B presents a 2-dimensional failure model in accordance with an embodiment of the present invention. As shown in FIG. 3B, when the actual model is 1-dimensional, the addition of an extra parameter does not improve fit. However, the addition of an irrelevant prognostic parameter increases uncertainty in the RUL estimation.)
… associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident; and
generating a notification indicating an estimated remaining useful life of the electronic device based on at least the anomalous signal patterns associated with the … anomaly alarms. (in at least [0047] The system then performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
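As a purely illustrative aside, the linear logistic regression model recited in Gross [0033] can be rendered as a short sketch; the coefficient values below are hypothetical placeholders rather than values taught by Gross:

```python
# Hypothetical sketch of Gross [0033]: p(T,F) = 1/(1+exp(-(a(T)+b_1(T)*F_1+...+b_m(T)*F_m))),
# where the F_i are SPRT alarm-tripping frequencies normalized to [0, 1].
# The coefficient values a and b used below are illustrative placeholders.
import math

def failure_probability(freqs, a, b):
    """P(failure within T hours) given normalized alarm-tripping frequencies F."""
    z = a + sum(bi * fi for bi, fi in zip(b, freqs))
    return 1.0 / (1.0 + math.exp(-z))

# Two prognostic parameters; low tripping frequencies imply low failure risk.
p_healthy = failure_probability([0.0, 0.0], a=-3.0, b=[4.0, 2.0])
p_degraded = failure_probability([0.9, 0.8], a=-3.0, b=[4.0, 2.0])
```

As the tripping frequencies rise with degradation, the modeled failure probability increases, which is the behavior Gross describes for the RUL prediction.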
Although implied, Gross does not expressly disclose the following limitations, which, however, are taught by Salunke:
applying an irrelevance filter to the set of anomaly alarms to produce filtered anomaly alarms that do not include suspected false alarms, wherein the irrelevance filter removes anomaly alarms associated with one or more anomalous signal patterns that are not correlated with previous failures of similar electronic devices that are similar to the electronic device (in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. 
[0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. 
For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
wherein removing the suspected false alarms from the set of anomaly alarms, by the irrelevance filter, comprises removing a target anomaly alarm associated with an anomalous signal pattern when the anomalous signal pattern matches a similar signal pattern that was previously observed from the similar electrical devices that have operated without incident (in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. 
[0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. 
For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s). [0127] If the features from the next time window match the OOM pattern, the OOM risk is increased (operation 912). For example, the “level” of the OOM risk may be incremented each time the features from a subsequent time window match the OOM pattern. If the features match the clear condition, the OOM risk is cleared (operation 914), and monitoring of OOM risks in the virtual machine is reset. For example, the OOM risk may be cleared if the features no longer match the OOM pattern and the amount of free memory in the virtual machine is trending upward. 
If the features match neither the OOM pattern nor the clear condition, the OOM risk is flattened (operation 916). For example, the “flattened” OOM risk may represent an unchanged level of OOM risk in the virtual machine. Each update to the OOM risk in operations 912-916 may be used to update the indication of the OOM risk for the virtual machine. For example, each change in the level of OOM risk for the virtual machine may result in the generation of a corresponding alert, while alerting of the flattened OOM risk may optionally be omitted. [0127] The OOM risk may continue to be analyzed (operation 918) while the OOM risk is present. If the OOM risk is to be analyzed, the set of features is estimated within subsequent time windows (operation 908), and the OOM risk is updated accordingly (operations 910-914) until the OOM risk is cleared.)
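For clarity of record, the OOM-risk update cycle quoted above (operations 912-916 of Salunke) can be sketched in a few lines. This is an illustrative Python sketch by the examiner, not code from the reference; all names (`update_oom_risk`, `RiskAction`) are hypothetical:

```python
from enum import Enum

class RiskAction(Enum):
    INCREASED = "increased"
    CLEARED = "cleared"
    FLATTENED = "flattened"

def update_oom_risk(level, matches_oom_pattern, matches_clear_condition):
    """Update the OOM risk level for one time window.

    Mirrors operations 912-916: increment the level when the OOM pattern
    recurs, reset it when the clear condition holds, and otherwise leave
    it unchanged ("flattened").
    """
    if matches_oom_pattern:
        return level + 1, RiskAction.INCREASED
    if matches_clear_condition:
        return 0, RiskAction.CLEARED
    return level, RiskAction.FLATTENED
```

Under this reading, only the increased and cleared transitions need generate alerts; the flattened transition may be silently maintained, consistent with Salunke's optional omission of flattened-risk alerting.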
… filtered anomaly alarms…(in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. [0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. 
An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 
5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
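The data-filtering operation cited above (Salunke [0069], [0113]) — removing samples within an interval before and after known anomalous events — reduces, under the examiner's reading, to a window test against each event time. A minimal illustrative sketch, with hypothetical names and a 24-hour interval as in the reference's example:

```python
def filter_around_events(samples, event_times, interval=24.0):
    """Drop (time, value) samples falling within +/- `interval` hours of
    any known anomalous event, yielding filtered time-series data."""
    return [(t, v) for (t, v) in samples
            if all(abs(t - e) > interval for e in event_times)]
```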
Before the effective filing date of the claimed invention, in the same field of endeavor, it would have been obvious for one of ordinary skill in the art to have modified the teachings of Gross with those of Salunke, as set forth above, with a reasonable expectation of success in arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of Gross in order to, …improve the management of anomalous events by an administrator of computer system 200… to improve the performance of stateless detection of anomalous events in the virtual machine... businesses are increasingly relying on enterprise computing systems to process ever-larger volumes of electronic transactions. A failure in one of these enterprise computing systems can be disastrous, potentially resulting in millions of dollars of lost business. More importantly, a failure can seriously undermine consumer confidence in a business, making customers less likely to purchase goods and services from the business. Hence, it is important to ensure reliability and/or high availability in such enterprise computing systems… To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert…, as recited in Salunke.
As per Claim 2, Gross teaches: The method of claim 1, further comprising:
generating the estimated remaining useful life RUL using a logistic-regression model to compute a RUL-based risk index for the electronic device based on the … anomaly alarms; and (in at least [0035] FIGS. 3A-3F present a series of graphs illustrating RUL values in accordance with embodiments of the present invention. For the graphs we use T=70 hours. Each circle in the graphs represents an individual system/component from either historical or experimental data. Note that in the following graphs, a “1-dimensional” failure model is a mechanism relating a single parameter x —1 to the probability of failure of the system in next T hours. [0036] The graph in FIG. 3A presents a 1-dimensional failure model in accordance with an embodiment of the present invention. When the actual failure model is 1-dimensional, the 1-dimensional logistic regression model provides an adequate approximation to the probability of failure in next T hours given the current condition x —1. [0047] performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
when the risk index exceeds a risk-index threshold, generating a notification indicating that the electronic device has a limited remaining useful life. (in at least [0043] FIG. 4B presents a graph illustrating estimated probability density functions (PDF) of the RUL in accordance with an embodiment of the present invention. When calculating the PDFs, we used the prognostic vector [0.6 0.6 0.6]. As can be seen in FIG. 4B, the 3-variable model gives the most accurate RUL prediction, whereas the 1-variable and 2-variable models produce predictions with larger uncertainties. Thus, according to the model, a system for which all 3 SPRT tripping frequencies have values 0.6 fails on average in 20 hours and with 0.99 probability in 40 hours. Whereas the models with lesser number of parameters provide prognosis in which more than ˜30% of systems with current condition of 0.6 live beyond 40 hours. [0047] The system then performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
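The logistic-regression risk index cited from Gross [0033] follows the stated model p(T,F) = 1/(1+exp(−(a(T)+b_1(T)·F_1+…+b_m(T)·F_m))). An illustrative Python sketch of the risk-index computation and threshold notification (function names and the 0.5 default threshold are the examiner's assumptions, not the reference's):

```python
import math

def rul_risk_index(tripping_freqs, a, b):
    """p(T, F) = 1 / (1 + exp(-(a + b.F))): probability of failure within
    the next T hours given current SPRT alarm-tripping frequencies F."""
    z = a + sum(bi * fi for bi, fi in zip(b, tripping_freqs))
    return 1.0 / (1.0 + math.exp(-z))

def check_rul(tripping_freqs, a, b, threshold=0.5):
    """Generate a notification when the risk index exceeds the threshold."""
    risk = rul_risk_index(tripping_freqs, a, b)
    if risk > threshold:
        return f"WARNING: limited remaining useful life (risk={risk:.2f})"
    return None
```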
Although implied, Gross does not expressly disclose the following limitations, which are, however, taught by Salunke:

… filtered anomaly alarms…(in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. [0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. 
An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 
5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
The reason and rationale to combine Gross and Salunke are the same as recited above.
As per Claim 3, Gross teaches: The method of claim 1, wherein detecting the statistical changes in the set of time-series signals includes:
performing a sequential probability ratio test, SPRT, on the set of time-series signals or on residual signals produced from the set of time-series signals, wherein the SPRT produces SPRT alarms for the anomalous signal patterns; and (in at least [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F —1, . . . F_m], where F(t)=[F—1(t), . . . F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t:F(t)=SPRT(R(t)).)
wherein the SPRT alarms are the anomaly alarms. (in at least [0031] The foregoing step takes away (or filters) the dynamics in the signals X(t) so that the residual R(t) is a stationary random process when the system is in good condition. As the system ages or degrades due to a failure mechanism, the statistical properties of the residual change. This change is detected by SPRT mechanism 204.)
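The SPRT mechanism cited from Gross [0031]-[0032] — tripping an alarm when residuals become statistically different from the undegraded condition — can be illustrated with a standard Wald SPRT for a mean shift in nominally zero-mean residuals. This is the examiner's illustrative sketch under assumed Gaussian residuals and hypothetical parameter values, not the reference's implementation:

```python
import math

def sprt_alarms(residuals, mean_shift=1.0, var=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT for a positive mean shift in zero-mean residuals.

    Accumulates the log-likelihood ratio; an alarm is emitted (and the
    statistic reset) when it crosses the upper decision boundary, so the
    alarm-tripping frequency rises as degradation progresses.
    """
    upper = math.log((1 - beta) / alpha)   # accept-degraded boundary
    lower = math.log(beta / (1 - alpha))   # accept-healthy boundary
    llr, alarms = 0.0, []
    for i, r in enumerate(residuals):
        llr += (mean_shift / var) * (r - mean_shift / 2.0)
        if llr >= upper:
            alarms.append(i)
            llr = 0.0
        elif llr <= lower:
            llr = 0.0
    return alarms
```

A healthy (zero-mean) residual stream trips no alarms, while a degraded stream trips them at an increasing cumulative count, consistent with Gross's alarm-tripping-frequency prognostic parameters.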
As per Claim 4, Gross teaches: The method of claim 1,
wherein detecting the statistical changes in the set of time-series signals is based at least in part on detecting the statistical changes in residual signals produced from the set of time series signals; (in at least [0032] SPRT mechanism 204 applies a sequential probability ratio test to the residuals and produces an alarm when one or several residuals become statistically different from the residual corresponding to the undegraded condition of the system. As degradation progresses, the tripping frequency of the alarms produced SPRT mechanism 204 increases. We denote these alarm-tripping frequencies as F=[F —1, . . . F_m], where F(t)=[F—1(t), . . . F_m(t)] is the value of the prognostic parameters at time t. Hence, at time t: F(t)=SPRT(R(t)).)
wherein the method further comprises, prior to the detecting:
using an inferential model to generate estimated values for the RUL training set of time-series signals; (in at least [0029] NLNP regression mechanism 202 uses a multivariate state estimation technique (“MSET”) to perform the regression analysis. The term MSET as used in this specification refers to a technique that loosely represents a class of pattern recognition algorithms. For example, see [Gribok] “Use of Kernel Based Techniques for Sensor Validation in Nuclear Power Plants,” by Andrei V. Gribok, J. Wesley Hines, and Robert E. Uhrig, The Third American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation and Control and Human-Machine Interface Technologies, Washington D.C., Nov. 13-17, 2000. This paper outlines several different pattern recognition approaches. Hence, the term “MSET” as used in this specification can refer to (among other things) any technique outlined in [Gribok], including Ordinary Least Squares (OLS), Support Vector Machines (SVM), Artificial Neural Networks (ANNs), MSET, or Regularized MSET (RMSET). [0045] When a sufficient number of values have been collected, the system inputs the value into a mechanism that uses a non-linear, non-parametric regression analysis to calculate a projected value for the current sample (step 502). The system then computes a residual by subtracting the projected value from the current value (step 504).)
performing a pairwise differencing operation between actual values of the set of time-series signal and the estimated values for the set of time-series signals to produce the residual signals. (in at least [0045] When a sufficient number of values have been collected, the system inputs the value into a mechanism that uses a non-linear, non-parametric regression analysis to calculate a projected value for the current sample (step 502). The system then computes a residual by subtracting the projected value from the current value (step 504).)
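The inferential-estimate-and-difference structure cited above (Gross [0029], [0045]) can be sketched as follows. Because Gross treats "MSET" as a class of pattern recognition algorithms, the kernel-weighted estimator below is a simplistic stand-in for that class, chosen by the examiner for illustration only; all names and the bandwidth value are hypothetical:

```python
import math

def kernel_estimate(observation, memory_vectors, bandwidth=1.0):
    """Kernel-weighted estimate of the current observation from a memory
    of healthy training vectors: a crude stand-in for an MSET-class
    nonlinear, nonparametric estimator."""
    weights = [math.exp(-sum((o - mi) ** 2 for o, mi in zip(observation, m))
                        / (2 * bandwidth ** 2))
               for m in memory_vectors]
    total = sum(weights)
    return [sum(w * m[j] for w, m in zip(weights, memory_vectors)) / total
            for j in range(len(observation))]

def residual_signals(actual, memory_vectors):
    """Pairwise difference between actual and estimated signal values."""
    estimated = kernel_estimate(actual, memory_vectors)
    return [a - e for a, e in zip(actual, estimated)]
```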
As per Claim 5, Gross teaches: The method of claim 4,
wherein the inferential model comprises a Multivariate State Estimation Technique, MSET, model. (in at least [0029] NLNP regression mechanism 202 uses a multivariate state estimation technique (“MSET”) to perform the regression analysis. The term MSET as used in this specification refers to a technique that loosely represents a class of pattern recognition algorithms. For example, see [Gribok] “Use of Kernel Based Techniques for Sensor Validation in Nuclear Power Plants,” by Andrei V. Gribok, J. Wesley Hines, and Robert E. Uhrig, The Third American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation and Control and Human-Machine Interface Technologies, Washington D.C., Nov. 13-17, 2000. This paper outlines several different pattern recognition approaches. Hence, the term “MSET” as used in this specification can refer to (among other things) any technique outlined in [Gribok], including Ordinary Least Squares (OLS), Support Vector Machines (SVM), Artificial Neural Networks (ANNs), MSET, or Regularized MSET (RMSET). [0045] When a sufficient number of values have been collected, the system inputs the value into a mechanism that uses a non-linear, non-parametric regression analysis to calculate a projected value for the current sample (step 502). The system then computes a residual by subtracting the projected value from the current value (step 504).)
As per Claim 6, Gross teaches: The method of claim 1, wherein during an RUL-training mode, which precedes the surveillance mode, the method comprises:
receiving an RUL training set comprising time-series signals gathered from sensors in similar electronic devices while the similar electronic devices are run to failure; (in at least [0033] Logistic regression mechanism 206 records each instance of SPRT mechanism 204 tripping an alarm and uses these instances to determine the current alarm-tripping frequency of SPRT mechanism 204. Logistic regression mechanism 206 then calculates the RUL of the computer system in the following way. We denote the probability of system S to fail within next T hours given the current condition determined by the current SPRT alarm-tripping frequencies F as p(T,F). The relationship between the p and the current condition F is modeled using the linear logistic regression model:p(T,X)=1/(1+exp(−(a(T)+b —1(T)* F —1+b —2(T)*F —2+ . . . +b — m(T)*F — m))). where a(T) and b(T)=[b—1(T), . . . , b_m(T)] are estimated from historical or experimental failure data for the system.)
receiving associated failure times for the similar electronic devices; (in at least [0035] FIGS. 3A-3F present a series of graphs illustrating RUL values in accordance with embodiments of the present invention. For the graphs we use T=70 hours. Each circle in the graphs represents an individual system/component from either historical or experimental data. Note that in the following graphs, a “1-dimensional” failure model is a mechanism relating a single parameter x —1 to the probability of failure of the system in next T hours.)
using an inferential model to generate estimated values for the RUL training set of time-series signals; (in at least [0036] The graph in FIG. 3A presents a 1-dimensional failure model in accordance with an embodiment of the present invention. When the actual failure model is 1-dimensional, the 1-dimensional logistic regression model provides an adequate approximation to the probability of failure in next T hours given the current condition x —1.)
performing a pairwise differencing operation between actual values and the estimated values for the RUL training set of time-series signals to produce residuals; (in at least [0045] When a sufficient number of values have been collected, the system inputs the value into a mechanism that uses a non-linear, non-parametric regression analysis to calculate a projected value for the current sample (step 502). The system then computes a residual by subtracting the projected value from the current value (step 504).)
performing a sequential probability ratio test, SPRT, on the residuals to produce SPRT alarms with associated tripping frequencies; and (in at least [0046] The system next passes the residual to a mechanism that tracks the residuals using a SPRT (step 506). If the residual differs in a statistically significant way from the expected value for an undegraded computer system, the SPRT mechanism trips an alarm. The system monitors and records the frequency at which the SPRT mechanism is tripping alarms (step 508).)
training a logistic-regression model to predict an RUL for the electronic device based on correlations between the SPRT alarm tripping frequencies and the failure times for the similar electronic devices. (in at least [0047] The system then performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
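The RUL-training step cited from Gross [0033] and [0047] — estimating a(T) and b(T) from historical run-to-failure data — can be illustrated with a simple gradient-descent fit of the logistic model to (tripping-frequency vector, failed-within-T-hours label) pairs. This is the examiner's illustrative sketch; all names, the learning rate, and the epoch count are assumptions:

```python
import math

def train_rul_model(freqs, labels, lr=0.5, epochs=2000):
    """Fit a(T) and b(T) of p = 1/(1+exp(-(a + b.F))) by gradient descent
    on run-to-failure examples (label 1 = failed within T hours)."""
    a, b = 0.0, [0.0] * len(freqs[0])
    for _ in range(epochs):
        for F, y in zip(freqs, labels):
            z = a + sum(bi * fi for bi, fi in zip(b, F))
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p                 # gradient of the log-likelihood
            a += lr * err
            b = [bi + lr * err * fi for bi, fi in zip(b, F)]
    return a, b

def predict_risk(F, a, b):
    """Probability of failure within T hours for frequency vector F."""
    z = a + sum(bi * fi for bi, fi in zip(b, F))
    return 1.0 / (1.0 + math.exp(-z))
```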
As per Claim 7, Gross teaches: The method of claim 6, wherein during the RUL-training mode, the method additionally configures the irrelevance filter by:
identifying relevant SPRT alarms that were generated during a time interval near the associated failure times of a similar electronic device; and (in at least [0033] Logistic regression mechanism 206 records each instance of SPRT mechanism 204 tripping an alarm and uses these instances to determine the current alarm-tripping frequency of SPRT mechanism 204. Logistic regression mechanism 206 then calculates the RUL of the computer system in the following way. We denote the probability of system S to fail within next T hours given the current condition determined by the current SPRT alarm-tripping frequencies F as p(T,F). The relationship between the p and the current condition F is modeled using the linear logistic regression model: p(T,X)=1/(1+exp(−(a(T)+b —1(T)* F —1+b —2(T)*F —2+ . . . +b — m(T)*F — m))). where a(T) and b(T)=[b—1(T), . . . , b_m(T)] are estimated from historical or experimental failure data for the system. Note that the tripping frequencies are normalized to have values between 0 and 1 to simplify this calculation. [0035] FIGS. 3A-3F present a series of graphs illustrating RUL values in accordance with embodiments of the present invention. For the graphs we use T=70 hours. Each circle in the graphs represents an individual system/component from either historical or experimental data. Note that in the following graphs, a “1-dimensional” failure model is a mechanism relating a single parameter x —1 to the probability of failure of the system in next T hours.)
… SPRT alarms that are not relevant. (in at least [0037] The graph in FIG. 3B presents a 2-dimensional failure model in accordance with an embodiment of the present invention. As shown in FIG. 3B, when the actual model is 1-dimensional, the addition of an extra parameter does not improve fit. However, the addition of an irrelevant prognostic parameter increases uncertainty in the RUL estimation.)
Although implied, Gross does not expressly disclose the following limitations, which are, however, taught by Salunke:
… configuring the irrelevance filter to remove…(in at least [0053] To reduce the volume of overall and false alerts 234, management apparatus 230 and/or another component of signal-monitoring module 220 may suppress an alert based on further analysis of performance parameters 210 of virtual machine 204. As described in further detail below, an indication of an anomalous event may trigger the analysis of performance parameters 210 for an upward trend in the memory usage of virtual machine 204 and/or a decrease in the free memory of virtual machine 204 below a threshold. If the component detects the upward trend in memory usage and/or decrease in free memory below the threshold, the component may generate the alert. Conversely, if the memory usage is not trending upward and/or the amount of free memory is higher than the threshold, the component may suppress the alert. [0069] filtered time-series performance data 406 is generated from time-series performance data 402 by removing a subset of time-series performance data 402 around one or more known anomalous events 404 in the virtual machine. For example, one or more times of known anomalous events 404 such as OOM events and/or virtual machine restarts may be obtained from records of anomalous events 404 from the computer system, a service processor, and/or another monitoring mechanism. Subsets of time-series performance data 402 within an interval 408 (e.g., 24 hours) before and after known anomalous events 404 may then be removed from time-series performance data 402 to produce filtered time-series performance data 406. [0074] Complexity threshold 414 may be set to mitigate the generation of false alerts using statistical model 410. In particular, number of unique patterns 412 may be affected by variations in the time spent in GC (e.g., as a number of seconds per hour), number of GC invocations, and/or other metrics in filtered time-series performance data 406. 
An active virtual machine may have values for time spent in GC and number of GC invocations that vary according to fluctuations in the activity level of the virtual machine, while a virtual machine that experiences little to no activity may have many samples of zero values for the time spent in GC and number of GC invocations. A lack of activity in the virtual machine may cause statistical model 410 to learn a sparse pattern set from filtered time-series performance data 406, and any behavioral pattern that is outside the learned set may automatically be flagged as anomalous by statistical model 410. Thus, complexity threshold 414 may be set to a minimum number of unique patterns 412 learned by statistical model 410 to mitigate the subsequent generation of false positives by statistical model 410. [0102] If OOM pattern 516 is not detected in a subsequent time window, the current level of OOM risk 522 may be maintained for a pre-specified period. For example, a “flattened” OOM risk 522 may remain associated with the status of the virtual machine until OOM pattern 516 is not detected for a certain number of consecutive time windows. If features 514 in a subsequent time window 510 match a clear condition 520, OOM risk 522 may also be removed, and the status of the virtual machine may be updated with a cleared OOM risk 526. For example, the status of the virtual machine may be changed from OOM risk 522 to cleared OOM risk 526 after a statistically significant positive slope is found in free memory metric 512 for a certain number of consecutive time windows and/or the probability that free memory metric 512 drops below 50 MB is lower than 0.05%. [0103] By performing free memory trending on time-series GC data 502 based on a custom time window 510 that encompasses multiple cycles of short-term activity of the virtual machine, the trend-estimation technique of FIG. 
5 may avoid the detection of short-term trends in free memory metric 512 while allowing for detection of slow-developing OOM risks. Conversely, a conventional trend-estimation technique with a fixed time window may either produce a large number of false alarms or miss a significant number of real OOM events in the virtual machine. [0113] a subset of the performance data is removed within an interval around one or more known anomalous events to generate filtered time-series performance data (operation 704). To generate the filtered time-series performance data, one or more times of the known anomalous event(s) are obtained (e.g., from records of the known anomalous events), and portions of the machine-generated time-series performance data are removed within the interval (e.g., 24 hours) before and after the time(s).)
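For illustration only, the filtering step described in the cited paragraphs [0069] and [0113] — removing time-series samples within an interval (e.g., 24 hours) before and after each known anomalous event — can be sketched as follows. The function and variable names are hypothetical and do not appear in the cited reference.

```python
from datetime import datetime, timedelta

def filter_around_events(samples, event_times, interval_hours=24):
    """Remove time-series samples within +/- interval_hours of each known
    anomalous event, yielding filtered time-series performance data
    (cf. cited paragraphs [0069] and [0113])."""
    interval = timedelta(hours=interval_hours)
    return [
        (t, value) for (t, value) in samples
        if all(abs(t - e) > interval for e in event_times)
    ]

# Hypothetical example: samples every 6 hours with one known OOM event at hour 36.
base = datetime(2024, 1, 1)
samples = [(base + timedelta(hours=h), h) for h in range(0, 72, 6)]
oom_events = [base + timedelta(hours=36)]
filtered = filter_around_events(samples, oom_events)
# Only samples more than 24 hours from the event remain (hours 0, 6, 66).
```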
The reasons and rationale to combine Gross and Salunke are the same as recited above.
As per Claim 8, Gross teaches: The method of claim 7,
wherein while training the logistic-regression model to predict the RUL for the electronic device, the method considers SPRT alarm tripping frequencies associated with relevant SPRT alarms. (in at least [0047] performs a linear logistic regression analysis using the SPRT alarm-tripping frequency (step 510). From this analysis, the system returns a prediction for the remaining useful life (step 512).)
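For illustration only, the logistic-regression analysis of Gross [0047] — relating SPRT alarm-tripping frequency to device outcome — can be sketched as a one-feature logistic model. The fitting method (simple gradient descent) and all names below are hypothetical assumptions, as the cited paragraph does not specify them.

```python
import math

def train_logistic(freqs, failed, lr=0.1, epochs=2000):
    """Fit a one-feature logistic model p(failure) = sigmoid(w*f + b) to
    historical SPRT alarm-tripping frequencies and observed failure
    outcomes, using stochastic gradient descent (hypothetical sketch)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for f, y in zip(freqs, failed):
            p = 1.0 / (1.0 + math.exp(-(w * f + b)))
            w -= lr * (p - y) * f
            b -= lr * (p - y)
    return w, b

def predict_failure_prob(w, b, freq):
    """Predicted failure probability for a given alarm-tripping frequency."""
    return 1.0 / (1.0 + math.exp(-(w * freq + b)))

# Hypothetical historical data: higher tripping frequency correlates with failure.
freqs = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
failed = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(freqs, failed)
```

In such a model, a low alarm-tripping frequency maps to a low failure probability (longer remaining useful life) and a high frequency to a high one.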
As per Claim 9, Gross teaches: The method of claim 1,
wherein the time-series signals gathered from sensors in the electronic device include signals specifying one or more combinations of the following: temperatures; currents; voltages; resistances; capacitances; vibrations; dissolved gas metrics; cooling system parameters; and control signals. (in at least [0024] telemetry signals 106 gathered by real-time telemetry system 104 can include signals associated with physical and/or software performance parameters measured through sensors within the computer system. The physical parameters can include, but are not limited to: distributed temperatures within the computer system, relative humidity, cumulative or differential vibrations within the computer system, fan speed, acoustic signals, currents, voltages, time-domain reflectometry (TDR) readings, and miscellaneous environmental variables. The software parameters can include, but are not limited to: load metrics, CPU utilization, idle time, memory utilization, disk activity, transaction latencies, system throughput, queue lengths, I/O traffic, bus saturation metrics, FIFO overflow statistics, and other performance metrics reported by the operating system.)
As per Claim 10, Gross teaches: The method of claim 1,
wherein the electronic device is a utility system asset, a vehicle component, or a computing system device. (in at least [0022] FIG. 1 illustrates real-time telemetry system 100 in accordance with an embodiment of the present invention. Real-time telemetry system 100 contains computer system 102. Computer system 102 can be any type of computer system, such as a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a personal organizer, a device controller, and a computational engine within an appliance.)
As per Claims 11-17, directed to a non-transitory computer-readable medium (see at least Gross [0022]-[0026]), these claims substantially recite the subject matter of Claims 1-4 and 6-8 and are rejected based on the same reasoning and rationale.
As per Claims 18-20, directed to a system…a notification mechanism configured to execute on the at least one processor, wherein the notification mechanism is configured to iteratively…(see at least Gross [0022]-[0026], [0035]), these claims substantially recite the subject matter of Claims 1, 3, and 6 and are rejected based on the same reasoning and rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PO HAN MAX LEE whose telephone number is (571)272-3821. The examiner can normally be reached on Mon-Thurs 8:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached on (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PO HAN LEE/Primary Examiner, Art Unit 3623