DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 8-9, 12, 14, 17-19, 24-25, 28, 30, and 33 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dinh et al. (US 2020/0250559) (hereinafter Dinh).
Regarding claims 1, 17, and 33, Dinh teaches a method, system (fig. 1, system 105), and non-transitory media (fig. 1, memory 112) for automatically managing an event in a cloud system, comprising:
determining a candidate action to be applied to the cloud system for managing the event (ph. [0052], “After the system is provided with exhaustive labels to cover all possible root cause analyses, the remediation process may be automated by automatically identifying labels for root cause analysis of each new predicted anomalous system behavior based on, e.g., image processing ML algorithms. This allows, for example, the system to automatically choose an appropriate remediation action, as the labels are tied to exclusive and relevant remediation actions from the historical data.”; Fig. 2, data set 206; ph. [0053], “In some example embodiments, dataset 206 includes one or more logs (e.g., support and/or system logs) that are formatted in an ‘event-action-end event’ format, for example.”);
applying the candidate action to a model of the cloud system (fig. 2, action-reward valuation 240; ph. [0054], “A reward valuation may be performed for each action taken to generate a state-reward policy 242 for each state-root cause combination. Each time an action is performed, the state-reward policy 242 is updated based on if, and by how much, the action successfully remediated the risk associated with the state. As a non-limiting example, the inference model may identify a start state and an end state on dataset 206, and assign each start state and each end state with a risk probability. In this way, the reward evaluation may be equal to the difference in risk probability between the start state and the end state.”); and
upon determining that the model of the cloud system meets at least one performance indicator (ph. [0050], “for each event that the inference model predicts as anomalous system behavior, corresponding independent variables are labeled if any independent statistical anomalous behavior is indicated for the same.”) and that the candidate action is a proved action (fig. 2, recommended action(s) 243 generated by action-reward valuation 240), applying the proved action to the cloud system (fig. 5, step 510, execute one or more actions of the action policy in response to the prediction).
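For orientation only, the reward valuation Dinh describes in the passage quoted above (para. [0054]) can be sketched as follows. All identifiers here are hypothetical; this is an illustrative reading of the cited disclosure, not Dinh's implementation:

```python
# Illustrative sketch (names hypothetical, not from Dinh): the reward for an
# action is the difference in risk probability between the start state and
# the end state, and a state-reward policy keyed by (state, root cause) is
# updated each time an action is performed.

def reward(start_risk: float, end_risk: float) -> float:
    """Reward = risk probability at the start state minus at the end state."""
    return start_risk - end_risk

# Hypothetical stand-in for the state-reward policy (cf. Dinh's item 242).
policy: dict[tuple[str, str], dict[str, float]] = {}

def update_policy(state: str, root_cause: str, action: str,
                  start_risk: float, end_risk: float) -> None:
    """Accumulate the observed reward for an action under a state-root cause pair."""
    actions = policy.setdefault((state, root_cause), {})
    actions[action] = actions.get(action, 0.0) + reward(start_risk, end_risk)

# An action that lowers risk from 0.9 to 0.2 earns a reward of 0.7.
update_policy("high_cpu", "runaway_process", "restart_service", 0.9, 0.2)
```

Under this reading, actions with the highest accumulated reward for a given state-root cause pair would be the "recommended action(s)" of Dinh's figure 2.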
Regarding claims 2 and 18, Dinh teaches the method of claim 1 and system of claim 17, wherein the event is detected or predicted (fig. 5, step 504, predict when the system will enter one or more adverse states or entering the adverse state based on at least a subset of the set of performance indicators).
Regarding claims 3 and 19, Dinh teaches the method of claim 2 and system of claim 18, wherein the event is predicted by feeding online data collected from the cloud system to a model trained by machine learning using data samples of previous events and getting an output from the model predicting the event (fig. 2, dataset 204 fed to model trained by machine learning using data samples of previous events; fig. 5, step 504, generate an inference model to predict when the system will enter one or more adverse states based at least on a given value of the target variable and identify one or more root causes of the system entering the adverse state based at least a subset of the set of performance indicators, wherein generating comprises generating a linear model and a non-linear model based on at least a portion of the first set of system log data.).
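The mapped pattern of claims 3 and 19, namely training a model on data samples of previous events and then feeding it online data collected from the cloud system to obtain a prediction, can be sketched minimally as follows. The classifier is a toy stand-in chosen for brevity, not Dinh's inference model, and all names are hypothetical:

```python
# Toy sketch of "train on historical event samples, then predict from online
# data" (hypothetical; not Dinh's linear/non-linear inference model).

from statistics import mean

class ToyEventPredictor:
    def fit(self, samples: list[float], labels: list[bool]) -> "ToyEventPredictor":
        # Historical metric samples, labeled True when an event occurred.
        self.normal_mean = mean(x for x, y in zip(samples, labels) if not y)
        self.anomalous_mean = mean(x for x, y in zip(samples, labels) if y)
        return self

    def predict(self, online_value: float) -> bool:
        # Predict an event when the online metric is closer to the
        # historical anomalous mean than to the normal mean.
        return (abs(online_value - self.anomalous_mean)
                < abs(online_value - self.normal_mean))

model = ToyEventPredictor().fit([0.2, 0.3, 0.9, 0.95], [False, False, True, True])
print(model.predict(0.85))  # closer to the anomalous mean -> True
```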
Regarding claims 8 and 24, Dinh teaches the method of claim 1 and system of claim 17, wherein the model of the cloud system is a formal model of the cloud system, the formal model describing logical connections between blocks of the cloud system (ph. [0049]-[0050], “the linear model and the non-linear model can be combined to generate an ensemble model (which is referred to herein as an inference model) that can be used to probabilistically predict when a state change will occur and also to recognize key variables for identifying root causes associated with the states, as indicated by block 233…using at least one ensemble modeling technique which assumes the higher of the classifier probability of an anomalous state scored through both linear and non-linear model as the scale for predicted anomalous behavior. Also, in one or more embodiments, for each event that the inference model predicts as anomalous system behavior, corresponding independent variables are labeled if any independent statistical anomalous behavior is indicated for the same.”), and wherein applying the candidate action to the model of the cloud system comprises applying the candidate action to the formal model of the cloud system (ph. [0054], “The one or more actions may correspond to actions performed by a support team to remediate those issues, such as by reducing the overall risk to the system. A reward valuation may be performed for each action taken to generate a state-reward policy 242 for each state-root cause combination. Each time an action is performed, the state-reward policy 242 is updated based on if, and by how much, the action successfully remediated the risk associated with the state. As a non-limiting example, the inference model may identify a start state and an end state on dataset 206, and assign each start state and each end state with a risk probability.”).
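The ensemble technique quoted above from Dinh's paragraphs [0049]-[0050], taking the higher of the linear and non-linear classifiers' anomaly probabilities as the predicted anomalous-behavior score, reduces to the following sketch. The function names and threshold are hypothetical illustrations only:

```python
# Illustrative sketch of Dinh's quoted ensemble rule: the inference model
# scores anomalous behavior as the higher of the two classifier
# probabilities. Names and the 0.5 threshold are hypothetical.

def ensemble_score(linear_prob: float, nonlinear_prob: float) -> float:
    """Take the higher anomaly probability of the two constituent models."""
    return max(linear_prob, nonlinear_prob)

def is_anomalous(linear_prob: float, nonlinear_prob: float,
                 threshold: float = 0.5) -> bool:
    return ensemble_score(linear_prob, nonlinear_prob) > threshold

print(is_anomalous(0.3, 0.8))  # non-linear model dominates -> True
```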
Regarding claims 9 and 25, Dinh teaches the method of claim 1 and system of claim 17, wherein the at least one performance indicator comprises key performance indicators (KPIs) that reflect characteristics of the cloud system when functioning in a normal state, wherein the KPIs are monitored through metrics that are used to track deviations in the cloud system, and wherein the metrics comprise at least one of central processing unit (CPU) load, storage usage, memory usage, Input/Output usage, temperature, or node used capacity (ph. [0060], “The set of performance indicators may include one or more of: at least one performance indicator corresponding to usage of a processor”).
Regarding claims 12 and 28, Dinh teaches the method of claim 1 and system of claim 17, wherein applying the proved action to the cloud system comprises getting feedback from the cloud system to determine if the event was properly handled (fig. 2, feedback loop from recommended action(s) 243 to action-reward valuation 240; ph. [0065], “Additionally, the techniques depicted in FIG. 5 may include using the machine reinforcement learning to update the action policy in response to performance of one of the actions.”).
Regarding claims 14 and 30, Dinh teaches the method of claim 1 and system of claim 17, wherein the event is a fault, a change in a performance indicator or a security alarm (ph. [0060], “The set of performance indicators may include one or more of: at least one performance indicator corresponding to usage of a processor; at least one performance indicator corresponding to availability of a processor; at least one performance indicator corresponding to a response time; at least one performance indicator corresponding to a number of timeouts associated with a service-oriented architecture platform; and at least one performance indicator corresponding to a number of errors associated with a service-oriented architecture platform.”).
Allowable Subject Matter
Claims 4-7, 13, 20-23, and 29 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 4 and 20, the art of record neither teaches nor renders obvious the specific combination of limitations in claims 4 and 20 when combined with claims 1 and 17.
Claims 5 and 21 would be allowable as depending from and including the limitations of claims 4 and 20.
Regarding claims 6 and 22, the art of record neither teaches nor renders obvious the specific combination of limitations in claims 6 and 22 when combined with claims 1 and 17.
Claims 7 and 23 would be allowable as depending from and including the limitations of claims 6 and 22.
Regarding claims 13 and 29, the art of record neither teaches nor renders obvious the specific combination of limitations in claims 13 and 29 when combined with claims 12 and 28.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Huang et al. (US 2020/0358683) teaches composite KPIs for network health monitoring.
Dinh et al. (US 2021/0135966) teaches streaming and event management infrastructure performance prediction.
Dome et al. (US 2019/0095265) teaches intelligent preventative maintenance of critical applications in cloud environments.
Gossler (US 2017/0308424) teaches automatically determining causes of the malfunction of a system made up of a plurality of hardware or software components.
Bhan et al. (US Pat. 7,894,361) teaches network capacity engineering.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN W WATHEN whose telephone number is (571) 270-5570. The examiner can normally be reached M-F, 9:00 am-5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached at 571-272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BRIAN W. WATHEN
Primary Examiner
Art Unit 2151
/BRIAN W WATHEN/ Primary Examiner, Art Unit 2151