Prosecution Insights
Last updated: April 19, 2026
Application No. 18/127,672

AUTOMATED ENTERPRISE INFORMATION TECHNOLOGY ALERTING SYSTEM

Final Rejection: §101 & §103
Filed: Mar 29, 2023
Examiner: WALTON, CHESIREE A
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Omnissa LLC
OA Round: 4 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 5-6
To Grant: 3y 5m
With Interview: 58%

Examiner Intelligence

Career Allow Rate: 30% (63 granted / 211 resolved; -22.1% vs TC avg)
Interview Lift: +28.6% (strong), based on resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 52 applications currently pending
Career History: 263 total applications across all art units
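For readers who want to reproduce these cards from raw prosecution data, the sketch below shows the arithmetic behind the allow-rate and interview-lift figures. It is a minimal illustration, assuming a hypothetical ResolvedCase record; the field names and the synthetic interview split are invented for the example, and only the 63 granted / 211 resolved count comes from the card above.

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # at least one examiner interview of record

def allow_rate(cases):
    """Career allow rate: share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Synthetic docket: 211 resolved cases, 63 granted, matching the card above.
# The interview flags are assigned arbitrarily, so the lift printed here is
# illustrative only, not the +28.6% shown on the card.
docket = [ResolvedCase(granted=i < 63, had_interview=i % 3 == 0)
          for i in range(211)]
print(f"allow rate: {allow_rate(docket):.1%}")        # 29.9%, displayed as 30%
print(f"interview lift: {interview_lift(docket):+.1%}")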

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 5.6% (-34.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 211 resolved cases
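Because the four percentages sum to roughly 100%, one plausible reading is that each is the statute's share of this examiner's asserted rejections. A minimal sketch of that computation, under that assumption and with made-up input (real data would be parsed from office-action records):

from collections import Counter

# Each entry lists the statutes asserted in one office action.
# Illustrative input only; real data would be parsed from OA documents.
office_actions = [
    ["101", "103"],
    ["103"],
    ["101", "103", "112"],
    ["102"],
]

def statute_shares(office_actions):
    """Share of all asserted rejections attributable to each statute."""
    counts = Counter(statute for oa in office_actions for statute in oa)
    total = sum(counts.values())
    return {statute: n / total for statute, n in counts.items()}

for statute, share in sorted(statute_shares(office_actions).items()):
    print(f"§{statute}: {share:.1%}")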

Office Action

§101 §103
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Final Office action for Application Serial Number 18/127,672, filed on March 29, 2023. In response to Examiner's Non-Final Office Action of October 7, 2024, Applicant, on January 7, 2025, amended claims 1, 5-6, 8, 12-13, 15, 19 and 21; and cancelled claims 4, 7, 11, 14, 18, and 20. Claims 1-2, 5-6, 8-9, 12-13, 15-16, 19, and 21-22 are pending in this application and have been rejected below.

Response to Amendment

Applicant's amendments are acknowledged. Regarding the 35 U.S.C. § 101 rejection, the amendment has been considered but is insufficient to overcome the rejection. The 35 U.S.C. § 103 rejections are hereby revised pursuant to Applicant's amendments, and updated 35 U.S.C. § 103 rejections have been applied to the amended claims. Please refer to the § 103 rejection for further explanation and rationale.

Response to Arguments

Applicant's arguments filed January 7, 2025 have been fully considered but they are not persuasive and/or are moot in view of the revised rejections. Applicant's arguments will be addressed herein below in the order in which they appear in the response filed January 7, 2025.

On pages 9-10 of the Remarks regarding 35 U.S.C. § 101, Applicant states that the amended claim integrates any alleged abstract idea into a practical application by reciting specific feature engineering, model training, and anomaly scoring operations that improve the technical operation of a telemetry monitoring system. In response, regarding the 35 U.S.C. § 101 rejection, the present claims amount to no more than utilizing computer components as tools to perform telemetry data analysis. Examiner finds the present claims improve an existing business process of event alerting/notification, and there is currently no functional advancement to any technology or technological field that would allow the claim elements to be considered significantly more than the abstract idea itself. Utilizing computer structure and technology to collect, analyze, and notify users of data change events is, both individually and in combination, a set of generic computer functions such as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. See MPEP 2106.05(d)(II).

On pages 10-11, regarding the 35 U.S.C. § 103 rejection, Applicant argues that the cited references in combination fail to teach or disclose all of the above features. In particular, the amended claim requires assigning time-series data to a time-series group based on a specific combination of attributes, where the group explicitly includes data originating from client devices associated with multiple different organizations, and further requires retrieving a time-series forecasting model that is trained using aggregated historical time-series data from those multiple organizations. None of the cited references disclose or suggest (1) training a single forecasting model on aggregated telemetry data spanning multiple different organizations based on shared attribute combinations or (2) learning an error distribution associated with a forecasting model trained on cross-organizational data and using that learned distribution as the basis for anomaly scoring. In response, new grounds of rejection are made, necessitated by amendment (see MPEP 706.07(a)), in which Azeez is applied to the independent claims. Regarding the 35 U.S.C. § 103 rejection, Applicant's arguments with respect to the claims have been considered but are moot in view of the new grounds of rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 5-6, 8-9, 12-13, 15-16, 19, and 21-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-2, 5-6, 8-9, 12-13, 15-16, 19, and 21-22 are directed to alerting enterprise technology information. Claim 1 recites a system for alerting enterprise technology information, Claim 8 recites an article of manufacture for alerting enterprise technology information, and Claim 15 recites a method for alerting enterprise technology information, which recite: collecting telemetry data from a plurality of client devices associated with a plurality of different organizations and storing the collected telemetry data, wherein the telemetry data is continuously collected, and wherein the telemetry data includes one or more of: device performance data, device health data, application performance data, application usage data, network performance data, or network health data; at periodic intervals, retrieving the telemetry data and processing the telemetry data to produce time-series data associated with a number of occurrences of a type of event over a predefined period of time; generating, for the time-series data, a plurality of features comprising one or more of: scaled lag features, rolling features, and time-based features associated with the predefined period of time; assigning the time-series data to a time-series group from a plurality of time-series groups based on a specific combination of attributes associated with the telemetry data and the type of event, wherein the time-series group includes the time-series data originating from the client devices associated with the plurality of different organizations that share the specific combination of attributes; retrieving a time-series forecasting model from a plurality of time-series forecasting models associated with the time-series group; applying the time-series data to the particular time-series forecasting model to produce a predicted value of the number of occurrences of the type of event for the time-series data; generating an anomaly score by comparing the predicted value of the number of occurrences of the type of event to an observed value of the number of occurrences of the type of event in the time-series data relative to the learned error distribution associated with the time-series forecasting model; and generating an alert in an instance in which the anomaly score meets or exceeds a predefined threshold, wherein the alert contains an indication of an anomaly associated with the anomaly score.
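To make the quoted claim language concrete for technical readers, here is a minimal sketch of the recited pipeline: grouping time-series by a shared attribute combination across organizations, lag/rolling/time-based feature generation, forecasting, an anomaly score taken relative to a learned error distribution, and threshold-based alerting. Everything below is illustrative and assumed (a trailing-mean stand-in forecaster, invented attribute names and counts); it is not the applicant's implementation and not anything taught by the cited references.

import statistics
from collections import defaultdict

def group_key(attrs):
    """Mirrors the claimed 'specific combination of attributes'."""
    return (attrs["platform"], attrs["app"], attrs["geo"], attrs["event_type"])

def make_features(counts, lag=1, window=3):
    """Scaled lag, rolling, and time-based features for the latest point."""
    scale = max(counts) or 1
    return [
        counts[-1 - lag] / scale,           # scaled lag feature
        statistics.mean(counts[-window:]),  # rolling feature
        (len(counts) - 1) % 24,             # time-based feature (hour-of-day slot)
    ]

def forecast(history, window=3):
    """Stand-in forecaster: trailing mean (a real system would apply a trained model)."""
    return statistics.mean(history[-window:])

def learn_error_distribution(history, window=3):
    """Mean/stdev of forecaster residuals over the aggregated history."""
    residuals = [history[i] - forecast(history[:i], window)
                 for i in range(window, len(history))]
    return statistics.mean(residuals), statistics.pstdev(residuals) or 1.0

def anomaly_score(observed, predicted, err_mu, err_sigma):
    """|z|-score of the forecast error against the learned error distribution."""
    return abs((observed - predicted - err_mu) / err_sigma)

# One time-series group aggregates event counts from multiple organizations
# that share the same attribute combination (the cross-organizational point
# argued above). Counts are invented for illustration.
group_history = defaultdict(list)
attrs = {"platform": "windows", "app": "mail", "geo": "us", "event_type": "app_crash"}
for org_counts in ([4, 5, 6, 5], [5, 4, 6, 30]):  # two organizations' counts
    group_history[group_key(attrs)].extend(org_counts)

for key, history in group_history.items():
    err_mu, err_sigma = learn_error_distribution(history[:-1])
    features = make_features(history[:-1])  # a trained model would consume these
    predicted = forecast(history[:-1])
    observed = history[-1]
    score = anomaly_score(observed, predicted, err_mu, err_sigma)
    if score >= 3.0:  # predefined threshold -> generate an alert
        print(f"ALERT {key}: observed={observed}, predicted={predicted:.1f}, "
              f"score={score:.1f}")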
As drafted, this is, under its broadest reasonable interpretation, within the abstract idea groupings of Mental Processes ("evaluation") and Methods of Organizing Human Activity (managing interactions). The recitation of "computing device", "client device", "network", "data lake", "processor", "memory", and "computer readable medium" provides nothing in the claim elements to preclude the steps from being Mental Processes (evaluation) and Methods of Organizing Human Activity (managing interactions). Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. The claims primarily recite the additional element of using computer components to perform each step. The "computing device", "client device", "network", "data lake", "processor", "memory", "user interface component", and "computer readable medium" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a computer component. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims also fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. In particular, there is a lack of improvement to a computer or to the technical field of data analytics.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "computing device", "client device", "network", "data lake", "processor", "memory", and "computer readable medium" are insufficient to amount to significantly more. (See MPEP 2106.05(f) - Mere Instructions to Apply an Exception - "Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 2358.) Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55.

Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself. With regard to collecting data and Step 2B, see MPEP 2106.05(d): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information), and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Examiner concludes that the additional elements in combination fail to amount to significantly more than the abstract idea based on findings that each element merely performs the same function(s) in combination as each element performs separately. The claim is not patent eligible. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.

Dependent Claims 2, 5-6, 9, 12-13, 16, 19 and 21-22 recite: the type of event comprises at least one of a system crash, an application crash, a device boot time, a device shutdown time, an application hang, an application foreground event, battery utilization, device central processing unit (CPU) utilization, device memory utilization, a virtual desktop session logon duration time, a failed application installation; the specific combination of attributes comprises two or more of: a system platform, an organization identifier, an application identifier, or a geographic location; send the alert to the administrator client device via a push notification; the time-series forecasting model is trained using historical data retrieved from the data lake for a periodic training interval that is longer than the predefined period of time for the time-series data for which the anomaly score is generated; and the plurality of client devices are enrolled for management with a management service, wherein each of the plurality of client devices includes a management component that is configured to transmit the telemetry data to the management service over the network, further narrowing the abstract idea. These recited limitations in the dependent claims do not amount to significantly more than the above-identified judicial exceptions in Claims 1, 8 and 15. Regarding Claims 6, 21-22 and the additional elements of "processor", "computing device", "data lake", "client device" and "network", see MPEP 2106.05(d): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-6, 8-9, 12-13, 15-16, 19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Hamilton et al., US Publication No. 20240202187A1 [hereinafter Hamilton], in view of Higginson et al., US Publication No. 20230195591A1 [hereinafter Higginson], in further view of Azeez et al., US Publication No. 20240036963A1 [hereinafter Azeez].

Regarding Claim 1, Hamilton teaches A system, comprising: a computing device comprising a processor and a memory; and machine-readable instructions stored in the memory which, when executed by the processor, cause the computing device to at least: collect telemetry data from a plurality of client devices associated with a plurality of different organizations and store the collected telemetry data in a data lake, wherein the telemetry data is continuously collected from the plurality of client devices over a network, wherein the telemetry data includes one or more of: device performance data, device health data, application performance data, application usage data, network performance data, or network health data (Hamilton Abstract - "A system includes at least one hardware processor and at least one memory storing instructions that cause the at least one hardware processor to perform operations. The operations include configuring a processing stack in an execution node process. The processing stack includes a telemetry application programming interface (API). At least one configuration of a trace event is retrieved using an API call received by the execution node process. Telemetry information of the trace event is collected using the telemetry API based on the at least one configuration. An event table is updated based on the telemetry information."; Par. 26-28 - "As used herein, the term 'trace event' (TE) indicates one or more values representing a structured payload occurring at an arbitrary point in time. Trace events can be used to capture aperiodic events, similar to logs. Unlike logs, trace events have a structured payload format that includes a set of key-value pairs. This makes them more suitable for capturing counter data for aggregation at a later query phase. In some aspects, trace events can be used for usage analysis with relatively low volume and, therefore, are associated with higher quality of service (QoS) applications. In some aspects, trace events can be auto-instrumented without additional API calls."; Par. 112-113; Par. 32-33 - "The cloud computing platform 101 may include a three-tier architecture: data storage (e.g., storage platforms 104 and 122), an execution platform 110 (e.g., providing query processing), and a compute service manager 108 providing cloud services. It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations)."; Par. 179); at periodic intervals, retrieve the telemetry data from the data lake and process the telemetry data to produce time-series data associated with a number of occurrences of a type of event across the plurality of client devices over a predefined period of time (Hamilton Par. 25; Par. 32-33; Par. 37 - "The compute service manager 108 can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager 108."; Par. 93);

Hamilton teaches telemetry analysis and the feature is expounded upon by Higginson: generate, for the time-series data, a plurality of features comprising one or more of: scaled lag features, rolling features, and time-based features associated with the predefined period of time (Higginson Par. 110 - The system analyzes the correlogram data to determine a candidate set of parameter values to be used by the time-series models (Operation 510). The system analyzes the correlogram data to determine a candidate set of autoregressive terms to be used by the time-series models. For example, using an autocorrelation function, a set of time-series data is copied, and the copy is adjusted to lag the original set of time-series data. By comparing the original set of time-series data with multiple copies having different lag intervals, the system identifies sets of parameter values for time-series models that are likely to result in the most accurate predictions.); assign the time-series data to a time-series group from a plurality of time-series groups based on a specific combination of attributes associated with the telemetry data and the type of event, wherein the time-series group includes the time-series data originating from the client devices associated with the plurality of different organizations that share the specific combination of attributes (Higginson Par. 28-30 - A system utilizes time-series machine learning models to forecast workloads of computing resources in a computing system. Time-series machine learning models are defined by parameters, so that changing parameter values changes a response by the model to a set of input data. A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system. Tens or hundreds of combinations of parameters could be applied to a time-series model to generate predictions. In addition, when a system identifies related workloads, training and testing machine learning models for the different related workloads results in thousands or tens of thousands of permutations of parameter values. However, attempting to train tens, hundreds, or thousands of models to generate workload forecasts for a computing system takes too much time to be useful and consumes too many computing resources to be practical. The system creates a candidate set of time-series models for forecasting computing workloads by filtering the sets of parameter values for the models from tens, hundreds, or thousands of sets to a number that meets system performance specifications for generating forecasts. A candidate set of time-series models includes multiple versions of the same time-series model. The multiple versions are associated with respective sets of parameter values.
For example, two different models include the same parameter types but different values for the parameters. The system trains the candidate set of time-series models with a training data set. The system selects the best-performing time-series model to generate forecasts for a particular computing resource in a computing system."; Par. 38; Par. 7; Par. 160); retrieve a time-series forecasting model from a plurality of time-series forecasting models associated with the time-series group, wherein the time-series forecasting model is trained using the plurality of features derived from aggregated historical time-series data of the plurality of different organizations included in the time-series group and wherein the time-series forecasting model is associated with an error distribution learned based on the aggregated historical time-series data of the plurality of different organizations included in the time-series group (Higginson Par. 28 - "A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system."; Par. 40-44 - The model parameter selection engine 181 further selects candidate parameter values by analyzing the historical time-series data to determine whether the time-series data includes seasonal patterns, multi-seasonal patterns, trends, and outliers or shocks. Based on the identified characteristics of the historical time-series data, the model parameter selection engine 181 selects particular time-series models that are likely a good fit for the historical data. For example, a model parameter selection engine 181 may compute the ACF/PACF and identify which parameters for time-series models are most likely to result in accurate forecasts. Accordingly, the ACF and/or PACF calculations filter to reduce a number of iterations of time-series model parameters the system tests to predict future workflow values. This filtering technique reduces the parameters of the time-series models (p, d, q, P, D, Q, f) and their combinations of the SARIMAX-type model, to be trained to the historical data.; Par. 51; Par. 70; Par. 117; Par. 127); apply the time-series data to the time-series forecasting model to produce a predicted value of the number of occurrences of the type of event for the time-series data (Higginson Par. 29-31 - The system may predict that no anomaly will occur when the system forecasts only one of the processors will exceed a threshold processing capacity value. Conversely, the system may predict an anomaly will occur if both processors are forecasted to simultaneously exceed respective processing capacity values.; Par. 72-73 - The monitoring module 131 includes functionality to predict anomalies based on comparisons of forecasts generated by the workload forecast module 160 with corresponding thresholds. For example, thresholds may represent limits on utilization of resources by the entities and/or service level objectives for performance metrics associated with the entities. When a forecasted metric violates (e.g., exceeds) a corresponding threshold, monitoring module 131 may detect a potential future anomaly, error, outage, and/or failure in the operation of hardware and/or software resources associated with the entity. For example, the entity within a topology that makes up a system may suffer a fault that is reflected in the time-series data as a spike or growth/trend. The prediction from the models can pick up this sudden change in resource utilization that is reflected to the user identifying a "change" in usage that requires investigation.; Par. 97 - "If the system determines that the target workload is not part of a workload cluster, the system generates a workload forecast for the target workload in response to the request to generate a forecast for the target entity workload (Operation 308). The system generates the forecast for the target workload by applying a time-series model trained on a set of attributes associated with the target entity to time-series attribute data from the target entity.")

Hamilton and Higginson are directed to time-series data analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton, as taught by Higginson, by utilizing additional time-series forecasting analysis, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton with the motivation of improving the accuracy of the models and allowing the models to be adapted to various types of time-series data collected from the monitored systems (Higginson Par. 48).

Hamilton in view of Higginson teach data analytics and the feature is expounded upon by Azeez: generate an anomaly score by comparing the predicted value of the number of occurrences of the type of event to an observed value of the number of occurrences of the type of event in the time-series data relative to the learned error distribution associated with the time-series forecasting model (Azeez Par. 38 - The ML models 120A-N may each be trained using respective machine learning techniques to detect respective behaviors of the metric 101. For example, the ML model 120A may be trained to learn seasonality and trend in historical data values used as training data for the metric 101. To do so, ML model 120A may be trained to analyze a time series of values for the metric 101 to learn seasonality and trends in the time series. For example, the ML model 120A may analyze seasonality and trend in the data to generate feature prediction. Thus, for a given point in time, the ML model 120A may determine whether a data value being assessed at the given point is anomalous compared to similar points in time (seasonally or trend adjusted). ML model 120A may also generate an upper bound and a lower bound by analyzing uncertainty based on the historical seasonal and trend data. If an analyzed input data value for the metric 101 is outside the upper and lower bounds, then the ML model 120A may determine that the data value of the metric 101 is anomalous.; Par. 40-52; Par. 138-140) and generate an alert in an instance in which the anomaly score meets or exceeds a predefined threshold, wherein the alert causes a user interface component to be displayed on an administrator client device containing an indication of an anomaly associated with the anomaly score (Azeez Par. 64; Par. 148-149 - At 614, the method 600 may include generating for display an indication of the mitigative action and the identified source of the data value based on the stored association.
For example, the UI subsystem 140 may generate data, for display via a user interface of a client device 160, an indication of the mitigative action and identified source… Furthermore, the mitigative actions provided to various engineers may result from the same core issue. For example, an application that causes an anomaly may cause anomalous readings across a range of sources. In this example, the application may cause an endless loop of calls to an application service, which may make network calls to a server device. The endless loop of calls may therefore cause anomalous readings across a different range of sources. Each party responsible for each of the different range of sources may be alerted to the anomalous readings to aid in troubleshooting and mitigative efforts.; Par. 152)

Hamilton, Higginson and Azeez are directed to time-series data analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton in view of Higginson, as taught by Azeez, by utilizing notifications, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton in view of Higginson with the motivation of detecting anomalies across multiple different types of metrics (Azeez Abstract).

Regarding Claim 2, Claim 9 and Claim 16, Hamilton in view of Higginson in further view of Azeez teach the system of claim 1, …, the non-transitory computer-readable medium of claim 8, … and the computer-implemented method of claim 15, … wherein the type of event comprises at least one of a system crash, an application crash, a device boot time, a device shutdown time, an application hang, an application foreground event, battery utilization, device central processing unit (CPU) utilization, device memory utilization, a virtual desktop session logon duration time, a failed SSO login, or a failed application installation (Hamilton Par. 76; Par. 122-130 - "(a) LOG: the events representing calls to standard logging APIs (e.g., slf4j in Java and logging in Python). (b) SPAN: these events record the execution of all or part of a user function or procedure. Among other things, spans record the start and end timestamp of execution. (c) SPAN_EVENT: these are structured trace events generated within the execution of a span. Span events can be created by calling trace event APIs in their function or procedure.")

Regarding Claim 3, Claim 10 and Claim 17 - Cancelled.

Regarding Claim 4, Claim 11 and Claim 18 - Cancelled.

Regarding Claim 5, Claim 12 and Claim 19, Hamilton in view of Higginson in further view of Azeez teach the system of claim 1, …, the non-transitory computer-readable medium of claim 8, … and the computer-implemented method of claim 15, … Hamilton teaches telemetry analysis and the feature is expounded upon by Higginson: wherein the specific combination of attributes comprises two or more of: a system platform, an organization identifier, an application identifier, or a geographic location (Higginson Par. 38 - "Resource management system 130 may additionally define an entity using a collection of entity attributes and perform monitoring and/or analysis based on metrics associated with entity attributes. For example, resource management system 130 may identify an entity as a combination of a customer, type of metric (e.g., processor utilization, memory utilization, etc.), and/or level of granularity (e.g., virtual machine, application, database, application server, database server, transaction, etc.)"; Par. 95)

Hamilton and Higginson are directed to time-series data analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton, as taught by Higginson, by utilizing additional time-series forecasting analysis, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton with the motivation of improving the accuracy of the models and allowing the models to be adapted to various types of time-series data collected from the monitored systems (Higginson Par. 48).

Regarding Claim 6 and Claim 13, Hamilton in view of Higginson in further view of Azeez teach the system of claim 1, … and the non-transitory computer-readable medium of claim 8, … wherein, when executed by the processor, the machine-readable instructions further cause the computing device to at least send the alert to the administrator client device via a push notification (Hamilton Par. 39 - "For example, a notification to a user may be understood to be a notification transmitted to client device 114, input or instruction from a user may be understood to be received by way of the client device 114, and interaction with an interface by a user shall be understood to be interaction with the interface on the client device 114."; Par. 95 - "For example, the ingestion scheduling component 514 notifies ingestion component 502 that formatted trace event data is available in table stage 411 and is ready to be ingested into the MET")

Regarding Claim 7, Claim 14 and Claim 20 - Cancelled.

Regarding Claim 8, Hamilton teaches A non-transitory computer-readable medium embodying executable instructions which, when executed by a computing device, cause the computing device to at least: collect telemetry data from a plurality of client devices within a plurality of organizations and store the collected telemetry data in a data lake, wherein the telemetry data is continuously collected from the plurality of client devices over a network, wherein the telemetry data includes one or more of: device performance data, device health data, application performance data, application usage data, network performance data, or network health data (Hamilton Abstract - "A system includes at least one hardware processor and at least one memory storing instructions that cause the at least one hardware processor to perform operations. The operations include configuring a processing stack in an execution node process. The processing stack includes a telemetry application programming interface (API). At least one configuration of a trace event is retrieved using an API call received by the execution node process. Telemetry information of the trace event is collected using the telemetry API based on the at least one configuration. An event table is updated based on the telemetry information."; Par. 26-28 - "As used herein, the term 'trace event' (TE) indicates one or more values representing a structured payload occurring at an arbitrary point in time. Trace events can be used to capture aperiodic events, similar to logs.
Unlike logs, trace events have a structured payload format that includes a set of key-value pairs. This makes them more suitable for capturing counter data for aggregation at a later query phase. In some aspects, trace events can be used for usage analysis with relatively low volume and, therefore, are associated with higher quality of service (QoS) applications. In some aspects, trace events can be auto-instrumented without additional API calls."; Par. 112-113; Par. 32-33 - "The cloud computing platform 101 may include a three-tier architecture: data storage (e.g., storage platforms 104 and 122), an execution platform 110 (e.g., providing query processing), and a compute service manager 108 providing cloud services. It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations)."; Par. 179); at periodic intervals, retrieve the telemetry data from the data lake and process the telemetry data to produce time-series data associated with a number of occurrences of a type of event across the plurality of client devices over a predefined period of time (Hamilton Par. 32-33; Par. 37 - "The compute service manager 108 can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager 108."; Par. 93);

Hamilton teaches telemetry analysis and the feature is expounded upon by Higginson: generate, for the time-series data, a plurality of features comprising one or more of: scaled lag features, rolling features, and time-based features associated with the predefined period of time (Higginson Par. 110 - The system analyzes the correlogram data to determine a candidate set of parameter values to be used by the time-series models (Operation 510). The system analyzes the correlogram data to determine a candidate set of autoregressive terms to be used by the time-series models. For example, using an autocorrelation function, a set of time-series data is copied, and the copy is adjusted to lag the original set of time-series data. By comparing the original set of time-series data with multiple copies having different lag intervals, the system identifies sets of parameter values for time-series models that are likely to result in the most accurate predictions.); assign the time-series data to a time-series group from a plurality of time-series groups based on a specific combination of attributes associated with the telemetry data and the type of event, wherein the time-series group includes the time-series data originating from the client devices associated with the plurality of different organizations that share the specific combination of attributes (Higginson Par. 28-30 - A system utilizes time-series machine learning models to forecast workloads of computing resources in a computing system. Time-series machine learning models are defined by parameters, so that changing parameter values changes a response by the model to a set of input data. A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system. Tens or hundreds of combinations of parameters could be applied to a time-series model to generate predictions. In addition, when a system identifies related workloads, training and testing machine learning models for the different related workloads results in thousands or tens of thousands of permutations of parameter values. However, attempting to train tens, hundreds, or thousands of models to generate workload forecasts for a computing system takes too much time to be useful and consumes too many computing resources to be practical. The system creates a candidate set of time-series models for forecasting computing workloads by filtering the sets of parameter values for the models from tens, hundreds, or thousands of sets to a number that meets system performance specifications for generating forecasts. A candidate set of time-series models includes multiple versions of the same time-series model. The multiple versions are associated with respective sets of parameter values. For example, two different models include the same parameter types but different values for the parameters. The system trains the candidate set of time-series models with a training data set. The system selects the best-performing time-series model to generate forecasts for a particular computing resource in a computing system."; Par. 38; Par. 7; Par. 160); retrieve a time-series forecasting model from a plurality of time-series forecasting models associated with the time-series group, wherein the time-series forecasting model is trained using the plurality of features derived from aggregated historical time-series data of the plurality of different organizations included in the time-series group and wherein the time-series forecasting model is associated with an error distribution learned based on the aggregated historical time-series data of the plurality of different organizations included in the time-series group (Higginson Par. 28 - "A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system."; Par. 40-44 - The model parameter selection engine 181 further selects candidate parameter values by analyzing the historical time-series data to determine whether the time-series data includes seasonal patterns, multi-seasonal patterns, trends, and outliers or shocks. Based on the identified characteristics of the historical time-series data, the model parameter selection engine 181 selects particular time-series models that are likely a good fit for the historical data. For example, a model parameter selection engine 181 may compute the ACF/PACF and identify which parameters for time-series models are most likely to result in accurate forecasts. Accordingly, the ACF and/or PACF calculations filter to reduce a number of iterations of time-series model parameters the system tests to predict future workflow values. This filtering technique reduces the parameters of the time-series models (p, d, q, P, D, Q, f) and their combinations of the SARIMAX-type model, to be trained to the historical data.; Par. 51; Par. 70; Par. 117; Par. 127); apply the time-series data to the particular time-series forecasting model to produce a predicted value of the number of occurrences of the type of event for the time-series data (Higginson Par. 29-31 - The system may predict that no anomaly will occur when the system forecasts only one of the processors will exceed a threshold processing capacity value.
Conversely, the system may predict an anomaly will occur if both processors are forecasted to simultaneously exceed respective processing capacity values.; Par. 72-73 - The monitoring module 131 includes functionality to predict anomalies based on comparisons of forecasts generated by the workload forecast module 160 with corresponding thresholds. For example, thresholds may represent limits on utilization of resources by the entities and/or service level objectives for performance metrics associated with the entities. When a forecasted metric violates (e.g., exceeds) a corresponding threshold, monitoring module 131 may detect a potential future anomaly, error, outage, and/or failure in the operation of hardware and/or software resources associated with the entity. For example, the entity within a topology that makes up a system may suffer a fault that is reflected in the time-series data as a spike or growth/trend. The prediction from the models can pick up this sudden change in resource utilization that is reflected to the user identifying a "change" in usage that requires investigation.; Par. 97 - "If the system determines that the target workload is not part of a workload cluster, the system generates a workload forecast for the target workload in response to the request to generate a forecast for the target entity workload (Operation 308). The system generates the forecast for the target workload by applying a time-series model trained on a set of attributes associated with the target entity to time-series attribute data from the target entity.")

Hamilton and Higginson are directed to time-series data analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton, as taught by Higginson, by utilizing additional time-series forecasting analysis, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton with the motivation of improving the accuracy of the models and allowing the models to be adapted to various types of time-series data collected from the monitored systems (Higginson Par. 48).

Hamilton in view of Higginson teach data analytics and the feature is expounded upon by Azeez: generate an anomaly score by comparing the predicted value of the number of occurrences of the type of event to an observed value of the number of occurrences of the type of event in the time-series data relative to the learned error distribution associated with the time-series forecasting model (Azeez Par. 38 - The ML models 120A-N may each be trained using respective machine learning techniques to detect respective behaviors of the metric 101. For example, the ML model 120A may be trained to learn seasonality and trend in historical data values used as training data for the metric 101. To do so, ML model 120A may be trained to analyze a time series of values for the metric 101 to learn seasonality and trends in the time series. For example, the ML model 120A may analyze seasonality and trend in the data to generate feature prediction. Thus, for a given point in time, the ML model 120A may determine whether a data value being assessed at the given point is anomalous compared to similar points in time (seasonally or trend adjusted). ML model 120A may also generate an upper bound and a lower bound by analyzing uncertainty based on the historical seasonal and trend data. If an analyzed input data value for the metric 101 is outside the upper and lower bounds, then the ML model 120A may determine that the data value of the metric 101 is anomalous.; Par. 40-52; Par. 138-140) and generate an alert in an instance in which the anomaly score meets or exceeds a predefined threshold, wherein the alert causes a user interface component to be displayed on an administrator client device containing an indication of an anomaly associated with the anomaly score (Azeez Par. 64; Par. 148-149 - At 614, the method 600 may include generating for display an indication of the mitigative action and the identified source of the data value based on the stored association. For example, the UI subsystem 140 may generate data, for display via a user interface of a client device 160, an indication of the mitigative action and identified source… Furthermore, the mitigative actions provided to various engineers may result from the same core issue. For example, an application that causes an anomaly may cause anomalous readings across a range of sources. In this example, the application may cause an endless loop of calls to an application service, which may make network calls to a server device. The endless loop of calls may therefore cause anomalous readings across a different range of sources. Each party responsible for each of the different range of sources may be alerted to the anomalous readings to aid in troubleshooting and mitigative efforts.; Par. 152)

Hamilton, Higginson and Azeez are directed to time-series data analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton in view of Higginson, as taught by Azeez, by utilizing notifications, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton in view of Higginson with the motivation of detecting anomalies across multiple different types of metrics (Azeez Abstract).

Regarding Claim 15, Hamilton teaches A computer-implemented method, comprising: collecting, via at least one computing device, telemetry data from a plurality of client devices and storing the collected telemetry data in a data lake, wherein the telemetry data is continuously collected from the plurality of client devices over a network, wherein the telemetry data includes one or more of: device performance data, device health data, application performance data, application usage data, network performance data, or network health data (Hamilton Abstract - "A system includes at least one hardware processor and at least one memory storing instructions that cause the at least one hardware processor to perform operations. The operations include configuring a processing stack in an execution node process. The processing stack includes a telemetry application programming interface (API). At least one configuration of a trace event is retrieved using an API call received by the execution node process. Telemetry information of the trace event is collected using the telemetry API based on the at least one configuration. An event table is updated based on the telemetry information."; Par. 26-28 - "As used herein, the term 'trace event' (TE) indicates one or more values representing a structured payload occurring at an arbitrary point in time. Trace events can be used to capture aperiodic events, similar to logs.
Unlike logs, trace events have a structured payload format that includes a set of key-value pairs. This makes them more suitable for capturing counter data for aggregation at a later query phase. In some aspects, trace events can be used for usage analysis with relatively low volume and, therefore, are associated with higher quality of service (QoS) applications. In some aspects, trace events can be auto-instrumented without additional API calls."; Par. 112-113; Par. 32-33 - "The cloud computing platform 101 may include a three-tier architecture: data storage (e.g., storage platforms 104 and 122), an execution platform 110 (e.g., providing query processing), and a compute service manager 108 providing cloud services. It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations)."; Par. 179); at periodic intervals, retrieving the telemetry data from the data lake and processing the telemetry data to produce time-series data associated with a number of occurrences of a type of event across the plurality of client devices over a predefined period of time (Hamilton Par. 32-33; Par. 37 - "The compute service manager 108 can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager 108."; Par. 93);

Hamilton teaches telemetry analysis and the feature is expounded upon by Higginson: generating, for the time-series data, a plurality of features comprising one or more of: scaled lag features, rolling features, and time-based features associated with the predefined period of time (Higginson Par. 110 - The system analyzes the correlogram data to determine a candidate set of parameter values to be used by the time-series models (Operation 510). The system analyzes the correlogram data to determine a candidate set of autoregressive terms to be used by the time-series models. For example, using an autocorrelation function, a set of time-series data is copied, and the copy is adjusted to lag the original set of time-series data. By comparing the original set of time-series data with multiple copies having different lag intervals, the system identifies sets of parameter values for time-series models that are likely to result in the most accurate predictions.); assigning the time-series data to a time-series group from a plurality of time-series groups based on a specific combination of attributes associated with the telemetry data and the type of event, wherein the time-series group includes the time-series data originating from the client devices associated with the plurality of different organizations that share the specific combination of attributes (Higginson Par. 28-30 - A system utilizes time-series machine learning models to forecast workloads of computing resources in a computing system. Time-series machine learning models are defined by parameters, so that changing parameter values changes a response by the model to a set of input data. A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system. Tens or hundreds of combinations of parameters could be applied to a time-series model to generate predictions. In addition, when a system identifies related workloads, training and testing machine learning models for the different related workloads results in thousands or tens of thousands of permutations of parameter values. However, attempting to train tens, hundreds, or thousands of models to generate workload forecasts for a computing system takes too much time to be useful and consumes too many computing resources to be practical. The system creates a candidate set of time-series models for forecasting computing workloads by filtering the sets of parameter values for the models from tens, hundreds, or thousands of sets to a number that meets system performance specifications for generating forecasts. A candidate set of time-series models includes multiple versions of the same time-series model. The multiple versions are associated with respective sets of parameter values. For example, two different models include the same parameter types but different values for the parameters. The system trains the candidate set of time-series models with a training data set. The system selects the best-performing time-series model to generate forecasts for a particular computing resource in a computing system."; Par. 38; Par. 7; Par. 160); retrieving, via the at least one computing device, a time-series forecasting model from a plurality of time-series forecasting models associated with the time-series group, wherein the time-series forecasting model is trained using the plurality of features derived from aggregated historical time-series data of the plurality of different organizations included in the time-series group and wherein the time-series forecasting model is associated with an error distribution learned based on the aggregated historical time-series data of the plurality of different organizations included in the time-series group (Higginson Par. 28 - "A system trains and tests multiple different versions of a time-series model and selects the most accurate version to generate forecasts for a particular workload in the computing system."; Par. 40-44 - The model parameter selection engine 181 further selects candidate parameter values by analyzing the historical time-series data to determine whether the time-series data includes seasonal patterns, multi-seasonal patterns, trends, and outliers or shocks. Based on the identified characteristics of the historical time-series data, the model parameter selection engine 181 selects particular time-series models that are likely a good fit for the historical data. For example, a model parameter selection engine 181 may compute the ACF/PACF and identify which parameters for time-series models are most likely to result in accurate forecasts. Accordingly, the ACF and/or PACF calculations filter to reduce a number of iterations of time-series model parameters the system tests to predict future workflow values. This filtering technique reduces the parameters of the time-series models (p, d, q, P, D, Q, f) and their combinations of the SARIMAX-type model, to be trained to the historical data.; Par. 51; Par. 70; Par. 117; Par. 127); applying, via the at least one computing device, the time-series data to the particular time-series forecasting model to produce a predicted value of the number of occurrences of the type of event for the time-series data (Higginson Par. 29-31 - The system may predict that no anomaly will occur when the system forecasts only one of the processors will exceed a threshold processing capacity value.
Conversely, the system may predict an anomaly will occur if both processors are forecasted to simultaneously exceed respective processing capacity values.; Par. 72-73- The monitoring module 131 includes functionality to predict anomalies based on comparisons of forecasts generated by the workload forecast module 160 with corresponding thresholds. For example, thresholds may represent limits on utilization of resources by the entities and/or service level objectives for performance metrics associated with the entities. When a forecasted metric violates (e.g., exceeds) a corresponding threshold, monitoring module 131 may detect a potential future anomaly, error, outage, and/or failure in the operation of hardware and/or software resources associated with the entity. For example, the entity within a topology that makes up a system may suffer a fault that is reflected in the time-series data as a spike or growth/trend. The prediction from the models can pick up this sudden change in resource utilization that is reflected to the user identifying a “change” in usage that requires investigation.; Par. 97-“If the system determines that the target workload is not part of a workload cluster, the system generates a workload forecast for the target workload in response to the request to generate a forecast for the target entity workload (Operation 308). The system generates the forecast for the target workload by applying a time-series model trained on a set of attributes associated with the target entity to time-series attribute data from the target entity.”) Hamilton and Higginson are directed to time series data analysis. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improve upon data analysis of Hamilton , as taught by Higginson, by utilizing additional time series forecasting analysis with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton with the motivation of improving the accuracy of the models and allow the models to be adapted to various types of time-series data collected from the monitored systems (Higginson Par. 48). Hamilton in view Higginson teach data analytics and the feature is expounded upon by Azeez: generating, via the at least one computing device, an anomaly score by comparing the predicted value of the number of occurrences of the type of event to an observed value of the number of occurrences of the type of event in the time-series data relative to the learned error distribution associated with the time-series forecasting model (Azeez Par. 38- The ML models 120A-N may each be trained using respective machine learning techniques to detect respective behaviors of the metric 101. For example, the ML model 120A may be trained to learn seasonality and trend in historical data values used as training data for the metric 101. To do so, ML model 120A may be trained to analyze a time series of values for the metric 101 to learn seasonality and trends in the time series. For example, the ML model 120A may analyze seasonality and trend in the data to generate feature prediction. Thus, for a given point in time, the ML model 120A may determine whether a data value being assessed at the given point is anomalous compared to similar points in time (seasonally or trend adjusted). 
Hamilton in view of Higginson teach data analytics, and the feature is expounded upon by Azeez: generating, via the at least one computing device, an anomaly score by comparing the predicted value of the number of occurrences of the type of event to an observed value of the number of occurrences of the type of event in the time-series data relative to the learned error distribution associated with the time-series forecasting model (Azeez Par. 38-"The ML models 120A-N may each be trained using respective machine learning techniques to detect respective behaviors of the metric 101. For example, the ML model 120A may be trained to learn seasonality and trend in historical data values used as training data for the metric 101. To do so, ML model 120A may be trained to analyze a time series of values for the metric 101 to learn seasonality and trends in the time series. For example, the ML model 120A may analyze seasonality and trend in the data to generate feature prediction. Thus, for a given point in time, the ML model 120A may determine whether a data value being assessed at the given point is anomalous compared to similar points in time (seasonally or trend adjusted). ML model 120A may also generate an upper bound and a lower bound by analyzing uncertainty based on the historical seasonal and trend data. If an analyzed input data value for the metric 101 is outside the upper and lower bounds, then the ML model 120A may determine that the data value of the metric 101 is anomalous."; Par. 40-52; Par. 138-140) and generating, via the at least one computing device, an alert in an instance in which the anomaly score meets or exceeds a predefined threshold, wherein the alert causes a user interface component to be displayed on an administrator client device containing an indication of an anomaly associated with the anomaly score (Azeez Par. 64; Par. 148-149-"At 614, the method 600 may include generating for display an indication of the mitigative action and the identified source of the data value based on the stored association. For example, the UI subsystem 140 may generate data, for display via a user interface of a client device 160, an indication of the mitigative action and identified source… Furthermore, the mitigative actions provided to various engineers may result from the same core issue. For example, an application that causes an anomaly may cause anomalous readings across a range of sources. In this example, the application may cause an endless loop of calls to an application service, which may make network calls to a server device. The endless loop of calls may therefore cause anomalous readings across a different range of sources. Each party responsible for each of the different ranges of sources may be alerted to the anomalous readings to aid in troubleshooting and mitigative efforts."; Par. 152)

Hamilton, Higginson, and Azeez are directed to time-series data analysis. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton in view of Higginson, as taught by Azeez, by utilizing notifications, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton in view of Higginson with the motivation of detecting anomalies across multiple different types of metrics (Azeez Abstract).
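The claimed scoring step pairs naturally with the bound check Azeez describes. A minimal sketch, assuming a Gaussian model for the learned error distribution (the reference itself does not commit to one) and an illustrative alert payload:

```python
# Score the observed event count against the forecast, relative to an
# error distribution learned from the group's historical residuals,
# and gate alert generation on a predefined threshold.
import numpy as np

def learn_error_distribution(predicted, observed):
    """Fit the mean and spread of forecast residuals over the group's
    aggregated historical data."""
    residuals = np.asarray(observed, float) - np.asarray(predicted, float)
    return residuals.mean(), residuals.std(ddof=1)

def anomaly_score(predicted, observed, mu, sigma):
    """Standardized residual: how far the observed count sits from the
    forecast, in units of the learned error spread."""
    return abs((observed - predicted) - mu) / sigma

def maybe_alert(score, threshold=3.0):
    """Return an alert payload for the administrator UI when the score
    meets or exceeds the predefined threshold; otherwise None."""
    if score >= threshold:
        return {"anomaly": True, "score": round(score, 2)}
    return None
```

With mu and sigma learned from the group's history, maybe_alert(anomaly_score(predicted, observed, mu, sigma)) yields an alert only when the standardized residual reaches the configured threshold, matching the claim's "meets or exceeds" gating.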
Regarding Claim 21, Hamilton in view of Higginson in further view of Savir teach The system of claim 1,… Hamilton teaches telemetry analysis, and the feature is expounded upon by Higginson: wherein the time-series forecasting model is trained using historical data retrieved from the data lake for a periodic training interval that is longer than the predefined period of time for the time-series data… (Higginson Par. 42-44-"The model parameter selection engine 181 further selects candidate parameter values by analyzing the historical time-series data to determine whether the time-series data includes seasonal patterns, multi-seasonal patterns, trends, and outliers or shocks. Based on the identified characteristics of the historical time-series data, the model parameter selection engine 181 selects particular time-series models that are likely a good fit for the historical data… The training module 150 generates time-series models for various entities associated with the monitored systems using machine learning techniques. The training module 150 obtains the historical time-series data 171 for a given entity (e.g., a combination of a customer, metric, and level of granularity) from the data repository 170. The training module 150 divides the historical time-series data into a training data set 151, a test data set 152, and a validation data set 153. The training module 150 trains a set of time-series models with the training data set 151 and tests the set of time-series models using the test data set 152. The set of time-series models trained by the training module 150 includes multiple different versions of a same model type defined by different combinations of model parameters (such as p, d, and q, for an ARIMA-type model).")

Hamilton and Higginson are directed to time-series data analysis. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton, as taught by Higginson, by utilizing additional time-series forecasting analysis, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton with the motivation of improving the accuracy of the models and allowing the models to be adapted to various types of time-series data collected from the monitored systems (Higginson Par. 48).

Hamilton in view of Higginson teach data analytics, and the feature is expounded upon by Azeez: … for which the anomaly score is generated (Azeez Par. 5-"The system may generate an aggregate anomaly score based on the anomaly scores from the machine learning models, thereby detecting anomalies based on different behavioral patterns of the same metric. In this way, the system may determine whether a data value of a metric is an anomaly based on multiple learned behaviors of the metric. For example, in operation, the system may access a data value to determine whether the data value is anomalous. The system may provide the data value as input to the plurality of machine learning models, which may each output a respective anomaly score. Each anomaly score may represent a prediction, by the machine learning model that generated the anomaly score, that the data value is anomalous.")

Hamilton, Higginson, and Azeez are directed to time-series data analysis. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have improved upon the data analysis of Hamilton in view of Higginson, as taught by Azeez, by utilizing notifications, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make the modification to the teachings of Hamilton in view of Higginson with the motivation of detecting anomalies across multiple different types of metrics (Azeez Abstract).
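Two small sketches of the mechanics cited for Claim 21: Higginson's chronological train/test/validation split (Par. 42-44) and Azeez's aggregate anomaly score (Par. 5). The 70/20/10 split ratios and the weighted-mean combiner are our illustrative choices; the references do not specify either.

```python
# Chronological split of historical time-series data (never shuffled,
# so the test and validation windows always follow the training window)
# plus a simple combiner for per-model anomaly scores.
def chronological_split(series, train_frac=0.7, test_frac=0.2):
    """Return (train, test, validation) slices in time order."""
    n = len(series)
    i, j = int(n * train_frac), int(n * (train_frac + test_frac))
    return series[:i], series[i:j], series[j:]

def aggregate_anomaly_score(scores, weights=None):
    """Combine the anomaly scores emitted by multiple models into a
    single aggregate score (weighted mean as one possible choice)."""
    weights = weights or [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```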
Regarding Claim 22, Hamilton in view of Higginson in further view of Savir teach The system of claim 1,… wherein the plurality of client devices are enrolled for management with a management service, and wherein each of the plurality of client devices includes a management component that is configured to transmit the telemetry data to the management service over the network (Higginson Par. 36-37-"In addition, resource management system 130 may perform such monitoring and/or management at different levels of granularity and/or for different entities. For example, resource management system 130 may assess resource utilization and/or workloads at the environment, cluster, host, virtual machine, database, database server, application, application server, transaction (e.g., a sequence of clicks on a website or web application to complete an online order), and/or data (e.g., database records, metadata, request/response attributes, etc.) level. Resource management system 130 may additionally define an entity using a collection of entity attributes and perform monitoring and/or analysis based on metrics associated with entity attributes. For example, resource management system 130 may identify an entity as a combination of a customer, type of metric (e.g., processor utilization, memory utilization, etc.), and/or level of granularity (e.g., virtual machine, application, database, application server, database server, transaction, etc.). In the example illustrated in FIG. 1A, the system may define an entity as an organization associated with the client device 126. The attributes associated with the entity may include the virtual machines run by the client devices of the organization, the nodes hosting the virtual machines, the applications running on the virtual machines, and the hardware (e.g., processors, processing threads, memory) that make up the nodes hosting the virtual machines. Additional attributes may include applications, node clusters, nodes, databases, processors, memory, and workflows associated with the organization.")
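Higginson's entity notion in Par. 36-37, a combination of customer (organization), metric type, and level of granularity, amounts to a grouping key over telemetry records. A sketch with illustrative field names (the record schema is our assumption):

```python
# Group raw telemetry records under an entity key of
# (customer, metric, granularity) before per-entity modeling.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class EntityKey:
    customer: str      # organization the enrolled client devices belong to
    metric: str        # e.g. "cpu_utilization"
    granularity: str   # e.g. "virtual_machine", "application"

def group_telemetry(records):
    """records: iterable of dicts with customer/metric/granularity/value
    keys; returns {EntityKey: [values]}."""
    groups = defaultdict(list)
    for r in records:
        key = EntityKey(r["customer"], r["metric"], r["granularity"])
        groups[key].append(r["value"])
    return groups
```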
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Publication No. 20190228296A1 to Gefen et al.-Abstract-"Embodiments for identifying significant events for finding a root cause of an anomaly collecting time series data for events for each network device by detecting an anomaly in the time series data comprising an outlier on an edge of the time series data by comparing a predicted value of the event to an actual value of the event using a selected forecasting model; declaring the event to be an anomaly at a particular time if a difference between the predicted value and actual value exceed a defined threshold based on residual values for other devices; analyzing in a combined RNN/LSTM process all events for all devices of the network within a time proximity of the particular time of the anomaly to filter usual events and rank each event relative to the anomaly; and displaying a labeled chart of the time series data showing the anomaly in a graph relative to all the events."

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Chesiree Walton, whose telephone number is (571) 272-5219. The examiner can normally be reached Monday through Friday between 8 AM and 5 PM. If any attempt to reach the examiner by telephone is unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax telephone numbers for this group are (571) 273-8300 and (703) 872-9326 (for official communications, including After Final communications labeled "Box AF").

Another resource available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, please contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Applicants are invited to contact the Office to schedule an in-person interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner.

Sincerely,
/CHESIREE A WALTON/
Examiner, Art Unit 3624

Prosecution Timeline

Mar 29, 2023
Application Filed
Nov 10, 2024
Non-Final Rejection — §101, §103
Feb 18, 2025
Response Filed
Apr 27, 2025
Final Rejection — §101, §103
Jul 31, 2025
Request for Continued Examination
Aug 01, 2025
Response after Non-Final Action
Oct 03, 2025
Non-Final Rejection — §101, §103
Jan 07, 2026
Response Filed
Jan 10, 2026
Interview Requested
Mar 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591903
SELF-SUPERVISED SYSTEM GENERATING EMBEDDINGS REPRESENTING SEQUENCED ACTIVITY
2y 5m to grant Granted Mar 31, 2026
Patent 12561640
METHOD AND SYSTEM TO STREAMLINE RETURN DECISION AND OPTIMIZE COSTS
2y 5m to grant Granted Feb 24, 2026
Patent 12555047
SYSTEMS AND METHODS FOR FORMULATING OR EVALUATING A CONSTRUCTION COMPOSITION
2y 5m to grant Granted Feb 17, 2026
Patent 12518292
HIERARCHY AWARE GRAPH REPRESENTATION LEARNING
2y 5m to grant Granted Jan 06, 2026
Patent 12333460
DISPLAY OF MULTI-MODAL VEHICLE INDICATORS ON A MAP
2y 5m to grant Granted Jun 17, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
30%
Grant Probability
58%
With Interview (+28.6%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 211 resolved cases by this examiner. Grant probability derived from career allow rate.
