Prosecution Insights
Last updated: April 19, 2026
Application No. 17/546,744

DETECTION OF ANOMALOUS RECORDS WITHIN A DATASET

Non-Final OA §103
Filed: Dec 09, 2021
Examiner: WEHOVZ, OSCAR
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mckesson Corporation
OA Round: 9 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 9-10
To Grant: 2y 5m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (63 granted / 101 resolved; +7.4% vs TC avg)
Interview Lift: +28.3% on resolved cases with interview (strong lift)
Avg Prosecution: 2y 5m (17 currently pending)
Total Applications: 118 across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 101 resolved cases

Office Action

§103
DETAILED ACTION

This action is responsive to the Request for Continued Examination filed on March 04, 2026. Amendments filed on March 04, 2026 have been acknowledged and considered. Claims 1, 8 and 15 have been amended. Claims 7, 14 and 20 have been canceled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 04, 2026 has been entered.

Response to Amendment

Applicant's Remarks, filed March 04, 2026, have been fully considered and entered. Accordingly, claims 1, 8 and 15 have been amended, and claims 7, 14 and 20 have been canceled. Claims 1, 8 and 15 are independent claims. Claims 1-6, 8-13 and 15-19 are pending.

Response to Arguments

Applicant's arguments, see pages 9-18, filed March 04, 2026, with respect to the rejection of claim 1 have been fully considered, but they are not persuasive.

Argument 1: Applicant argues on page 10 of Applicant Arguments and Remarks "Leverich in view of Mishra and Gonzalez fails to teach or suggest "train, using a first subset of the multiple records, and based on: the one or more temporal effects, the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter, the detection model" for anomaly detection in the manner claimed... The Office Action further treats Leverich's use of historical data to compute deviations and thresholds as "training" the detection model "based on" the sensitivity parameter and its purported confidence interval. Id.
This is error for at least the reasons set forth below."

Response to Argument 1: Examiner respectfully disagrees. Leverich paragraphs [0106-0110, 0113-0119, 0126] disclose that anomaly detection [e.g. detection model] is performed based on selectable elements, such as a window of duration or time range [e.g. one or more temporal effects] and a sensitivity setting [e.g. first sensitivity parameter], provided in the user interface in order to identify KPI values that fall above and/or below thresholds. Anomaly scores are calculated for data points (e.g. a set/portion of data points, or a resampling of data points within a signal), and the anomaly scores are compared to thresholds in order to determine whether an anomaly has occurred. Thresholds may be determined based on statistical techniques [e.g. confidence interval], such as standard deviations between data points, and may be adjusted based on a user input such as the sensitivity setting [Thus, associated with the first sensitivity parameter]. For example, a sensitivity setting of '10' [e.g. sensitivity parameter] may correspond to the 10th percentile of the referenced deviations from historical anomaly scores. Based on such a selection, all anomaly scores above the 10th percentile [e.g. confidence interval] with respect to their deviation from the non-anomalous data points within the signal would be identified as anomalies [Thus, it defines decision boundaries]. When considering the KPI values (e.g. current data) in view of training data such as historical KPI values [Thus, based on temporal effects, the sensitivity parameter, and the confidence interval associated with the sensitivity parameter], the current value may nevertheless reflect anomalous behavior/occurrences, and upon completing the training [Thus, training] a detection phase can be initiated. Thus, Leverich's KPI values are obtained based on selectable elements such as a window of duration or time range [e.g.
one or more temporal effects] and sensitivity setting [e.g. first sensitivity parameter], which adjusts thresholds based on statistical techniques [e.g. confidence interval associated with the sensitivity parameter]. Therefore, by training using historical KPI values, it is training based on temporal effects, the sensitivity parameter, and the confidence interval associated with the sensitivity parameter. See rejection below. Therefore, the Examiner has determined that this argument is not persuasive.

Argument 2: Applicant argues on pages 10-11 of Applicant Arguments and Remarks "Leverich's "sensitivity setting" is post-processing thresholding of anomaly scores - not a confidence interval associated with a sensitivity parameter defining domains of record values in the manner claimed... Claim 1 requires that each sensitivity parameter "defines a respective first domain of record values that are normal and a respective second domain of record values that are anomalous according to a confidence interval associated with [the] sensitivity parameter," and that the detection model is trained based on that sensitivity parameter and associated confidence interval. The Specification confirms that a "confidence interval" is derived from the distribution of record values and may be applied as a constraint during generation of decision boundaries that separate normal and anomalous domains. See, e.g., Published Specification at 34-39. Leverich's percentile-based anomaly-score thresholding does not disclose (and does not suggest) a confidence interval associated with a sensitivity parameter that defines domains of record values as claimed. Thus, for at least this reason, Applicant respectfully requests that the rejection be withdrawn."

Response to Argument 2: Examiner respectfully disagrees. Leverich [0113-0119] expressly discloses determining whether a record is anomalous based on whether it falls outside a statistically derived range of expected values (e.g.
beyond a selected percentile of deviation). Such statistically based thresholds define a range of expected normal values and identify outliers (anomalous values). The claim requires "... defines a respective first domain of record values that are normal and a respective second domain of record values that are anomalous according to a confidence interval associated with that sensitivity parameter;". The claim does not require a specific formula for computing the confidence interval, so even though Leverich uses percentiles to define boundaries, it still satisfies the functional requirement of defining a range of expected values and classifying values outside that range as anomalous. Examiner notes that although Leverich does not explicitly use the exact phrase "confidence interval", it relies on statistical models (e.g. probability, distributions, deviation thresholds). Such mechanisms inherently define a range of expected normal values and identify outliers. The Examiner interprets Leverich's percentile thresholds as a statistically equivalent mechanism for implementing a sensitivity-controlled separation between normal and anomalous domains. This aligns with the Specification, where a sensitivity parameter maps to a confidence interval (or standard error), and values inside the interval are deemed normal while values outside the interval are deemed anomalous. Leverich teaches that these thresholds are adjustable, thereby controlling the strictness of anomaly detection. Thus, adjusting the percentile threshold directly corresponds to modifying the breadth of the included data region used to define the decision boundary; adjusting the sensitivity changes the statistical threshold (e.g. effective confidence interval). For example, selecting a threshold corresponding to the 90th percentile necessarily defines a range that contains a corresponding portion of the distribution, analogous to a confidence interval encompassing that proportion of expected values.
It partitions the data into a first domain of values deemed normal and a second domain of values deemed anomalous. Therefore, percentile thresholds in Leverich are a statistically equivalent mechanism for implementing a sensitivity-controlled separation between normal and anomalous domains, especially when tied to how much of the data distribution is treated as "normal". See rejection below. Therefore, the Examiner has determined that this argument is not persuasive.

Argument 3: Applicant argues on pages 11-12 of Applicant Arguments and Remarks "The Office Action does not identify (and the references do not teach) training the detection model based on the sensitivity parameter and its associated confidence interval... Leverich does not train any detection model "based on" the selected sensitivity parameter and its associated confidence interval... Leverich's "training phase" involves presenting simulated anomalies to a user and receiving user feedback to calibrate alerting behavior. Leverich at 126. This user-calibration workflow is not training a detection model based on temporal effects, a selected sensitivity parameter, and an associated confidence interval, as recited by claim 1. Moreover, claim 1 expressly recites that training "further comprises" generating, based on the temporal effects, the sensitivity parameter, and the associated confidence interval, "a first decision boundary and a second decision boundary," each separating normal and anomalous domains. Leverich, by contrast, depicts and describes a single anomaly-score threshold (e.g., a dashed line) used to distinguish anomalies from non-anomalous points."

Response to Argument 3: Examiner respectfully disagrees. See response to Argument 1 above, regarding how Leverich obtains KPI values based on selectable elements such as a window of duration or time range [e.g. one or more temporal effects] and sensitivity setting [e.g.
first sensitivity parameter], which adjusts thresholds based on statistical techniques [e.g. confidence interval associated with the sensitivity parameter], where KPI values (e.g. current data), which are considered in view of training data such as historical KPI values [Thus, based on temporal effects, the sensitivity parameter, and the confidence interval associated with the sensitivity parameter], may reflect anomalous behavior/occurrences, and where upon completing the training [Thus, training] a detection phase can be initiated. Therefore, by training using training data such as historical KPI values, it is training based on temporal effects, the sensitivity parameter, and the confidence interval associated with the sensitivity parameter. Further, Leverich [0117-0119] discloses that, based on a selected sensitivity setting [i.e. a first sensitivity parameter] which corresponds to a percentile/threshold of the referenced deviations from historical anomaly scores, a search preview window 610 depicts all data points that fall outside of that threshold as shown above the dashed line. Thus, it is generating a first decision boundary and a second decision boundary by separating those points that are above and below the dashed line/threshold, wherein each one of the first decision boundary and the second decision boundary separates, according to the one or more temporal effects, the first sensitivity parameter and the confidence interval, the first domain where values of records are deemed normal (e.g. below the threshold) and the second domain where values of records are deemed anomalous (e.g. above the threshold). See rejection below. Therefore, the Examiner has determined that this argument is not persuasive.
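The percentile-threshold mechanism the Examiner equates to a confidence interval can be sketched in a few lines. The following is a minimal illustration only, not code from the application or from Leverich; the function names and the mapping of a 1-100 sensitivity setting to symmetric percentile tails are assumptions made for the example:

```python
import random

def decision_boundaries(values, sensitivity):
    """Map a 1-100 sensitivity setting to a lower and an upper percentile
    boundary. Values between the boundaries fall in the "normal" domain;
    values outside fall in the "anomalous" domain."""
    ordered = sorted(values)
    tail = sensitivity / 200.0                       # fraction excluded per tail
    lo = ordered[int(tail * (len(ordered) - 1))]
    hi = ordered[int((1.0 - tail) * (len(ordered) - 1))]
    return lo, hi

def classify(values, sensitivity):
    """Return True for each value deemed anomalous under the setting."""
    lo, hi = decision_boundaries(values, sensitivity)
    return [v < lo or v > hi for v in values]

random.seed(0)
signal = [random.gauss(50.0, 5.0) for _ in range(1000)]
flags = classify(signal, sensitivity=10)             # roughly 10% of points flagged
```

Raising the sensitivity narrows the band between the two boundaries, which is the sense in which a single setting yields both a lower and an upper decision boundary rather than one threshold line.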
Argument 4: Applicant argues on pages 12-13 of Applicant Arguments and Remarks "Nothing in Leverich compels the conclusion that selecting (or adjusting) a threshold based on percentiles of deviations necessarily yields (or requires) a "confidence interval associated with" the sensitivity parameter as recited - much less one that is carried forward as an explicit training input."

Response to Argument 4: Examiner respectfully disagrees. See response to Arguments 1-3 above and rejection below. Therefore, the Examiner has determined that this argument is not persuasive.

Argument 5: Applicant argues on pages 10-11 of Applicant Arguments and Remarks "the Office Action's mapping materially overstates what Shinde supplies with respect to the claimed "training interval" requirement, particularly when read in light of the Application's own description of the "training interval" in this claim set. For example, the Specification explains that the claimed training interval is a time interval that precedes the detection interval and "contains historical ... records relative to ... records contained in the detection interval[.]" Published Specification at 44. That is, the claimed training interval is determined using configuration attributes (including the detection interval) to define a historical period from which the first subset is drawn. By contrast, Shinde's "training interval" is presented as a forecasting hyperparameter (e.g., a lookback window used to construct training inputs and labels for utilization prediction), not as a configuration-attribute-driven determination of a training interval that precedes - and is defined relative to - a separately-configured "detection interval" from which "second records" are selected for anomaly classification in the manner claimed. See Shinde at 66-69 (training interval/prediction interval pseudocode and tuning)."

Response to Argument 5: Examiner respectfully disagrees.
As an initial matter, the Examiner would like to point out that the claims are interpreted in light of the specification; however, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The claim limitation requires "determining a training interval using the at least one configuration attribute, the training interval comprising historical records relative to the second records". Shinde [0044, 0066-0067, 0130] teaches this limitation by training anomaly detection models using historical data, where each data subset [e.g. second records] of the historical data contains each type of data included in the training data set for each interval in the associated sub-duration, and where a "training_interval", representing how many time units of the training data set are used as a basis for producing a prediction (e.g., use 10 minutes of historical data), is determined. Therefore, the Examiner has determined that this argument is not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-11, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Leverich (US Patent Application Publication No. US 20190065298 A1), in view of Mishra (US Patent Publication No. US 12079233 B1).

Regarding claim 1, Leverich teaches a computing system, comprising: at least one processor; and at least one memory device having processor-executable instructions stored thereon that, in response to execution by the at least one processor, cause the computing system to: (See Leverich [0019] "a system for performing continuous anomaly detection comprises at least one memory having instructions stored thereon and at least one processor configured to execute the instructions.")

send, to a server device of a plurality of server devices, a query received via a client device, wherein the query indicates a first user interface selection, via the client device, of the server device from among the plurality of server devices; receive, via the server device, a dataset comprising multiple records, wherein the server device determines the dataset based on the query; (See Leverich [0035-0041] "FIG. 1 illustrates a networked computer system 100 in which an embodiment may be implemented... one or more client devices 102 are coupled to one or more host devices 106 [i.e.
plurality of server devices] and a data intake and query system 108 via one or more networks 104... Each host device 106 may comprise, for example, one or more of a network device, a web server, an application server, a database server… The communication between a client device 102 and host application 114 may include sending various requests [Thus, send, to a server device of a plurality of server devices, a query received via a client device] and receiving data packets [e.g. dataset]... and the application server may respond with the requested content stored in one or more response packets. [Thus, receive, via the server device, a dataset comprising multiple records, wherein the server device determines the dataset based on the query]” See also Leverich Fig. 4, [0083-0085] "FIG. 4 is a flow diagram that illustrates an exemplary process that a search head and one or more indexers [e.g. a server of a plurality of server devices] may perform during a search query. At block 402, a search head receives a search query from a client... To determine which events are responsive to the query, the indexer searches for events that match the criteria [e.g. user selection] specified in the query. These criteria can include matching keywords or specific values for certain fields... At block 410, the search head combines the partial results and/or events [i.e. dataset comprising multiple records] received from the indexers to produce a final result for the query. This final result may comprise different types of data depending on what the query requested. [Thus, receive, via the server device, a dataset comprising multiple records, wherein the server device determines the dataset based on the query]" See also Leverich [0089] "The search head 210 allows users to search and visualize [Thus, having a user interface] event data extracted from raw machine data received from homogenous data sources... 
for processing a query.” See also Leverich [0093-0095, 0098] “the SPLUNK® ENTERPRISE platform provides various schemas, dashboards and visualizations… To facilitate this data retrieval process, SPLUNK® IT SERVICE INTELLIGENCE™ enables a user to define [e.g. first user interface selection] an IT operations infrastructure from the perspective of the services it provides… a service such as corporate e-mail may be defined in terms of the entities employed to provide the service, such as host machines [i.e. server devices] and network devices… One or more Key Performance Indicators (KPI's) are defined for a service… Each KPI [e.g. query] is defined by a search query that derives a KPI value from the machine data [e.g. of events associated with the entities [Thus, the query indicates a user interface selection (e.g. first user interface selection) of the server device] that provide the service… Entity definitions in SPLUNK® IT SERVICE INTELLIGENCE™ can also be created and updated by an import of tabular data (as represented in a CSV, another delimited file, or a search query result set [Thus, based on a user interface selection]). The import may be GUI-mediated or processed using import parameters from a GUI-based import definition process [Thus, a user interface selection (e.g. first user interface selection)].” access at least one configuration attribute, a first configuration attribute of the at least one configuration attribute being indicative of a detection interval, wherein accessing the at least one configuration attribute comprises receiving a second configuration attribute indicative of a second user interface selection, via the client device, of: a detection model from a group of defined detection models stored by the computing system, and one or more temporal effects for the detection model; See Leverich [0108-0110] "FIG. 5 illustrates an exemplary GUI 500… GUI 500 (as depicted in FIG. 
5) corresponds to a particular KPI… activation control 502 can be, for example, a button or any other such selectable element [i.e. access at least one configuration attribute] or interface item that, upon selection (e.g., by a user), enables and/or otherwise activates the various anomaly detection technologies described herein (e.g., with respect to a particular KPI or KPIs)... data window selector 504 and other user interface elements (not depicted) [e.g. first, second configuration attributes] can be presented to the user via GUI 500… data window selector 504 can enable the user to define a window (e.g., a duration, number of data points, or time range [e.g. temporal effects for the detection model]) of data [i.e. dataset] (e.g., KPI) to provide as a signal (e.g., a sequential set of data points) that will be used to perform anomaly detection [Thus, indicative of a detection interval]... by defining the data window that may be used for the anomaly detection definition associated with the particular data (e.g., KPI), the frequency with which data points may be provided to the signal (e.g., an interval between data points) may be determined... a selection input (e.g., a pull-down menu, radio button, etc.) may be provided to select [i.e. user interface selection] between available anomaly detection procedures [e.g. second configuration attribute]... performing any anomaly detection procedure [Thus, receiving a second configuration attribute indicative of a user interface selection] may provide an output such as an anomaly result. An anomaly result may be an output of the anomaly detection procedure, such as an anomaly value, an anomaly score" See also Leverich [0115] "anomaly scoring may be determined based on a variety of statistical techniques, such as behavior modeling (e.g., using models like Holt-Winters or ARIMA), statistical distribution p-testing, non-parametric distribution comparison (e.g., Kullback-Leibler), or non-parametric distance functions (e.g. 
L1/Manhattan distance) [Thus, detection model from a group of defined detection models stored by the computing system]" Thus, a selection input is provided to select [e.g. second user interface selection] between available anomaly detection procedures [i.e. second configuration attribute], and performing an anomaly detection procedure would result in an anomaly score, where the anomaly score is determined based on a detection model from a variety of detection models. Thus, receiving a second configuration attribute indicative of a user selection of a detection model from a group of defined detection models.) Examiner interprets Leverich's user-defined window, such as a duration, number of data points, or time range, as temporal effects for the detection model. However, Mishra teaches accessing the at least one configuration attribute comprises receiving a second configuration attribute indicative of a second user interface selection, via the client device, of: a detection model from a group of defined detection models stored by the computing system, and one or more temporal effects for the detection model more explicitly. (See Mishra Col. 1, lines 30-32 "Embodiments of the present technology are directed to facilitating performance of multiple seasonality online data decomposition." See also Mishra Col. 36, lines 4-14 "One conventional method for decomposing data into components is Seasonal and Trend decomposition using LOESS (STL)... anomaly detection algorithms can apply STL to decompose data for anomaly analysis." See also Mishra Col. 17, lines 40-50 "the data intake and query system 108 provides the user with the ability to produce reports (e.g., a table, chart, visualization, etc.) without having to enter SPL, SQL, or other query language terms into a search screen. Data models [e.g. detection model] are used as the basis for the search feature. Data models may be selected [e.g.
a second configuration attribute indicative of a second user interface selection] in a report generation interface. The report generator supports drag-and-drop organization of fields to be summarized in a report. When a model is selected, the fields with available extraction rules are made available for use in the report. The user may refine and/or filter search results to produce more precise reports." See also Mishra Col. 16, lines 38-42 "Examples of data models can include electronic mail, authentication, databases, intrusion detection, malware, application state, alerts, compute inventory, network sessions, network traffic, performance, audits, updates, vulnerabilities, etc. [e.g. a group of defined detection models stored by the computing system]" See Mishra Col. 57, lines 50-57 "a user of the client device 1904 may input a seasonality parameter [e.g. temporal effect for the detection model] for use in performing data decomposition. Further, data components, such as trend, variability, and/or residual may be provided to the client device, or other client device, for display to a user. In some cases, as described herein, the determined data components may be provided to a data analysis service to perform data analysis, such as anomaly detection" See also Mishra Col. 47, lines 13-17 "A window size for use in performing data decomposition may be based on a seasonality parameter, for example, input or designated by a user. A seasonality parameter generally refers to an indication of seasonality [e.g. temporal effect for the detection model] associated with a set of data (e.g., a time series data set)" See also Mishra Col. 62, lines 22-25 “The set of seasonality parameters indicate different seasonalities associated with a data set. In embodiments, the set of seasonality parameters may be provided via a user. 
For example, a user may recognize a weekly seasonality and a monthly seasonality associated with a dataset and provide a corresponding weekly seasonality parameter and monthly seasonality parameter." See also Mishra Claim 1 "obtaining a first seasonality parameter and a second seasonality parameter [e.g. one or more temporal effects for the detection model] based on a user selection via a user interface [e.g. user interface selection (second user interface selection)], the first seasonality parameter and the second seasonality parameter indicating different seasonalities associated with a time series data set;") It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Leverich, which allows users to select, among other user interface elements, anomaly detection procedures and duration and time range elements through a graphical user interface, to incorporate the teachings of Mishra, which allows users to select available anomaly detection models through a graphical user interface and to obtain multiple seasonality parameters based on a user selection via a user interface to determine data components used to perform data analysis, such as anomaly detection. One would be motivated to do so to effectively identify multiple seasonal components in the data that can provide a more accurate trend and/or residual component, as well as more accurate data forecasting, and thus improve the accuracy and efficiency of anomaly detection (Mishra Col. 37, lines 49-55).
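The seasonality-parameter mechanism attributed to Mishra above can be illustrated with a simplified sketch. This is not Mishra's STL-based method; it is a plain moving-average decomposition in which a user-supplied seasonality parameter sets the window size, and every name in it is hypothetical:

```python
def decompose(series, seasonality):
    """Decompose a series into trend, seasonal, and residual components.
    The trend is a centered moving average whose window length equals the
    user-supplied seasonality parameter; the seasonal component averages
    each phase of the cycle. A simplified stand-in for STL, for
    illustration only."""
    n, w = len(series), seasonality
    # Centered moving average (the window shrinks at the series edges).
    trend = []
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    detrended = [x - t for x, t in zip(series, trend)]
    # Average every w-th point to estimate the repeating seasonal shape.
    shape = [sum(detrended[p::w]) / len(detrended[p::w]) for p in range(w)]
    seasonal = [shape[i % w] for i in range(n)]
    residual = [d - s for d, s in zip(detrended, seasonal)]
    return trend, seasonal, residual

# A series with an exact weekly pattern decomposes almost entirely into
# trend + seasonal, leaving a near-zero residual away from the edges.
series = [i % 7 for i in range(70)]
trend, seasonal, residual = decompose(series, seasonality=7)
```

With an exactly periodic input, nearly all of the signal lands in the trend and seasonal components; anomaly analysis of the kind the references describe would then operate on the residual.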
Leverich, further in view of Mishra [hereinafter Leverich-Mishra], additionally discloses determine, based on an indication of a third user interface selection, via the client device, of a first visual element of a plurality of visual elements, a first sensitivity parameter of a plurality of sensitivity parameters, wherein each sensitivity parameter, of the plurality of sensitivity parameters, is associated with one of the plurality of visual elements and defines a respective first domain of record values that are normal and a respective second domain of record values that are anomalous according to a confidence interval associated with that sensitivity parameter; (See Leverich [0048] "each client device 102 may host or execute one or more client applications 110 that are capable of interacting with one or more host devices 106 via one or more networks 104. For instance, a client application 110 may be or comprise a web browser that a user may use to navigate to one or more websites or other resources provided by one or more host devices 106." See also Leverich [0113-0119] "the signal for each anomaly detection definition may be periodically analyzed as the data within the signal changes… anomaly scores may be calculated for the signal, and the anomaly scores may be compared to one or more thresholds in order to determine whether an anomaly has occurred… determination of anomaly thresholds, analysis of those thresholds, and determinations of alerts may be performed based on an analysis of the signal as well as user inputs such as sensitivity settings… a base threshold may be determined based on statistical techniques [e.g. confidence interval], such as by determining standard deviations between data points… the threshold may be adjusted based on a user input such as sensitivity. Anomalies may be detected based on comparison of anomaly scores with the anomaly threshold… GUI 600 may include search preview selector control 602, sensitivity setting indicator 604 [i.e.
a first visual element of a plurality of visual elements], sensitivity setting control 606, alert setting control 608, search preview window 610, and data selection 612 ...sensitivity setting control 606 can be, for example, a selectable element (e.g., a movable slider, a pull-down menu, or a numerical text input) or interface item that, upon selection (e.g., by a user [Thus, based on an indication of a user interface selection (e.g. third user interface selection)]) [Thus, based on a selection of at least one visual element via the client device], enables a user to select or define a setting that dictates the sensitivity (e.g., between '1,' corresponding to a relatively low sensitivity and '100,' corresponding to a relatively high sensitivity, the presently selected value of which is reflected in sensitivity setting indicator 604) [i.e. sensitivity parameter] with respect to which anomaly scores associated with data (e.g., KPI) values are to be identified as anomalies [Thus, according to a confidence interval associated with that sensitivity parameter]… all those anomaly scores that are above the 10th percentile with respect to their deviation from the non-anomalous data points [Thus, each sensitivity parameter, of the plurality of sensitivity parameters, defines a respective first domain of record values that are normal] within the signal would be identified as anomalies [Thus, a respective second domain of record values that are anomalous]… providing the referenced sensitivity setting control 606, the described technologies can enable a user to adjust the sensitivity setting (thereby setting a higher or lower error threshold with respect to which anomaly scores are or are not identified as anomalies)" Examiner notes that although Leverich does not explicitly use the exact phrase "confidence interval", it relies on statistical models (e.g. probability, distributions, deviation thresholds).
Such mechanisms inherently define a range of expected normal values and identify outliers. The Examiner interprets Leverich’s percentile thresholds as a statistically equivalent mechanism for implementing a sensitivity-controlled separation between normal and anomalous domains.) train, using a first subset of the multiple records, and based on: the one or more temporal effects, the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter, the detection model to determine presence or absence of anomalous records within the multiple records, (See Leverich [0105-0108, 0125-0126] “Anomaly Detection… upon considering a current data (e.g., KPI values) in view of various trend(s) identified/observed in prior data values [e.g. first subset of the multiple records] (e.g., training data such as historical KPI values [Thus, the one or more temporal effects, the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter]… the current value, may nevertheless reflect anomalous behavior/occurrences (in that the current data (e.g., KPI) value [Thus, to determine presence or absence of anomalous records within the multiple records]… feedback may originate from a multitude of sources (similar to the different sources of training data described herein)… the referenced feedback can be… received after an initial attempt has been made with respect to identifying anomalies, in other implementations the described technologies can be configured such that a training phase can first be initiated… Then, upon completing the referenced training phase [Thus, train the detection model], a detection phase can be initiated (e.g., by applying the referenced techniques to actual data (e.g., KPI) values, etc.).” Thus, Leverich’s KPI values are obtained based on selectable elements like window of duration or time range [e.g. one or more temporal effects] and sensitivity setting [e.g. 
first sensitivity parameter] which adjusts thresholds based on statistical techniques [e.g. confidence interval associated with the sensitivity parameter]. Therefore, by training using historical KPI values, it is training based on temporal effects, the sensitivity parameter, and the confidence interval associated with the sensitivity parameter. wherein the processor-executable instructions that cause the computing system to train the detection model further cause the computing system to generate, based on: the one or more temporal effects, the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter, a first decision boundary and a second decision boundary, wherein each one of the first decision boundary and the second decision boundary separates, according to the one or more temporal effects, the first sensitivity parameter, and the confidence interval, the first domain where values of records are deemed normal and the second domain where values of records are deemed anomalous; (See Leverich [0106-0108] “upon considering a current data (e.g., KPI values) in view of various trend(s) identified/observed in prior data values [e.g. 
first subset of the multiple records] (e.g., training data such as historical KPI values [Thus, the one or more temporal effects, the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter]… the current value, may nevertheless reflect anomalous behavior/occurrences (in that the current data (e.g., KPI) value [Thus, to determine presence or absence of anomalous records within the multiple records]… feedback may originate from a multitude of sources (similar to the different sources of training data described herein)… the referenced feedback can be… received after an initial attempt has been made with respect to identifying anomalies, in other implementations the described technologies can be configured such that a training phase can first be initiated… Then, upon completing the referenced training phase [Thus, train the detection model], a detection phase can be initiated [e.g. generate a first decision boundary and a second decision boundary] (e.g., by applying the referenced techniques to actual data (e.g., KPI) values, etc.).” See also Leverich [0117-0119] “FIG. 6 illustrates an exemplary GUI 600 in accordance with one or more embodiments of the present disclosure … GUI 600 may include search preview selector control 602, sensitivity setting indicator 604, sensitivity setting control 606, alert setting control 608, search preview window 610 [Thus, based on the data window (e.g. one or more temporal effects)], and data selection 612 ... 
sensitivity setting control 606 can be, for example, a selectable element (e.g., a movable slider, a pull-down menu, or a numerical text input) or interface item that, upon selection (e.g., by a user), enables a user to select or define a setting that dictates the sensitivity (e.g., between ‘1,’ corresponding to a relatively low sensitivity and ‘100,’ corresponding to a relatively high sensitivity, the presently selected value of which is reflected in sensitivity setting indicator 604) [Thus, based on the first sensitivity parameter, and the confidence interval associated with the first sensitivity parameter] …as described herein, an anomaly value may be calculated for some or all data points within a sequential set of data points of a signal that is used for anomaly detection [Thus, a detection model] …the sensitivity setting indicator may be utilized to establish thresholds for anomaly scores that will be identified as an anomaly. Accordingly, the referenced sensitivity setting can dictate/define an anomaly threshold which can be, for example, a threshold by which such deviations are to be considered/identified as anomalies. For example, a sensitivity setting of ‘10’ [i.e. a first sensitivity parameter] may correspond to the 10th percentile of the referenced deviations from historical anomaly scores. Accordingly, based on such a selection, all those anomaly scores that are above the 10th percentile with respect to their deviation from the non-anomalous data points [Thus, a first domain where values of records are deemed normal] within the signal would be identified as anomalies [Thus, a second domain of record values that are anomalous]… anomaly detection preview for a particular signal is depicted in search preview window 610, with anomaly score values scaled along the ordinate of the search preview window 610 and anomaly scores depicted versus time (e.g., along the abscissa of the search preview window 610). 
In an embodiment, a depiction of an anomaly score threshold and data points of the signal that fall outside of that threshold are depicted as a dashed line and as points above that dashed line, respectively)” Thus, Leverich generates a first decision boundary and a second decision boundary by separating those points above and below the dashed line/threshold, wherein each one of the first decision boundary and the second decision boundary separates, according to the one or more temporal effects, the first sensitivity parameter, and the confidence interval, the first domain where values of records are deemed normal (e.g. below the threshold) and the second domain where values of records are deemed anomalous (e.g. above the threshold).) select a second subset of the multiple records, the second subset comprising second records within the detection interval; and generate classification attributes for respective ones of the second records by applying the trained detection model to the second subset, wherein a first classification attribute of the classification attributes designates a first one of the second records as one of normal or anomalous. (See Leverich [0126, 0131-0132] “upon completing the referenced training phase, a detection phase can be initiated [Thus, applying the trained detection model] (e.g., by applying the referenced techniques to actual data (e.g., KPI) values, etc.)… anomaly detection procedures may be described in the context of FIG. 8… search query can be executed repeatedly, such as over a period of time and/or based on a frequency and/or a schedule [Thus, within the detection interval]. In doing so, values [e.g. second subset of the multiple records, second records] for a key performance indicator (KPI) can be produced.” See also Leverich [0135] “At block 808, zero or more of the values can be identified as anomalies [Thus, generate classification attributes for respective ones of the second records (e.g. 
designates a first one of the second records as one of anomalous)]. In certain implementations, such values can be identified as anomalies based on the settings for the anomaly detection definition, such as a sensitivity setting indicated by user input (e.g., via sensitivity setting control 606), a data window for the anomaly definition, the anomaly detection procedure for the anomaly definition, and any other suitable inputs.” Regarding claim 2, Leverich-Mishra teaches all limitations and motivations of claim 1, the at least one memory device having further processor-executable instructions stored thereon that in response to execution by the at least one processor further cause the computing system to cause the client device to present a graph representing a time series of a portion of the multiple records, the graph comprising one or more anomalous values of respective anomalous records. (See Leverich [0026] “events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time.” See also Leverich Fig. 7, [0122] “FIG. 7 illustrates an exemplary GUI 700 for monitoring in accordance with one or more embodiments of the present disclosure. In an embodiment, GUI 700 may include search preview window 610 (as described with respect to FIG. 6), KPI value graph 702 [i.e. graph representing a time series of a portion of the multiple records], anomaly point(s) 704 [i.e. one or more anomalous values of respective anomalous records], anomaly information 706, alert management control 708, and related data display 710. In the exemplary embodiment of FIG. 7, the values utilized for the ordinate of the search preview window may be data (e.g., KPI) values (compared to anomaly scores, as depicted in FIG. 6) such that the value graph is a KPI value graph 702 (compared to the anomaly score value graph of FIG. 6). 
KPI value graph 702 can be, for example, a graph that depicts or represents KPI values (e.g., ‘CPU usage’) over the chronological interval defined by search preview selector control 602 (e.g., the past 24 hours). It should be understood that, in certain implementations, the referenced chronological interval may be adjusted (e.g., zoomed-in, zoomed-out) by the user, e.g., at run time (such as by providing an input via search preview selector control 602).”) Regarding claim 3, Leverich-Mishra teaches all limitations and motivations of claim 1, wherein each server device of the plurality of server devices is associated with one of a plurality of databases, and wherein the query is associated with a first database of the plurality of databases. (See Leverich [0035-0040] "FIG. 1 illustrates a networked computer system 100 in which an embodiment may be implemented... one or more client devices 102 are coupled to one or more host devices 106 [i.e. server device of the plurality of server devices] and a data intake and query system 108 via one or more networks 104... Each host device 106 may comprise, for example, one or more of a network device, a web server, an application server, a database server [e.g. each server device of the plurality of server devices is associated with one of a plurality of databases], etc." See also Leverich [0060] "forwarders and indexers can comprise separate computer systems, or may alternatively comprise separate processes executing on one or more computer systems." See also Leverich [0081, 0084] “Each indexer 206 may be responsible for storing and searching a subset of the events contained in a corresponding data store 208 [i.e. 
a first database of the plurality of databases]… the indexers to which the query was distributed, search data stores associated with them for events that are responsive to the query [Thus, the query is associated with a first database of the plurality of databases]”) Regarding claim 4, Leverich-Mishra teaches all limitations and motivations of claim 1, wherein the one or more temporal effects comprise a monthly seasonality, a weekly seasonality, a daily seasonality. (See Mishra Col. 38, lines 56-60 “a user desiring to view or analyze a daily seasonality, or 24 hour seasonality, can input a seasonality parameter of 48. As another example, a user desiring to view or analyze a weekly seasonality can input a seasonality parameter of 336” See also Mishra Col. 62, lines 25-28 “a user may recognize a weekly seasonality and a monthly seasonality associated with a dataset and provide a corresponding weekly seasonality parameter and monthly seasonality parameter.”) Regarding claim 8, claim 8 recites all of the elements of claim 1 in method form rather than system form, and Leverich also discloses a method [0017]. Therefore, the supporting rationale of the rejection of claim 1 applies equally to those elements of claim 8. Regarding claim 9, claim 9 recites all of the elements of claim 2 in method form rather than system form, and Leverich also discloses a method [0017]. Therefore, the supporting rationale of the rejection of claim 2 applies equally to those elements of claim 9. Regarding claim 10, claim 10 recites all of the elements of claim 3 in method form rather than system form, and Leverich also discloses a method [0017]. Therefore, the supporting rationale of the rejection of claim 3 applies equally to those elements of claim 10. Regarding claim 11, claim 11 recites all of the elements of claim 4 in method form rather than system form, and Leverich also discloses a method [0017]. 
Therefore, the supporting rationale of the rejection of claim 4 applies equally to those elements of claim 11. Regarding claim 15, claim 15 recites all of the elements of claim 1 in computer-readable medium form rather than system form, and Leverich also discloses a computer-readable medium [0018]. Therefore, the supporting rationale of the rejection of claim 1 applies equally to those elements of claim 15. Regarding claim 16, claim 16 recites all of the elements of claim 2 in computer-readable medium form rather than system form, and Leverich also discloses a computer-readable medium [0018]. Therefore, the supporting rationale of the rejection of claim 2 applies equally to those elements of claim 16. Regarding claim 17, claim 17 recites all of the elements of claim 3 in computer-readable medium form rather than system form, and Leverich also discloses a computer-readable medium [0018]. Therefore, the supporting rationale of the rejection of claim 3 applies equally to those elements of claim 17. Regarding claim 18, claim 18 recites all of the elements of claim 4 in computer-readable medium form rather than system form, and Leverich also discloses a computer-readable medium [0018]. Therefore, the supporting rationale of the rejection of claim 4 applies equally to those elements of claim 18. Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Leverich-Mishra, in view of Gonzalez (US Patent Application Publication No. US 20220342868 A1). Regarding claim 5, Leverich-Mishra teaches all limitations and motivations of claim 1, wherein the detection model comprises a time-series model. (See Leverich: “machine data may be stored as timestamped events (each of which may include a segment of raw machine data) [e.g. time-series]. Such machine data may also be accessed according to a late-binding schema. 
Once a KPI is identified for a search query, and other data related to anomaly detection for a KPI is provided (e.g., a data window), a signal may be acquired for the KPI as described herein and an anomaly definition may be created.” See also Leverich [0110-0111] “performing any anomaly detection procedure may provide an output such as an anomaly result. An anomaly result may be an output of the anomaly detection procedure, such as an anomaly value, an anomaly score, an anomaly definition, or an anomaly alert… In an embodiment, a trending anomaly detection procedure may provide for analysis of anomalies within a single KPI over time, e.g., by comparing data points within a single signal for a single anomaly detection definition with other data points of the same signal… anomaly scoring may be determined based on a variety of statistical techniques, such as behavior modeling (e.g., using models like Holt-Winters or ARIMA), statistical distribution p-testing, non-parametric distribution comparison (e.g., Kullback-Leibler), or non-parametric distance functions (e.g. L1/Manhattan distance). [Thus, a time-series model]” However, Gonzalez teaches wherein the detection model comprises an isolation forest model, a time-series model. (See Gonzalez [0075] “machine learning model 602 may be trained to perform anomaly detection [i.e. detection model] using one or more unsupervised techniques. For example, an isolation forest algorithm [i.e. isolation forest model] or other decision tree algorithm may be used to configure machine learning model 602 based on a training dataset in an unsupervised manner.” See also Gonzalez [0083] “ At 714, model selection system 102 receives, from the anomaly detection model, one or more anomalies associated with the timeseries dataset [Thus, a time-series model]. 
For example, the model selection system may receive from machine learning model 602 timestamps 606 that may include one or more anomalies.”) It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Leverich-Mishra, which uses multiple detection models applied to time-series data, to incorporate the teachings of Gonzalez of training an anomaly detection model using unsupervised techniques to detect anomalies associated with a timeseries dataset. One would be motivated to do so to improve efficiency and resource utilization (Gonzalez [0152]). Regarding claim 12, claim 12 recites all of the elements of claim 5 in method form rather than system form; Leverich-Mishra, further in view of Gonzalez, teaches those elements as discussed above, and Leverich also discloses a method [0017]. Therefore, the supporting rationale of the rejection of claim 5 applies equally to those elements of claim 12. Claims 6, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Leverich-Mishra, in view of Shinde (US Patent Application Publication No. US 20200134423 A1). Regarding claim 6, Leverich-Mishra teaches all limitations and motivations of claim 1, wherein the processor-executable instructions that cause the computing system to train the detection model further cause the computing system to: determine a training interval using the at least one configuration attribute, the training interval comprising historical records relative to the second records; (See Leverich [0105-0108, 0125-0126] “Anomaly Detection… upon considering a current data (e.g., KPI values) in view of various trend(s) identified/observed in prior data values [e.g. 
second records] (e.g., training data such as historical KPI values [Thus, historical records relative to the second records]… the current value, may nevertheless reflect anomalous behavior/occurrences (in that the current data (e.g., KPI) value [Thus, to determine presence or absence of anomalous records within the multiple records]… feedback may originate from a multitude of sources (similar to the different sources of training data described herein)… the referenced feedback can be… received after an initial attempt has been made with respect to identifying anomalies, in other implementations the described technologies can be configured such that a training phase can first be initiated… Then, upon completing the referenced training phase [Thus, train the detection model], a detection phase can be initiated (e.g., by applying the referenced techniques to actual data (e.g., KPI) values, etc.)… the described technologies can be configured to switch between training and detection modes/phases (e.g., periodically [e.g. training interval], following some conditional trigger such as a string of negative user feedback, etc.).” Leverich-Mishra does not explicitly disclose determine a training interval using the at least one configuration attribute, the training interval comprising historical records relative to the second records; However, Shinde discloses determine a training interval using the at least one configuration attribute, the training interval comprising historical records relative to the second records more explicitly; (See Shinde [0130] “A computer system process executes a machine learning algorithm by executing software configured to cause execution of the algorithm.” See also Shinde [0098] "predictions from hierarchy of models 300 may also be used to detect anomalies in datacenter usage at any level of datacenter hardware, including at the switch or server device level. 
According to an embodiment, a potential anomaly (referred to herein as a “deviation event”)... from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300." See also Shinde [0044] "The historical utilization information used to train server utilization models" See also Shinde [0066-0067] "Each data subset [e.g. second records] of the historical utilization data contains each type of data included in the training data set for each interval in the associated sub-duration... The following pseudocode represents an example of how labeled data can be created for given value of “training_interval” [e.g. configuration attribute] and “prediction_interval”. Here “training_interval” represents how many time units of the training data set is used as a basis for producing a prediction (e.g., use 10 minutes of historical data [Thus, comprising historical records]), and “prediction_interval” represents how many time units after the training interval should a prediction represent (e.g., 1 minute in future).” [Shinde’s pseudocode, reproduced as an image in the original action, is omitted here.]) It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Leverich-Mishra, which can be configured to switch between training and detection modes/phases periodically or following some conditional trigger, to incorporate the teachings of Shinde of determining a training interval, used as a basis for producing a prediction, using the at least one configuration attribute, the training interval comprising historical records. One would be motivated to do so to help detect problems early, reducing rare failures and improving uptime (Shinde [0120]). Leverich-Mishra, further in view of Shinde, additionally discloses select the first subset, wherein the first subset comprises the historical records; and train, using the first subset, the detection model. 
(See Leverich [0105-0108, 0125-0126] “Anomaly Detection… upon considering a current data (e.g., KPI values) in view of various trend(s) identified/observed in prior data values [e.g. first subset of the multiple records] (e.g., training data such as historical KPI values [Thus, first subset comprises the historical records]… the current value, may nevertheless reflect anomalous behavior/occurrences (in that the current data (e.g., KPI) value [Thus, to determine presence or absence of anomalous records within the multiple records]… feedback may originate from a multitude of sources (similar to the different sources of training data described herein)… the referenced feedback can be… received after an initial attempt has been made with respect to identifying anomalies, in other implementations the described technologies can be configured such that a training phase can first be initiated… Then, upon completing the referenced training phase [Thus, train, using the first subset, the detection model], a detection phase can be initiated (e.g., by applying the referenced techniques to actual data (e.g., KPI) values, etc.).”) Regarding claim 13, Leverich-Mishra further in view of Shinde teaches all of the elements of claim 6 in system form rather than method form. Leverich also discloses a method [0017]. Therefore, the supporting rationale of the rejection to claim 6 applies equally as well to those elements of claim 13. Regarding claim 19, Leverich-Mishra further in view of Shinde teaches all of the elements of claim 6 in system form rather than computer-readable medium form. Leverich also discloses a computer-readable medium [0018]. Therefore, the supporting rationale of the rejection to claim 6 applies equally as well to those elements of claim 19. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to OSCAR WEHOVZ whose telephone number is (571)272-3362. 
The examiner can normally be reached 8:00am - 5:00pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, APU M MOFIZ can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /OSCAR WEHOVZ/Examiner, Art Unit 2161 /APU M MOFIZ/Supervisory Patent Examiner, Art Unit 2161
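As background for the isolation-forest algorithm that Gonzalez is cited for in the claim 5 rejection above, the core idea is that anomalous points are separated from the rest of the data by random splits in fewer steps than normal points. The following is a minimal one-dimensional sketch of that idea; it is not Gonzalez's (or Leverich's) implementation, and every name in it is hypothetical:

```python
import random


def isolation_depth(value, sample, depth=0, max_depth=12):
    """Number of random splits needed to separate `value` from `sample`."""
    if depth >= max_depth or len(sample) <= 1:
        return depth
    lo, hi = min(sample), max(sample)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # Keep only the points that fall on the same side of the split as `value`.
    side = [v for v in sample if (v < split) == (value < split)]
    return isolation_depth(value, side, depth + 1, max_depth)


def isolation_scores(data, n_trees=100, seed=0):
    """Mean isolation depth per point; a lower depth means more anomalous."""
    random.seed(seed)
    return [sum(isolation_depth(v, data) for _ in range(n_trees)) / n_trees
            for v in data]
```

Running this on, say, [1, 2, 3, 4, 5, 100] gives the outlier 100 a much smaller mean depth than the clustered values, which is the property an isolation forest exploits for unsupervised anomaly detection.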

Prosecution Timeline

Dec 09, 2021
Application Filed
Jan 06, 2023
Non-Final Rejection — §103
Apr 14, 2023
Response Filed
Jun 07, 2023
Final Rejection — §103
Sep 14, 2023
Request for Continued Examination
Sep 25, 2023
Response after Non-Final Action
Sep 29, 2023
Non-Final Rejection — §103
Feb 05, 2024
Response Filed
Apr 10, 2024
Final Rejection — §103
Jun 17, 2024
Response after Non-Final Action
Jul 15, 2024
Request for Continued Examination
Jul 18, 2024
Response after Non-Final Action
Aug 19, 2024
Non-Final Rejection — §103
Oct 25, 2024
Response Filed
Jan 02, 2025
Final Rejection — §103
Mar 03, 2025
Response after Non-Final Action
Apr 08, 2025
Request for Continued Examination
Apr 14, 2025
Response after Non-Final Action
Apr 23, 2025
Non-Final Rejection — §103
Jul 09, 2025
Response Filed
Sep 09, 2025
Final Rejection — §103
Nov 12, 2025
Response after Non-Final Action
Dec 10, 2025
Notice of Allowance
Dec 10, 2025
Response after Non-Final Action
Dec 30, 2025
Response after Non-Final Action
Feb 10, 2026
Response after Non-Final Action
Mar 01, 2026
Response after Non-Final Action
Mar 04, 2026
Request for Continued Examination
Mar 04, 2026
Response after Non-Final Action
Mar 12, 2026
Response after Non-Final Action
Mar 26, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602394
RESOURCE EFFICIENT FULL BOOTSTRAP
2y 5m to grant Granted Apr 14, 2026
Patent 12602442
MEDIA CONTENT PROCESSING METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12596759
INSIGHTS SERVICE FOR SEARCH ENGINE UTILIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591629
QUESTION ANSWERING USING ENTITY REFERENCES IN UNSTRUCTURED DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12566819
SYSTEMS AND METHODS FOR CLUSTERING ALGORITHMS FOR DATA ANALYSIS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
62%
Grant Probability
91%
With Interview (+28.3%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
