Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to Applicant’s communication filed on 12/15/2025.
Claims 1-20 are pending, of which claims 1, 9 and 15 are independent.
The provisional obviousness-type double patenting rejection set forth in the Non-Final Office Action is maintained because Applicant has not submitted a terminal disclaimer. However, Applicant’s statement at page 13 of the Remarks, "Applicant expressly reserves the right to file a terminal disclaimer should the Office maintain a non-provisional double patenting rejection after the claims of the reference application are in allowable form", is acknowledged and will be considered until the claims are in final form.
Response to Arguments
Applicant's arguments filed on 12/15/2025 have been fully considered but they are not persuasive.
Response to double patenting: As to pages 10-13, Applicant argues that "Claims 1, 8 and 15 have been amended and provisional double patenting rejection should be withdrawn on the basis of at least: claims are directed to different technical problem and production viability analysis is absent".
Examiner respectfully disagrees. As shown in the comparison table, the independent claim limitations of the instant application and the co-pending application are directed to a similar invention under an obviousness-type analysis. Although the conflicting claims are not identical, they are not patentably distinct from each other, because they are conceptually similar in scope and produce a similar end result of training machine learning models to detect anomalies in time series signals. Accordingly, the provisional nonstatutory double patenting rejection is maintained, and a terminal disclaimer is needed to overcome it before the claims are in final condition for allowance. Therefore, Applicant’s arguments are not persuasive.
Response to 35 U.S.C. § 103 rejections:
A) As to pages 14-15, Applicant argues that Shama/Baclawsk does not teach or suggest "separating a plurality of time series signals from individual sources into a plurality of alternative configurations of clusters based on correlations between the time series signals, wherein the alternative configurations of clusters differ by amount of individual clusters that the time series signals are separated into" as recited in claims 1, 9 and 15.
Examiner respectfully disagrees because Shama ([claim 1] “receiving time-series data of a single time-series - mixed seasonality type - segmenting the time-series data - form a plurality of time-series data segments each having a different subset of the data points in the single time-series” [claim 2] “time-series data identified - using a machine learning algorithm” [abstract] “processed to determine one or more patterns across the plurality of time-series data segments” [col 1-7] see Fig. 1-6, segmenting time-series data to form a plurality of time-series data segments each with a different subset of data points) and Baclawsk ([abstract] “receives a set of time-series signals gathered from sensors in the monitored asset - performs a pairwise differencing operation between actual values and the estimated values for the set of time-series signals - alarms exceeds a threshold value - incipient anomaly in the monitored asset, the system triggers an alert - updates the inferential model based on the time-series signals” [0001-20] see Fig. 1-10, set of time-series signals from a plurality of sensors, applying machine learning) in combination teach, or at least render obvious, the argued limitations under the 35 U.S.C. 103 rejection. Therefore, Applicant’s arguments are not persuasive.
B) As to pages 16-17, Applicant argues that Shama/Baclawsk does not teach or suggest "training machine learning models for the individual clusters in the alternative configurations of clusters" as recited in claims 1, 9 and 15.
Examiner respectfully disagrees because Shama ([col 1-7] “machine learning algorithm trained using a plurality of time-series - process an input time-series to infer - mixed seasonality - time-series data received for - defining patterns from the time-series data - mixed seasonality type accomplished by a machine learning algorithm that classifies the time-series data” [claim 2] “time-series data identified - using a machine learning algorithm” [abstract] “processed to determine one or more patterns across the plurality of time-series data segments” see Fig. 1-6, segmenting time-series data to form a plurality of time-series data segments each with a different subset of data; the machine learning algorithm as a model obviously includes alternative patterns/groups) and Baclawsk ([0041-84] “monitoring and applying ML to data - training new model requires considerable time and effort - pretrain a library of models” [0001-20] [abstract] see Fig. 1-10, set of time-series signals, applying machine learning to data at desired time intervals, generating estimated values for the set of time-series signals, monitoring and applying ML) in combination teach, or at least render obvious, the argued limitations under the 35 U.S.C. 103 rejection. Therefore, Applicant’s arguments are not persuasive.
C) As to pages 17-18, Applicant argues that Shama/Baclawsk does not teach or suggest "determining whether one or more of the alternative configurations of clusters is viable for use in a production environment based on whether the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters satisfy an accuracy threshold and a completion time threshold" as recited in claims 1, 9 and 15.
Examiner respectfully disagrees because Shama ([claim 1] “determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval - defined a normal range of data points in the pattern - based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 1-6, segmented time-series data having a different subset of data obviously includes alternative patterns/groups; the machine learning algorithm and trained ML are obviously based on threshold and accuracy; determine a plurality of data patterns, standard representative value for each sampled time segment point, normal range of data pattern, plurality of stored patterns, upper and lower bounds, defined confidence interval, detecting a plurality of anomalies in an input time-series based on a plurality of patterns, selecting a particular pattern) and Baclawsk ([abstract] “receives a set of time-series signals gathered from sensors in the monitored asset - performs a pairwise differencing operation between actual values and the estimated values for the set of time-series signals - alarms exceeds a threshold value - incipient anomaly in the monitored asset, the system triggers an alert - updates the inferential model based on the time-series signals” [0001-20] see Fig. 1-10, set of time-series signals from a plurality of sensors, applying machine learning, alarms exceed a threshold value) in combination teach, or at least render obvious, the argued limitations under the 35 U.S.C. 103 rejection. Therefore, Applicant’s arguments are not persuasive.
D) As to pages 19-20, Applicant argues that Shama/Baclawsk does not teach or suggest "selecting one configuration from the alternative configurations of clusters that were determined to be viable configurations; and deploying production machine learning models into the production environment to detect anomalies in the time series signals based on the selected configuration" as recited in claims 1, 9 and 15.
Examiner respectfully disagrees because Shama ([claim 1] “segmenting the time-series data - form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments” [claim 2] “using a machine learning algorithm” see Fig. 1-6, segmented time-series data having a different subset of data obviously includes alternative patterns/groups; selecting a particular pattern from a stored plurality of patterns, machine learning algorithm, standard representative value for each sampled time segment point, normal range of data pattern, plurality of stored patterns, upper and lower bounds, defined confidence interval, detecting a plurality of anomalies in an input time-series based on a plurality of patterns, performing pattern matching, detecting an anomaly being out of the range obviously includes in range or out of range) and Baclawsk ([abstract] “receives a set of time-series signals gathered from sensors in the monitored asset - performs a pairwise differencing operation between actual values and the estimated values for the set of time-series signals - alarms exceeds a threshold value - incipient anomaly in the monitored asset, the system triggers an alert” [0001-20] see Fig. 1-10, set of time-series signals from a plurality of sensors, applying machine learning) in combination teach, or at least render obvious, the argued limitations under the 35 U.S.C. 103 rejection. Therefore, Applicant’s arguments are not persuasive.
Multiple Filed Related Applications
Applicant has filed multiple related applications. To date, some of the related applications have been allowed or are under a Notice of Allowance, and some related applications remain pending and are yet to be examined. Because there is a plurality of co-pending related applications, double patenting is proper. See MPEP 804 and 1490(VI)(D):
Double Patenting
37 CFR 1.78(b) provides that when two or more applications filed by the same applicant contain conflicting claims, elimination of such claims from all but one application may be required in the absence of good and sufficient reason for their retention during pendency in more than one application. Applicant is required to either cancel the conflicting claims from all but one application or maintain a clear line of demarcation between the applications. See MPEP § 822.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. See MPEP § 804 and 1490(VI)(D).
Claims 1, 9 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8 and 15 of copending U.S. Patent Application No. 18/133047 (USPGPub. No. 20240346361 A1). This is a provisional double patenting rejection because the patentably indistinct claims have not in fact been patented.
The instant application and the co-pending application claim similar subject matter, as shown below:
Instant Application No. 18/203,771
Title: PROGNOSTICS ACCELERATION FOR MACHINE LEARNING ANOMALY DETECTION
Claim 1. A computer-implemented method, comprising:
separating a plurality of time series signals from individual sources into a plurality of alternative configurations of clusters based on correlations between the time series signals, wherein the alternative configurations of clusters differ by amount of individual clusters that the time series signals are separated into;
training machine learning models for the individual clusters in the alternative configurations of clusters;
determining whether one or more of the alternative configurations of clusters is viable for use in a production environment based on whether the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters satisfy an accuracy threshold and a completion time threshold;
selecting one configuration from the alternative configurations of clusters that were determined to be viable configurations; and
deploying production machine learning models into the production environment to detect anomalies in the time series signals based on the selected configuration.

US Application No. 18/133047 (USPGPub. No. 20240346361 A1)
Title: AUTOMATIC SIGNAL CLUSTERING WITH AMBIENT SIGNALS FOR ML ANOMALY DETECTION
Claim 1. A computer-implemented method, comprising:
receiving time series signals associated with a plurality of machines, wherein the time series signals are unlabeled as to which of the machines the time series signals are associated with; automatically determining from the time series signals a plurality of clusters that correspond to the plurality of the machines and separating the time series signals into the plurality of clusters, wherein one cluster of the clusters corresponds to one machine of the plurality of machines and includes the time series signals that are associated with the one machine of the plurality of machines;
identifying a group of ambient time series signals that overlaps more than one of the clusters;
adding the group of the ambient time series signals into the one cluster of the clusters that corresponds to the one machine; and
training a machine learning model to detect an anomaly based on the one cluster to generate a trained machine learning model that is specific to the one machine without using the time series signals not included in the one cluster.
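For illustration of the clustering step recited in claim 1 of the instant application, the following sketch shows one conventional way to separate correlated time series signals into alternative configurations of clusters that differ by the number of clusters, using average-linkage agglomerative clustering on a correlation distance. This sketch is hypothetical and is not drawn from the disclosure of either application; all names (e.g., `make_configurations`) are illustrative only.

```python
import numpy as np

def make_configurations(signals, cluster_counts):
    """Separate time series signals into alternative configurations of
    clusters (one configuration per requested cluster count), based on
    pairwise correlation between the signals.

    signals: ndarray of shape (n_signals, n_samples)
    cluster_counts: collection of desired numbers of clusters
    Returns {cluster_count: label array of length n_signals}.
    """
    n = len(signals)
    dist = 1.0 - np.corrcoef(signals)      # correlation -> distance
    clusters = [[i] for i in range(n)]     # start: one signal per cluster
    configs = {}
    while clusters:
        k = len(clusters)
        if k in cluster_counts:            # snapshot this configuration
            labels = np.empty(n, dtype=int)
            for lab, members in enumerate(clusters):
                labels[members] = lab
            configs[k] = labels
        if k == 1:
            break
        # merge the pair of clusters with the smallest average distance
        best = None
        for a in range(k):
            for b in range(a + 1, k):
                d = dist[np.ix_(clusters[a], clusters[b])].mean()
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return configs
```

Each snapshot is one "alternative configuration of clusters", and the configurations differ only by the amount of individual clusters, matching the claim wording.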
Claims 2-20 are also provisionally rejected as obvious over claims 1-20 of co-pending U.S. Patent Application No. 18/133047 (USPGPub. No. 20240346361 A1).
Although the conflicting claims are not identical, they are not patentably distinct from each other (as shown in the comparison table) because they are conceptually similar in scope. For example, the limitation “separating time series signals into a plurality of alternative configurations of clusters based on correlations between the time series signals, wherein the alternative configurations of clusters differ by amount of individual clusters that the time series signals are separated” of the instant application is substantially equivalent to the limitation “automatically determining from the time series signals a plurality of clusters that correspond to the plurality of the machines and separating the time series signals into the plurality of clusters, wherein one cluster of the clusters corresponds to one machine of the plurality of machines” of the co-pending application. The claims use similar limitations and produce the similar end result of training machine learning models to detect anomalies in the time series signals.
It therefore would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify or omit the additional elements of claims 1, 8 and 15 of the co-pending application to arrive at claims 1, 9 and 15 of the instant application, because the resulting claims would perform similar functions as before.
This is a provisional obviousness-type nonstatutory double patenting rejection because the patentably indistinct claims have not yet been patented. See MPEP 804 and 1490(VI)(D).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Shama, et al. (USP No. 11768915 B1) in view of Baclawsk, et al. (USPGPub No. 20210158202 A1).
As to claims 1, 9 and 15, Shama discloses (Currently Amended) A computer-implemented method (Shama [abstract] “computer program provided for anomaly detection in time-series data with mixed seasonality - time-series data is segmented, by a defined unit of time, to form a plurality of time-series data segments - processed to determine one or more patterns across the plurality of time-series data segments - patterns are stored and used to perform pattern matching for an input time-series” see Fig. 1-6), comprising:
separating a plurality of time series signals from individual sources into a plurality of alternative configurations of clusters based on correlations between the time series signals, wherein the alternative configurations of clusters differ by amount of individual clusters that the time series signals are separated into (Shama [claim 1] “receiving time-series data of a single time-series - mixed seasonality type - segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series” [abstract] “processed to determine one or more patterns across the plurality of time-series data segments” [col 1-7] see Fig. 1-6, segmenting time-series data, form a plurality of time-series data segments different subset of data points obviously provides separating time series signals into a plurality of alternative configurations of clusters based on correlations between the time series signals - clusters differ by amount of individual clusters that the time series signals separated);
training machine learning models for the individual clusters in the alternative configurations of clusters (Shama [col 1-7] “machine learning algorithm trained using a plurality of time-series - process an input time-series to infer - mixed seasonality - time-series data received for - defining patterns from the time-series data - mixed seasonality type accomplished by a machine learning algorithm that classifies the time-series data” [claim 2] “time-series data identified as being of the mixed seasonality type, using a machine learning algorithm” [abstract] “processed to determine one or more patterns across the plurality of time-series data segments” see Fig. 1-6);
determining whether one or more of the alternative configurations of clusters is viable for use in a production environment based on whether the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters satisfy an accuracy threshold and a completion time threshold;
selecting one configuration from the alternative configurations of clusters that were determined to be viable configurations; and deploying production machine learning models into the production environment to detect anomalies in the time series signals based on the selected configuration (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 1-6, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, processing plurality of time-series data segments, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides determining plurality of clusters viable for use in a production environment based on whether the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters satisfy an accuracy threshold and a completion time threshold; selecting one configuration from the alternative configurations of clusters that were determined to be viable configurations; and deploying production machine learning models into the production environment to detect anomalies in the time series signals based on selected configuration).
Baclawsk discloses trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters satisfy an accuracy threshold and a completion time threshold (Baclawsk [0041-84] “perform anomaly discovery for evolving tasks - monitoring and applying ML to data using an RDBMS scheduler at desired time intervals - training new model requires considerable time and effort - pretrain a library of models retrieved quickly when circumstances change abruptly - improves response time - rapid response time critical” [0001-20] “performs prognostic-surveillance operations based on an inferential model that dynamically adapts to evolving operational characteristics of a monitored asset - uses an inferential model to generate estimated values for the set of time-series signals - incipient anomaly in the monitored asset, the system triggers an alert” [0048-84] [abstract] see Fig. 1-10, generate estimated values for the set of time-series signals, monitoring and applying ML to data at desired time intervals, improves response time and rapid response time critical obviously provides satisfy accuracy threshold and completion time threshold).
Shama and Baclawsk are analogous art from the same field of endeavor, contain overlapping structural and functional similarities, and are both directed to anomaly detection in time series signals.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the functionality of Shama, in which the trained models for the clusters satisfy an accuracy threshold and a completion time threshold, by incorporating the monitoring and applying of ML to data at desired time intervals, which improves response time where rapid response is critical, as taught by Baclawsk.
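For illustration only, the claimed viability determination (every trained model for every cluster in a configuration must satisfy both an accuracy threshold and a completion time threshold) can be sketched as follows. This is a hypothetical sketch, not drawn from the record; the names (`viable_configurations`, the `results` dictionary shape) are assumptions for illustration.

```python
def viable_configurations(results, accuracy_threshold, completion_time_threshold):
    """Keep only the cluster configurations whose per-cluster trained
    models all meet the accuracy floor and the completion-time ceiling.

    results: {config_id: [{"accuracy": float, "completion_time": float}, ...]}
    Returns the list of viable configuration ids.
    """
    return [
        config_id
        for config_id, cluster_models in results.items()
        if all(m["accuracy"] >= accuracy_threshold
               and m["completion_time"] <= completion_time_threshold
               for m in cluster_models)
    ]
```

Selecting one configuration from the returned viable set and deploying its models would then correspond to the final "selecting" and "deploying" steps of claim 1.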
As to claims 2, 10 and 16, the combination of Shama and Baclawsk discloses all the limitations of the base claims as outlined above.
The combination further discloses The computer-implemented method of claim 1, wherein separating time series signals into a plurality of alternative configurations of clusters based on correlations between the time series signals further comprises a clustering algorithm to separate the time series signals into specified amounts of individual clusters (Shama [claim 1] “receiving time-series data of a single time-series - mixed seasonality type - segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series” [abstract] “processed to determine one or more patterns across the plurality of time-series data segments” see Fig. 1-6, segmenting time-series data, form a plurality of time-series data segments different subset of data points obviously provides separating time series signals into a plurality of alternative configurations of clusters based on correlations between the time series signals further comprises a clustering algorithm to separate the time series signals into specified amounts of individual clusters).
As to claims 3, 11 and 17, the combination of Shama and Baclawsk discloses all the limitations of the base claims as outlined above.
The combination further discloses The computer-implemented method of claim 1, further comprising: receiving a configuration of hardware in the production environment; and in response to the trained machine learning models for the individual clusters in one of the alternative configurations of clusters satisfying the accuracy threshold:
simulating the execution by the hardware in the configuration of one of the trained machine learning models that is trained for a largest cluster that is in the one of the alternative configurations, determining a completion time for the one of the alternative configurations of clusters based on the simulated execution, comparing the completion time to the completion time threshold to determine whether the trained machine learning models for the individual clusters that are in the one of the alternative configurations of clusters satisfy the completion time threshold (Shama [claim 1] “processor to perform - receiving time-series data of a single time-series - mixed seasonality type by having portions repeat on a time-wise basis - segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series 
data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 1-6, using machine learning algorithm, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides the limitation).
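The mechanism quoted from Shama — folding a seasonal time series into segments, deriving a representative value with lower and upper confidence bounds per segment position, and flagging points that fall outside that normal range — can be sketched in a minimal, purely illustrative form. The function names, the period-based segmentation, and the mean ± 2σ bounds below are assumptions for illustration only, not the reference's actual implementation:

```python
import statistics

def learn_pattern(series, period):
    """Fold a time series into segments of length `period` and compute,
    for each segment position, a representative value plus lower/upper
    bounds (here mean +/- 2 standard deviations as an assumed interval)."""
    segments = [series[i:i + period]
                for i in range(0, len(series) - period + 1, period)]
    pattern = []
    for values in zip(*segments):
        mean = statistics.mean(values)
        sd = statistics.stdev(values) if len(values) > 1 else 0.0
        pattern.append((mean, mean - 2 * sd, mean + 2 * sd))
    return pattern

def detect_anomalies(series, pattern):
    """Flag indices whose value falls outside the predicted normal range."""
    period = len(pattern)
    return [i for i, x in enumerate(series)
            if not (pattern[i % period][1] <= x <= pattern[i % period][2])]
```

Under this sketch, a point within an input time series that lies outside the range predicted from the stored pattern is reported as an anomaly, mirroring the "out of the range" detection quoted above.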
As to claim 4, the combination of Shama and Baclawsk discloses all the limitations of the base claim as outlined above.
The combination further discloses The computer-implemented method of claim 1, further comprising: executing the trained machine learning models for the individual clusters in one of the alternative configurations to determine accuracy levels of the trained machine learning models for the individual clusters that are in the one of the alternative configurations; and comparing the accuracy levels against the accuracy threshold to determine whether the trained machine learning models for the individual clusters that are in the one of the alternative configurations of clusters satisfy the accuracy threshold (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series 
data identified - using a machine learning algorithm” see Fig. 1-6, using machine learning algorithm, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides executing the trained machine learning models for the individual clusters in one of the alternative configurations to determine accuracy levels of the trained machine learning models for the individual clusters that are in the one of the alternative configurations; and comparing the accuracy levels against the accuracy threshold to determine whether the trained machine learning models for the individual clusters that are in the one of the alternative configurations of clusters satisfy the accuracy threshold).
As to claim 5, the combination of Shama and Baclawsk discloses all the limitations of the base claim as outlined above.
The combination further discloses The computer-implemented method of claim 1, wherein the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters are initially evaluated against the accuracy threshold, and a subset of the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters that have satisfied the accuracy threshold are subsequently evaluated against the completion time threshold (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 
1-6, using machine learning algorithm, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters are initially evaluated against the accuracy threshold, and a subset of the trained machine learning models for the individual clusters in the one or more of the alternative configurations of clusters that have satisfied the accuracy threshold are subsequently evaluated against the completion time threshold).
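The two-stage screen recited in claim 5 — evaluating cluster configurations first against an accuracy threshold and then evaluating only the survivors against the completion time threshold — can be expressed as a short illustrative sketch. The data shape (per-cluster accuracy/completion-time pairs keyed by configuration name) and the function name are hypothetical conveniences, not language from either reference:

```python
def viable_configurations(configs, accuracy_threshold, time_threshold):
    """Two-stage screen: keep configurations whose per-cluster models all
    satisfy the accuracy threshold, then evaluate only those survivors
    against the completion time threshold.
    `configs` maps a configuration name to a list of
    (accuracy, completion_time) tuples, one per cluster."""
    # Stage 1: accuracy screen over every cluster in each configuration.
    accurate = {name: clusters for name, clusters in configs.items()
                if all(acc >= accuracy_threshold for acc, _ in clusters)}
    # Stage 2: completion-time screen applied only to accuracy survivors.
    return [name for name, clusters in accurate.items()
            if all(t <= time_threshold for _, t in clusters)]
```

A configuration that passes the accuracy screen but whose largest cluster exceeds the completion time budget is dropped at the second stage, consistent with the ordering claimed.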
As to claims 6, 12 and 18, the combination of Shama and Baclawsk discloses all the limitations of the base claims as outlined above.
The combination further discloses The computer-implemented method of claim 1, wherein the completion time threshold is determined to be satisfied based on (i) a training time taken to train one of the machine learning models, or (ii) a monitoring time taken to generate estimates of what signal values should be by one of the trained machine learning models (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 
1-6, using machine learning algorithm, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides completion time threshold is determined to be satisfied based on (i) a training time taken to train one of the machine learning models, or (ii) a monitoring time taken to generate estimates of what signal values should be by one of the trained machine learning models).
As to claims 7, 13 and 19, the combination of Shama and Baclawsk discloses all the limitations of the base claims as outlined above.
The combination further discloses The computer-implemented method of claim 1, further comprising: monitoring the selected configuration of clusters for the anomalies with the trained production machine learning models; and in response to detecting a particular anomaly in the time series signals with the trained production machine learning models, (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [claim 2] “time-series data identified - using a machine learning algorithm” see Fig. 
1-6, using machine learning algorithm, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides monitoring the selected configuration of clusters for the anomalies with the trained production machine learning models; and in response to detecting a particular anomaly in the time series signals with the trained production machine learning models).
Baclawsk further discloses generating an electronic alert that the particular anomaly has occurred (Baclawsk [0001-20] “performs prognostic-surveillance operations based on an inferential model that dynamically adapts to evolving operational characteristics of a monitored asset - receives a set of time-series signals gathered from sensors in the monitored asset - uses an inferential model to generate estimated values for the set of time-series signals - performs a pairwise differencing operation between actual values and the estimated values for the set of time-series signals - when tripping frequency of the SPRT alarms exceeds a threshold value - indicative of an incipient anomaly in the monitored asset, the system triggers an alert - system incrementally updates the inferential model based on the time-series signals” [0048-84] [abstract] see Fig. 1-10, generate estimated values for the set of time-series signals, exceeds threshold value, system triggers alert obviously provides generating an electronic alert that the particular anomaly has occurred).
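The alerting behavior quoted from Baclawsk — pairwise differencing between actual and estimated signal values, with an alert triggered when the tripping frequency of the alarms exceeds a threshold value — can be sketched in simplified form. The sliding-window frequency test and all parameter names below are illustrative assumptions; the reference's actual SPRT machinery is more involved:

```python
def tripping_frequency_alert(residuals, alarm_limit, window, freq_threshold):
    """Illustrative sketch: raise an alarm wherever the residual between
    actual and estimated values exceeds `alarm_limit`, then trigger an
    alert once the fraction of alarms within a sliding `window` exceeds
    `freq_threshold`. Returns (alerted, sample_index)."""
    alarms = [abs(r) > alarm_limit for r in residuals]
    for i in range(window, len(alarms) + 1):
        if sum(alarms[i - window:i]) / window > freq_threshold:
            return True, i  # alert triggered at sample i
    return False, None
```

Isolated alarms from noisy residuals do not trigger the alert; only a sustained elevation of the tripping frequency does, which is the incipient-anomaly condition described in the quoted passage.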
As to claims 8, 14 and 20, the combination of Shama and Baclawsk discloses all the limitations of the base claims as outlined above.
The combination further discloses The computer-implemented method of claim 1, wherein selecting one configuration from the alternative configurations of clusters that were determined to be viable further comprises automatically selecting the one configuration based on accuracy levels of the trained machine learning models that were trained for the individual clusters in the one configuration (Shama [claim 1] “segmenting the time-series data - to form a plurality of time-series data segments each having a different subset of the data points in the single time-series - processing the plurality of time-series data segments to determine one or more patterns across the plurality of time-series data segments - standard representative value for each sampled time segment point, a lower bound of data in the pattern for a defined confidence interval, and an upper bound of data in the pattern for the defined confidence interval, wherein the lower bound and the upper bond defined a normal range of data points in the pattern; and detecting one or more anomalies in an input time-series, based on the stored one or more patterns - selecting a particular pattern of the stored one or more patterns - predicting a range of upcoming normal values for the input time-series - range is predicted as characterized in the particular pattern - detecting an anomaly for the segment based on a data point within the input-time series being out of the range” [abstract] “anomaly detection in time-series data with mixed seasonality - time-series data is segmented - form a plurality of time-series data segments - determine one or more patterns across the plurality of time-series data segments - stored and used to perform pattern matching for an input time-series” [col 1-7] “machine learning algorithm trained using a plurality of time-series - process an input time-series to infer - mixed seasonality - time-series data received for - defining patterns from the time-series data - mixed seasonality type 
accomplished by a machine learning algorithm that classifies the time-series data” see Fig. 1-6, segmenting time-series data to form plurality of time-series data segments each having a different subset of data points, processing plurality of time-series data segments, determine plurality data patterns, standard representative value for each sampled time segment point, normal range of data points pattern, detecting plurality anomalies in an input time-series based on plurality of patterns, selecting particular pattern, predicting a range of upcoming normal values for the input time-series predicted as characterized in particular pattern, perform pattern matching for an input time-series, detecting anomaly for the segment based on data point within the input-time series being out of the range obviously provides selecting one configuration from the alternative configurations of clusters that were determined to be viable further comprises automatically selecting the one configuration based on accuracy levels of the trained machine learning models that were trained for the individual clusters in the one configuration).
Citation of Pertinent Prior Art
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2141.02 VI, PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, i.e., as a whole, and MPEP 2123.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Gross, et al., USPGPub No. 2019/0163719 A1, discloses a technique for monitoring the operational health of electrical power plants and transmission and distribution grids by tuning sequential probability ratio test (SPRT) parameters to facilitate prognostic surveillance of non-Gaussian sensor data.
Palani, et al. USPGPub No. 2021/0344695 A1 discloses a method for automated anomaly detection comprising training an ensemble of deep learning models using clustered time series training data from numerous components in an Information Technology infrastructure.
Pan, et al., USPGPub No. 2024/0346389 A1, discloses a method for time series forecasting using ensemble machine learning that instantiates and trains a plurality of machine learning models.
Fang, et al., USPGPub No. 2022/0237468 A1, discloses a method providing a machine learning model that exploits long time dependency in time-series sequences and performs end-to-end learning of dimension reduction and clustering, or trains on long time-series sequences with low computation.
Gross, et al., USPGPub No. 2019/0121714 A1, discloses a technique for performing prognostic-surveillance operations on sensor data using a hybrid clustering-partitioning technique that optimizes accuracy to facilitate proactive anomaly detection based on sensor data.
Wang, et al., USPGPub No. 2019/0378022 A1, discloses a technique that uses missing-value imputation to facilitate prognostics-analysis operations on received time-series sensor data under surveillance.
Gross, et al., USPGPub No. 2020/0151618 A1, discloses a technique for performing prognostic-surveillance operations on sensor data using a hybrid clustering-partitioning technique that optimizes accuracy to facilitate proactive anomaly detection based on sensor data.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Md Azad, whose telephone number is (571) 272-0553 and whose email is md.azad@uspto.gov. The examiner can normally be reached Mon-Thu 9AM-5PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mohammad Ali can be reached on (571)272-4105. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Md Azad/
Primary Examiner, Art Unit 2119