DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites A computer-implemented method for generating segmented multivariate time series data comprising:. The claim recites a method. A method is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under broadest reasonable interpretation, covers a mental process or mathematical concept but for the recitation of generic computer components:
grouping portions of multivariate time series data by a window size to generate windowed subsequences of the multivariate time series data; (i.e., the broadest reasonable interpretation includes a step of evaluation and judgement that could be performed mentally or with pen and paper, like separating a dataset into multiple parts, and is therefore a mental process of evaluation/judgement (MPEP 2106)).
generating graph objects from the windowed subsequences… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like creating nodes and edges, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
determining one or more segmentation timestamps when one or more segment changes in the multivariate time series data occurred based on comparing the graph objects… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like determining time changes based on differences between graph elements, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps. (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like separating a dataset using timestamps, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
The claim limitations, under their broadest reasonable interpretation, cover activities classified under Mental processes: concepts performed in the human mind (including observation, evaluation, judgement, or opinion) (see MPEP 2106.04(a)(2), subsection (III)), or Mathematical concepts: mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2), subsection (I)). Accordingly, the claim recites an abstract idea.
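For illustration only, the four recited steps admit a compact sketch. The per-window correlation matrix below is a hypothetical stand-in for the claimed sparse graph recovery model, and the mean absolute difference a stand-in for the similarity model; all names are illustrative, not claim language:

```python
import numpy as np

def segment_multivariate_series(data, window_size, threshold):
    """Illustrative sketch only; the models and names are hypothetical
    stand-ins, not the claimed sparse graph recovery/similarity models."""
    # Group portions of the series by a window size into windowed subsequences.
    windows = [data[i:i + window_size]
               for i in range(0, len(data) - window_size + 1, window_size)]
    # Generate a "graph object" per window; a correlation matrix serves
    # here as a simple stand-in for a recovered sparse graph.
    graphs = [np.corrcoef(w, rowvar=False) for w in windows]
    # Determine segmentation timestamps by comparing consecutive graph
    # objects; a change larger than the threshold marks a segment change.
    timestamps = [(i + 1) * window_size
                  for i in range(len(graphs) - 1)
                  if np.abs(graphs[i + 1] - graphs[i]).mean() > threshold]
    # Generate the segmented multivariate time series.
    segments = np.split(data, timestamps)
    return timestamps, segments
```

On synthetic data whose variable correlation flips sign halfway through, this sketch recovers the change point at the window boundary nearest the flip.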
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
…utilizing a sparse graph recovery model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
…utilizing a similarity model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
Since the claim does not contain any other additional elements that amount to integration into a practical application, the claim is directed to an abstract idea.
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:
Limitation (V), under the broadest reasonable interpretation, merely recites steps that apply a generic machine learning model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Similarly, limitation (VI), under the broadest reasonable interpretation, merely recites steps that apply a generic similarity model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 2, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 2 recites further comprising: comparing a first graph object to a second graph object utilizing the similarity model to determine that a difference between the first graph object and the second graph object satisfies a difference threshold; and determining a first segmentation timestamp based on a segmentation timestamp of the first graph object. Under the broadest reasonable interpretation, the limitations recite comparing a similarity level between two graph elements and applying a timestamp to the first graph element, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 2 does not solve the deficiencies of claim 1.
Regarding claim 3, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 3 recites wherein generating the graph objects from the windowed subsequences includes generating a visual graph of nodes and edges, where the edges indicate a positive or a negative partial correlation between connected nodes. Under the broadest reasonable interpretation, the limitations recite creating graphs with labeled edges, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 3 does not solve the deficiencies of claim 1.
Regarding claim 4, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 4 recites wherein generating the graph objects from the windowed subsequences includes generating an adjacency matrix indicating partial correlations between corresponding nodes and edges between two graph objects. Under the broadest reasonable interpretation, the limitations recite creating a lookup table for edges between graph nodes, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 4 does not solve the deficiencies of claim 1.
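As context for the adjacency-matrix limitation, partial correlations are conventionally read off the inverse covariance (precision) matrix. A minimal sketch, with an assumed sparsity tolerance and hypothetical naming (this is a conventional computation, not the claimed model):

```python
import numpy as np

def partial_correlation_adjacency(window, tol=0.05):
    """Sketch only: adjacency matrix of partial correlations for one
    windowed subsequence of shape (n_samples, n_variables)."""
    # The precision (inverse covariance) matrix encodes the conditional
    # independence structure between the variables.
    precision = np.linalg.inv(np.cov(window, rowvar=False))
    d = np.sqrt(np.diag(precision))
    # Standard identity: partial correlation rho_ij = -P_ij / sqrt(P_ii * P_jj).
    partial = -precision / np.outer(d, d)
    np.fill_diagonal(partial, 1.0)
    # Entries near zero are dropped, leaving signed (positive or negative)
    # partial-correlation edges between connected nodes.
    return np.where(np.abs(partial) > tol, partial, 0.0)
```

The sign of each retained entry corresponds to the positive or negative partial correlation recited for the edges in claim 3.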
Regarding claim 5, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 5 recites further comprising generating multiple graph objects from the windowed subsequences at a same time as a batch operation that utilizes one instance of the sparse graph recovery model and shared parameters. Under the broadest reasonable interpretation, the limitations merely recite steps that apply a generic machine learning model in batch operations to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Therefore, claim 5 does not solve the deficiencies of claim 1.
Regarding claim 6, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 6 recites wherein generating the graph objects from the windowed subsequences includes utilizing a conditional independence sparse graph recovery model that generates graph objects that exhibit partial correlation between variables. Under the broadest reasonable interpretation, the limitations merely recite steps that apply a generic conditional independence sparse graph recovery model to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Therefore, claim 6 does not solve the deficiencies of claim 1.
Regarding claim 7, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 7 recites further comprising: generating, using a refined window size, additional windowed subsequences from the multivariate time series data based on the one or more segmentation timestamps, wherein the refined window size is smaller than the window size;. Under the broadest reasonable interpretation, the limitations recite identifying smaller portions of the dataset, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. Claim 7 also recites determining one or more refined segmentation timestamps from the additional windowed subsequences; and refining locations of segments within the segmented multivariate time series based on the one or more refined segmentation timestamps. Under the broadest reasonable interpretation, the limitations recite creating smaller partitions of a dataset, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 7 does not solve the deficiencies of claim 1.
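The coarse-to-fine refinement recited in claim 7 can be sketched generically; `detect` below is a placeholder for whatever timestamp-detection routine is applied at a given window size, and the names and windowing conventions are assumptions for illustration:

```python
def refine_timestamps(data, coarse_timestamps, window_size,
                      refined_window_size, detect):
    """Sketch only: re-examine the neighborhood of each coarse
    segmentation timestamp at a smaller (refined) window size."""
    refined = []
    for t in coarse_timestamps:
        # Generate additional windowed subsequences only around the
        # coarse timestamp, using the smaller refined window size.
        lo = max(0, t - window_size)
        hi = min(len(data), t + window_size)
        local = detect(data[lo:hi], refined_window_size)
        # Map local timestamps back into coordinates of the full series.
        refined.extend(lo + s for s in local)
    return refined
```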
Regarding claim 8, it is dependent upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 8 recites wherein the similarity model includes an allocation algorithm that determines the one or more segmentation timestamps based on determining a first order distance and a second order distance from the graph objects. Under the broadest reasonable interpretation, the limitations recite using distance calculations to identify similarities, which is interpreted as a mathematical calculation. A mathematical calculation is interpreted as a mathematical concept. Therefore, claim 8 does not solve the deficiencies of claim 1.
Regarding claim 9, it is dependent upon claim 8 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 9 recites wherein: the first order distance captures a distance between consecutive graph objects; and the second order distance generates absolute values based on the first order distance. Under the broadest reasonable interpretation, the limitations recite using distance calculations to identify similarities, which is interpreted as a mathematical calculation. A mathematical calculation is interpreted as a mathematical concept. Therefore, claim 9 does not solve the deficiencies of claim 8.
Regarding claim 10, it is dependent upon claim 8 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 10 recites wherein the allocation algorithm further comprises: reducing the second order distance by filtering out sequence values below a noise threshold to generate a filtered sequence; and traversing the filtered sequence for non-zero values to identify the one or more segmentation timestamps. Under the broadest reasonable interpretation, the limitations recite filtering noisy values and identifying timestamps from the remaining values, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 10 does not solve the deficiencies of claim 8.
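Claims 8-10 together describe a concrete allocation procedure (first order distances, absolute second order values, noise filtering, traversal for non-zero values). A minimal sketch, with an assumed Frobenius-norm distance and index convention (both illustrative, not claim requirements), might read:

```python
import numpy as np

def allocate_timestamps(graphs, window_size, noise_threshold):
    """Sketch only; distance choice and timestamp indexing are assumptions."""
    # First order distance: distance between consecutive graph objects.
    first_order = np.array([np.linalg.norm(graphs[i + 1] - graphs[i])
                            for i in range(len(graphs) - 1)])
    # Second order distance: absolute values derived from the first order
    # distance (read here as absolute successive differences).
    second_order = np.abs(np.diff(first_order))
    # Reduce the sequence by filtering out values below the noise threshold.
    filtered = np.where(second_order >= noise_threshold, second_order, 0.0)
    # Traverse the filtered sequence; non-zero values mark timestamps
    # (a single change surfaces as two adjacent non-zero entries here).
    return [(i + 2) * window_size for i, v in enumerate(filtered) if v > 0]
```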
Regarding claim 11, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites A system comprising:…a processor; and a computer memory comprising instructions that, when executed by the processor, cause the system to carry out operations comprising:. The claim recites a system comprising hardware components. A system with hardware components is interpreted as a machine, and a machine is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under broadest reasonable interpretation, covers a mental process or mathematical concept but for the recitation of generic computer components:
generating windowed subsequences of the multivariate time series data by grouping portions of the multivariate time series data by a time-based window size; (i.e., the broadest reasonable interpretation includes a step of evaluation and judgement that could be performed mentally or with pen and paper, like separating a dataset into multiple parts, and is therefore a mental process of evaluation/judgement (MPEP 2106)).
generating graph objects from the windowed subsequences… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like creating nodes and edges, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
determining one or more segmentation timestamps based on the graph objects… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like determining time changes based on differences between graph elements, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps. (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like separating a dataset using timestamps, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
The claim limitations, under their broadest reasonable interpretation, cover activities classified under Mental processes: concepts performed in the human mind (including observation, evaluation, judgement, or opinion) (see MPEP 2106.04(a)(2), subsection (III)), or Mathematical concepts: mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2), subsection (I)). Accordingly, the claim recites an abstract idea.
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
A system comprising: multivariate time series data; a sparse graph recovery model that generates graph objects from portions of multivariate time series data; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
a similarity model that determines differences between two or more graph objects; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
a processor; and a computer memory comprising instructions that, when executed by the processor, cause the system to carry out operations comprising: (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
…utilizing a sparse graph recovery model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
…utilizing a similarity model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
Since the claim does not contain any other additional elements that amount to integration into a practical application, the claim is directed to an abstract idea.
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:
Limitations (V) and (VIII), under the broadest reasonable interpretation, merely recite steps that apply a generic machine learning model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Similarly, limitations (VI) and (IX), under the broadest reasonable interpretation, merely recite steps that apply a generic similarity model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Further, limitation (VII), under the broadest reasonable interpretation, merely recites steps that apply generic computer components to perform judicial exceptions, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 12, the claim is similar to claim 5 and rejected under the same rationales.
Regarding claim 13, it is dependent upon claim 12 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 13 recites wherein the sparse graph recovery model is an unsupervised deep-learning sparse graph recovery model trained to generate batches of object graph dependency graphs. Under the broadest reasonable interpretation, the limitations merely recite steps that apply a generic unsupervised machine learning model to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Therefore, claim 13 does not solve the deficiencies of claim 12.
Regarding claim 14, it is dependent upon claim 12 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 14 recites wherein the sparse graph recovery model generates conditional independent graph objects that exhibit partial correlation between variables. Under the broadest reasonable interpretation, the limitations merely recite steps that apply a generic conditional independence sparse graph recovery model to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Therefore, claim 14 does not solve the deficiencies of claim 12.
Regarding claim 15, the claim is similar to claim 3 and rejected under the same rationales.
Regarding claim 16, the claim is similar to claim 7 and rejected under the same rationales.
Regarding claim 17, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites A computer-implemented method for generating segmented multivariate time series data comprising:. The claim recites a method. A method is one of the four statutory categories of invention.
In Step 2A, Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under broadest reasonable interpretation, covers a mental process or mathematical concept but for the recitation of generic computer components:
generating a first windowed subsequence and a second windowed subsequence from multivariate time series data based on a window size; (i.e., the broadest reasonable interpretation includes a step of evaluation and judgement that could be performed mentally or with pen and paper, like separating a dataset into multiple parts, and is therefore a mental process of evaluation/judgement (MPEP 2106)).
generating a first graph object and a second graph object from the first windowed subsequence and the second windowed subsequence… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like creating nodes and edges, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
determining a segmentation timestamp based on when a segment change occurred based on comparing the first graph object and the second graph object… (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like determining time changes based on differences between graph elements, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps. (i.e., the broadest reasonable interpretation includes a step of observation, evaluation, and judgement that could be performed mentally or with pen and paper, like separating a dataset using timestamps, and is therefore a mental process of observation/evaluation/judgement (MPEP 2106)).
The claim limitations, under their broadest reasonable interpretation, cover activities classified under Mental processes: concepts performed in the human mind (including observation, evaluation, judgement, or opinion) (see MPEP 2106.04(a)(2), subsection (III)), or Mathematical concepts: mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP 2106.04(a)(2), subsection (I)). Accordingly, the claim recites an abstract idea.
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
…utilizing a sparse graph recovery model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
…utilizing a similarity model; (i.e., the generic computer components recited in this limitation merely add the words “apply it”, or an equivalent, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
Since the claim does not contain any other additional elements that amount to integration into a practical application, the claim is directed to an abstract idea.
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:
Limitation (V), under the broadest reasonable interpretation, merely recites steps that apply a generic machine learning model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Similarly, limitation (VI), under the broadest reasonable interpretation, merely recites steps that apply a generic similarity model as a tool to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 18, it is dependent upon claim 17 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 18 recites wherein the window size corresponds to an overlapping window, and wherein the first windowed subsequence and the second windowed subsequence include duplicative data. Under the broadest reasonable interpretation, the limitations recite overlapping portions that contain duplicate data, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 18 does not solve the deficiencies of claim 17.
Regarding claim 19, it is dependent upon claim 17 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 19 recites wherein the window size corresponds to a non-overlapping window, and wherein the first windowed subsequence and the second windowed subsequence include non-duplicate data. Under the broadest reasonable interpretation, the limitations recite preventing overlapping portions so that there is no duplicate data, which is a step of observation, evaluation, and judgement that can be performed mentally or with pen and paper. The steps of observation, evaluation, and judgement are mental processes. Therefore, claim 19 does not solve the deficiencies of claim 17.
Regarding claim 20, it is dependent upon claim 17 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application, or introducing significantly more than the judicial exception. For example, claim 20 recites further comprising generating the first graph object and the second graph object in a single batch operation utilizing the sparse graph recovery model based on the first windowed subsequence and the second windowed subsequence. Under the broadest reasonable interpretation, the limitations merely recite steps that apply a generic machine learning model in batch operations to perform a judicial exception, which represents merely adding the words “apply it”, or an equivalent, and is not indicative of an inventive concept (MPEP 2106.05(f)). Therefore, claim 20 does not solve the deficiencies of claim 17.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 11, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”).
Regarding claim 1, Kulkarni discloses:
A computer-implemented method for generating segmented multivariate time series data comprising: (Kulkarni, ¶21, “The multi-scale segmentation module 104 receives one or more time-series datasets 102. The datasets 102 may be, for example, multivariate time-series datasets (MTD) which include time-series data for different assets (for example, IoT sensor data) [A computer-implemented method for generating segmented multivariate time series data comprising:].”).
grouping portions of multivariate time series data by a window size to generate windowed subsequences of the multivariate time series data; (Kulkarni, ¶22, “The multi-scale segmentation module 104 creates segments of the time series datasets 102 of different sizes [grouping portions of multivariate time series data by a window size]. FIG. 2 depicts a non-limiting example of multi-scale segmentation 200 in accordance with one or more example embodiments. As shown in FIG. 2, it is assumed the total data duration for the time series of each asset's data is n, so that the sizes may be, for example, n, n/2, n/4, wherein each size represents a different time scale [to generate windowed subsequences of the multivariate time series data;].”).
generating graph objects from the windowed subsequences…; (Kulkarni, ¶25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [generating graph objects from the windowed subsequences…;].”).
determining one or more segmentation timestamps when one or more segment changes in the multivariate time series data occurred based on comparing the graph objects utilizing a similarity model; (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping the anomalous segments closer together based on closeness is interpreted as determining an anomalous timesteps or points in the dataset (i.e. determining one or more segmentation timestamps when one or more segment changes in the multivariate time series data occurred based on comparing the graph objects).”, and Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 [utilizing a similarity model;] constructs a segment similarity graph for each scale using the distance matrix.”).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps. (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping segments closer together based on anomalous closeness is interpreted as generating a segmented multivariate time series using segmentation timesteps, or anomalous points in the data (i.e. and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps.).”).
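For illustration of the mapped claim-1 steps, the windowing and final segmentation operations can be sketched in a few lines of Python. This is a hypothetical sketch by way of example only; the function names and logic are not drawn from the claims, Kulkarni, or Shrivastava:

```python
def window_subsequences(series, window_size):
    """Group a multivariate time series (a list of per-timestep feature
    vectors) into non-overlapping subsequences of the given window size."""
    return [series[i:i + window_size]
            for i in range(0, len(series) - window_size + 1, window_size)]

def segment_series(series, timestamps):
    """Split the series at the segmentation timestamps (indices at which
    a segment change was determined to have occurred)."""
    bounds = [0] + sorted(set(timestamps)) + [len(series)]
    return [series[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]
```

Under this reading, each windowed subsequence would feed the graph-generation step, and the detected timestamps would drive the final split.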
While Kulkarni teaches a system that segments multivariate time series data using graph objects, Kulkarni does not explicitly teach:
…utilizing a sparse graph recovery model…
Shrivastava teaches …utilizing a sparse graph recovery model… (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data […utilizing a sparse graph recovery model…].”).
Kulkarni and Shrivastava are both in the same field of endeavor (i.e. graph analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni and Shrivastava to teach the above limitation(s). The motivation for doing so is that recovering sparse conditional independence graphs improves understanding of relationships between different assets (cf. Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis, and it has found applications in diverse areas. In computational biology, a sparse graph structure between gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure between financial time series may be used to understand the relationship between different financial assets.”).
Regarding claim 2, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. Kulkarni further teaches further comprising: comparing a first graph object to a second graph object utilizing the similarity model to determine that a difference between the first graph object and the second graph object satisfies a difference threshold; and determining a first segmentation timestamp based on a segmentation timestamp of the first graph object. (Kulkarni, ⁋27-29, “In equation (2) Ak are the segments belonging to asset k, S is the set of all segments, (μ−i, δ−i) are the “leave ‘i’ out” mean and standard deviation of all segment-pair distances within the graph [further comprising: comparing a first graph object to a second graph object utilizing the similarity model], and di is the average distance between ‘i’ and all other nodes. b) Aggregates the isolation metric for the asset k across all scales. This is the total isolation of the asset. An asset is marked anomalous if its isolation metric is higher than a threshold At [to determine that a difference between the first graph object and the second graph object satisfies a difference threshold;]. c) Computes groupings of anomalous assets based on their anomalous segment closeness using the following pairwise separation metric [and determining a first segmentation timestamp based on a segmentation timestamp of the first graph object.]”).
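The claim-2 comparison of consecutive graph objects against a difference threshold admits a simple sketch. The following is a hypothetical illustration (the mean-absolute-difference metric and helper names are assumptions, not taken from Kulkarni's isolation metric):

```python
def graph_difference(a, b):
    """Mean absolute element-wise difference between two adjacency
    matrices representing consecutive graph objects."""
    d = len(a)
    return sum(abs(x - y)
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (d * d)

def change_timestamps(graphs, times, threshold):
    """Return the timestamp of each graph whose difference from the
    next graph satisfies the difference threshold."""
    return [times[i]
            for i in range(len(graphs) - 1)
            if graph_difference(graphs[i], graphs[i + 1]) >= threshold]
```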
Regarding claim 3, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. Kulkarni further teaches wherein generating the graph objects from the windowed subsequences includes generating a visual graph of nodes and edges, where the edges indicate a positive or a negative partial correlation between connected nodes. (Kulkarni, ⁋25, “FIG. 3A shows an example of a segment graph 300 [wherein generating the graph objects from the windowed subsequences includes generating a visual graph of nodes and edges,] wherein the vertices represent data coming from all assets for a given-time scale, and the edges represent the distance between the distributions of the data represented by the vertices [where the edges indicate a positive or a negative partial correlation between connected nodes.].”).
Regarding claim 6, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. Shrivastava further teaches wherein generating the graph objects from the windowed subsequences includes utilizing a conditional independence sparse graph recovery model that generates graph objects that exhibit partial correlation between variables. (Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis [wherein generating the graph objects from the windowed subsequences includes utilizing a conditional independence sparse graph recovery model]”, and Shrivastava, pg. 2, “Given m observations of a d-dimensional multivariate Gaussian random variable X = [X1,...,Xd]⊤, the sparse graph recovery problem aims to estimate its covariance matrix Σ∗ and precision matrix Θ∗ = (Σ∗)−1. The ij-th component of Θ∗ is zero if and only if Xi and Xj are conditionally independent given the other variables {Xk}k≠i,j [that generates graph objects that exhibit partial correlation between variables.].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Shrivastava with the teachings of Kulkarni for the same reasons disclosed in claim 1.
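The conditional-independence relationship quoted from Shrivastava (Θ∗ij = 0 iff Xi and Xj are conditionally independent) corresponds to the standard conversion of a precision matrix into partial correlations. The following sketch illustrates that textbook relationship only; it is not GLAD's learned recovery procedure:

```python
import math

def partial_correlations(theta):
    """Convert a precision matrix Θ = Σ⁻¹ into partial correlations via
    ρ_ij = -Θ_ij / sqrt(Θ_ii · Θ_jj).  A zero off-diagonal entry means
    X_i and X_j are conditionally independent given the other variables,
    matching the Θ∗_ij = 0 condition quoted from Shrivastava."""
    d = len(theta)
    return [[1.0 if i == j else
             -theta[i][j] / math.sqrt(theta[i][i] * theta[j][j])
             for j in range(d)] for i in range(d)]
```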
Regarding claim 11, Kulkarni discloses:
A system comprising: multivariate time series data; (Kulkarni, ⁋21, “The multi-scale segmentation module 104 receives one or more time-series datasets 102. The datasets 102 may be, for example, multivariate time-series datasets (MTD) which include time-series data for different assets (for example, IoT sensor data) [A system comprising: multivariate time series data;].”).
a similarity model that determines differences between two or more graph objects; (Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [a similarity model that determines differences between two or more graph objects;].”).
a processor; and a computer memory comprising instructions that, when executed by the processor, cause the system to carry out operations comprising: (Kulkarni, ⁋5, “a system including a memory and at least one processor that is coupled to the memory and configured to perform noted method steps [a processor; and a computer memory comprising instructions that, when executed by the processor, cause the system to carry out operations comprising:]”).
generating windowed subsequences of the multivariate time series data by grouping portions of the multivariate time series data by a time-based window size; (Kulkarni, ⁋22, “The multi-scale segmentation module 104 creates segments of the time series datasets 102 of different sizes [grouping portions of the multivariate time series data by a time-based window size;]. FIG. 2 depicts a non-limiting example of multi-scale segmentation 200 in accordance with one or more example embodiments. As shown in FIG. 2, it is assumed the total data duration for the time series of each asset's data is n, so that the sizes may be, for example, n, n/2, n/4, wherein each size represents a different time scale [generating windowed subsequences of the multivariate time series data].”).
generating graph objects from the windowed subsequences…; (Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [generating graph objects from the windowed subsequences…;].”).
determining one or more segmentation timestamps based on the graph objects utilizing the similarity model; (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping the anomalous segments closer together based on closeness is interpreted as determining anomalous timesteps or points in the dataset (i.e. determining one or more segmentation timestamps based on the graph objects utilizing the similarity model;).”, and Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 [utilizing a similarity model;] constructs a segment similarity graph for each scale using the distance matrix.”).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps. (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping segments closer together based on anomalous closeness is interpreted as generating a segmented multivariate time series using segmentation timesteps, or anomalous points in the data (i.e. and generating a segmented multivariate time series by segmenting the multivariate time series data based on one or more segmentation timestamps.).”).
While Kulkarni teaches a system that segments multivariate time series data using graph objects, Kulkarni does not explicitly teach:
a sparse graph recovery model that generates graph objects from portions of…time series data;
…utilizing a sparse graph recovery model…
Shrivastava teaches:
a sparse graph recovery model that generates graph objects from portions of…time series data; (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data [a sparse graph recovery model that generates graph objects].”, and Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis [from portions of…time series data;]”).
…utilizing a sparse graph recovery model… (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data […utilizing a sparse graph recovery model…].”).
Kulkarni and Shrivastava are both in the same field of endeavor (i.e. graph analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni and Shrivastava to teach the above limitation(s). The motivation for doing so is that recovering sparse conditional independence graphs improves understanding of relationships between different assets (cf. Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis, and it has found applications in diverse areas. In computational biology, a sparse graph structure between gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure between financial time series may be used to understand the relationship between different financial assets.”).
Regarding claim 17, Kulkarni discloses:
A computer-implemented method for generating segmented multivariate time series data comprising: (Kulkarni, ⁋21, “The multi-scale segmentation module 104 receives one or more time-series datasets 102. The datasets 102 may be, for example, multivariate time-series datasets (MTD) which include time-series data for different assets (for example, IoT sensor data) [A computer-implemented method for generating segmented multivariate time series data comprising:].”).
generating a first windowed subsequence and a second windowed subsequence from multivariate time series data based on a window size; (Kulkarni, ⁋22, “The multi-scale segmentation module 104 creates segments of the time series datasets 102 of different sizes [based on a window size;]. FIG. 2 depicts a non-limiting example of multi-scale segmentation 200 in accordance with one or more example embodiments. As shown in FIG. 2, it is assumed the total data duration for the time series of each asset's data is n, so that the sizes may be, for example, n, n/2, n/4, wherein each size represents a different time scale [generating a first windowed subsequence and a second windowed subsequence from multivariate time series data].”).
generating a first graph object and a second graph object from the first windowed subsequence and the second windowed subsequence…; (Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [generating a first graph object and a second graph object from the first windowed subsequence and the second windowed subsequence…;].”).
determining a segmentation timestamp based on when a segment change occurred based on comparing the first graph object and the second graph object utilizing a similarity model; (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping the anomalous segments closer together based on closeness is interpreted as determining anomalous timesteps or points in the dataset (i.e. determining a segmentation timestamp based on when a segment change occurred based on comparing the first graph object and the second graph object).”, and Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 [utilizing a similarity model;] constructs a segment similarity graph for each scale using the distance matrix.”).
and generating a segmented multivariate time series by segmenting the multivariate time series data based on the segmentation timestamp. (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping segments closer together based on anomalous closeness is interpreted as generating a segmented multivariate time series using segmentation timesteps, or anomalous points in the data (i.e. and generating a segmented multivariate time series by segmenting the multivariate time series data based on the segmentation timestamp.).”).
While Kulkarni teaches a system that segments multivariate time series data using graph objects, Kulkarni does not explicitly teach:
…utilizing a sparse graph recovery model…
Shrivastava teaches …utilizing a sparse graph recovery model… (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data […utilizing a sparse graph recovery model…].”).
Kulkarni and Shrivastava are both in the same field of endeavor (i.e. graph analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni and Shrivastava to teach the above limitation(s). The motivation for doing so is that recovering sparse conditional independence graphs improves understanding of relationships between different assets (cf. Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis, and it has found applications in diverse areas. In computational biology, a sparse graph structure between gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure between financial time series may be used to understand the relationship between different financial assets.”).
Regarding claim 19, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 17. Kulkarni further teaches wherein the window size corresponds to a non-overlapping window, and wherein the first windowed subsequence and the second windowed subsequence include non-duplicate data. (Kulkarni, ⁋22, “The multi-scale segmentation module 104 creates segments of the time series datasets 102 of different sizes [wherein the window size corresponds to a non-overlapping window,]. FIG. 2 depicts a non-limiting example of multi-scale segmentation 200 in accordance with one or more example embodiments. As shown in FIG. 2, it is assumed the total data duration for the time series of each asset's data is n, so that the sizes may be, for example, n, n/2, n/4, wherein each size represents a different time scale [and wherein the first windowed subsequence and the second windowed subsequence include non-duplicate data.].”).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Runestone, Non-Patent Literature “An Adjacency Matrix” (“Runestone”).
Regarding claim 4, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. While the combination teaches generating the graph objects from the windowed subsequences as seen in claim 1, the combination does not explicitly teach includes generating an adjacency matrix indicating partial correlations between corresponding nodes and edges between two graph objects.
Runestone teaches includes generating an adjacency matrix indicating partial correlations between corresponding nodes and edges between two graph objects. (Runestone, pg. 1, “One of the easiest ways to implement a graph is to use a two-dimensional matrix. In this matrix implementation, each of the rows and columns represent a vertex in the graph. The value that is stored in the cell at the intersection of row v and column w indicates if there is an edge from vertex v to vertex w. When two vertices are connected by an edge, we say that they are adjacent [includes generating an adjacency matrix]. Figure 3 illustrates the adjacency matrix for the graph in Figure 2. A value in a cell represents the weight of the edge from vertex v to vertex w [indicating partial correlations between corresponding nodes and edges between two graph objects.].”).
Kulkarni, Shrivastava, and Runestone are all in the same field of endeavor (i.e. graph analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, and Runestone to teach the above limitation(s). The motivation for doing so is that using an adjacency matrix provides a simplified representation of graph connections (cf. Runestone, pg. 1, “One of the easiest ways to implement a graph is to use a two-dimensional matrix. In this matrix implementation, each of the rows and columns represent a vertex in the graph.”).
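The adjacency-matrix construction Runestone describes can be illustrated directly. This sketch follows the quoted description (cell at row v, column w holds the weight of the edge from v to w); the helper name is hypothetical:

```python
def adjacency_matrix(num_vertices, weighted_edges):
    """Two-dimensional matrix graph representation: the cell at row v,
    column w holds the weight of the edge from vertex v to vertex w
    (0 meaning no edge between the two vertices)."""
    m = [[0.0] * num_vertices for _ in range(num_vertices)]
    for v, w, weight in weighted_edges:
        m[v][w] = weight
    return m
```

In the claim-4 context, signed weights could carry the positive or negative partial correlations between nodes.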
Claims 5, 12, 14-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Brownlee, Non-Patent Literature “A Gentle Introduction to Mini-Batch Gradient Descent and How to Configure Batch Size” (“Brownlee”).
Regarding claim 5, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. Kulkarni further teaches further comprising generating multiple graph objects from the windowed subsequences… (Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [further comprising generating multiple graph objects from the windowed subsequences].”).
Shrivastava further teaches …that utilizes one instance of the sparse graph recovery model and shared parameters. (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data […that utilizes one instance of the sparse graph recovery model and shared parameters.].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Shrivastava with the teachings of Kulkarni for the same reasons disclosed in claim 1.
While the combination teaches a multivariate time series segmentation system using a sparse graph recovery model, the combination does not explicitly teach …at a same time as a batch operation…
Brownlee teaches …at a same time as a batch operation… (Brownlee, pg. 2, “Batch gradient descent is a variation of the gradient descent algorithm that calculates the error for each example in the training dataset, but only updates the model after all training examples have been evaluated. One cycle through the entire training dataset is called a training epoch. Therefore, it is often said that batch gradient descent performs model updates at the end of each training epoch […at a same time as a batch operation…].”).
Kulkarni, Shrivastava, and Brownlee are all in the same field of endeavor (i.e. machine learning). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, and Brownlee to teach the above limitation(s). The motivation for doing so is that batch training improves the stability of the model convergence (cf. Brownlee, pg. 3, “The decreased update frequency results in a more stable error gradient and may result in a more stable convergence”).
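Brownlee's batch-gradient-descent description (accumulate the error over all examples, update once per epoch) can be sketched as follows. This is an illustrative implementation of the quoted concept only, not Brownlee's code or the claimed batch operation:

```python
def batch_gradient_step(weights, grad_fn, examples, lr):
    """One epoch of batch gradient descent as Brownlee describes it:
    accumulate the gradient over every training example, then update
    the model parameters once at the end of the epoch."""
    total = [0.0] * len(weights)
    for x in examples:
        g = grad_fn(weights, x)
        total = [t + gi for t, gi in zip(total, g)]
    # Single update per epoch, using the averaged gradient.
    return [w - lr * t / len(examples) for w, t in zip(weights, total)]
```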
Regarding claim 12, the claim is similar to claim 5 and rejected under the same rationales.
Regarding claim 14, Kulkarni in view of Shrivastava and Brownlee teaches the system of claim 12. Shrivastava further teaches wherein the sparse graph recovery model generates conditional independent graph objects that exhibit partial correlation between variables. (Shrivastava, pg. 1, “Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis [wherein the sparse graph recovery model generates conditional independent graph objects]”, and Shrivastava, pg. 2, “Given m observations of a d-dimensional multivariate Gaussian random variable X = [X1,...,Xd]⊤, the sparse graph recovery problem aims to estimate its covariance matrix Σ∗ and precision matrix Θ∗ = (Σ∗)−1. The ij-th component of Θ∗ is zero if and only if Xi and Xj are conditionally independent given the other variables {Xk}k≠i,j [that exhibit partial correlation between variables.].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Shrivastava with the teachings of Kulkarni and Brownlee for the same reasons disclosed in claim 12.
Regarding claim 15, Kulkarni in view of Shrivastava and Brownlee teaches the system of claim 12. Kulkarni further teaches wherein generating the graph objects from the windowed subsequences includes generating a visual graph of nodes and edges, where the edges indicate a positive or a negative partial correlation between connected nodes. (Kulkarni, ⁋25, “FIG. 3A shows an example of a segment graph 300 [wherein generating the graph objects from the windowed subsequences includes generating a visual graph of nodes and edges,] wherein the vertices represent data coming from all assets for a given-time scale, and the edges represent the distance between the distributions of the data represented by the vertices [where the edges indicate a positive or a negative partial correlation between connected nodes.].”).
Regarding claim 20, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 17. Kulkarni further teaches further comprising generating the first graph object and the second graph object…based on the first windowed subsequence and the second windowed subsequence. (Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 constructs a segment similarity graph for each scale using the distance matrix [further comprising generating the first graph object and the second graph object…based on the first windowed subsequence and the second windowed subsequence.].”).
Shrivastava further teaches …utilizing the sparse graph recovery model… (Shrivastava, abstract, “We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data […utilizing the sparse graph recovery model…].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Shrivastava with the teachings of Kulkarni for the same reasons disclosed in claim 1.
While the combination teaches a multivariate time series segmentation system using a sparse graph recovery model, the combination does not explicitly teach …in a single batch operation…
Brownlee teaches …in a single batch operation… (Brownlee, pg. 2, “Batch gradient descent is a variation of the gradient descent algorithm that calculates the error for each example in the training dataset, but only updates the model after all training examples have been evaluated. One cycle through the entire training dataset is called a training epoch. Therefore, it is often said that batch gradient descent performs model updates at the end of each training epoch […in a single batch operation…].”).
Kulkarni, Shrivastava, and Brownlee are all in the same field of endeavor (i.e. machine learning). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, and Brownlee to teach the above limitation(s). The motivation for doing so is that batch training improves the stability of the model convergence (cf. Brownlee, pg. 3, “The decreased update frequency results in a more stable error gradient and may result in a more stable convergence”).
Claims 7, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Tan, et al., US Pre-Grant Publication 2024/0160160A1 (“Tan”).
Regarding claim 7, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1.
While the combination teaches a multivariate time series segmentation system using a sparse graph recovery model, the combination does not explicitly teach further comprising: generating, using a refined window size, additional windowed subsequences from the multivariate time series data based on the one or more segmentation timestamps, wherein the refined window size is smaller than the window size; determining one or more refined segmentation timestamps from the additional windowed subsequences; and refining locations of segments within the segmented multivariate time series based on the one or more refined segmentation timestamps.
Tan teaches:
further comprising: generating, using a refined window size, additional windowed subsequences from the multivariate time series data based on the one or more segmentation timestamps, wherein the refined window size is smaller than the window size; (Tan, ⁋32, “A sliding time-window is defined which slides inside an internal greater time window [wherein the refined window size is smaller than the window size;]. The sliding time window is applied to all algorithms, and all CPs over all algorithms are summed up. The same procedure is performed with further sliding time windows [further comprising: generating, using a refined window size, additional windowed subsequences from the multivariate time series data based on the one or more segmentation timestamps,]. Those sliding time windows, i.e., the CPs of these windows, are presented to the user that contain a high number of CPs.”).
determining one or more refined segmentation timestamps from the additional windowed subsequences; (Tan, ⁋56, “According to an embodiment, the selection out of the candidate CPs is according one of the following: (i) selecting the CPs randomly out of the candidate CPs, (ii) defining a sliding time-window length smaller than the time-window, summing up the number of CP candidates detected across all CP algorithms, and selecting windows with a high sum are selected [determining one or more refined segmentation timestamps from the additional windowed subsequences;]”).
and refining locations of segments within the segmented multivariate time series based on the one or more refined segmentation timestamps. (Tan, ⁋16, “CPD can be used to cut the otherwise continuous signal into segments with start (a first change point) and end (the next change point) times. These segments can be used as samples in an anomaly detection or in a classification process [and refining locations of segments within the segmented multivariate time series based on the one or more refined segmentation timestamps.]”).
Kulkarni, Shrivastava, and Tan are all in the same field of endeavor (i.e. data segmentation). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, and Tan to teach the above limitation(s). The motivation for doing so is that finding additional change points increases the understanding of time series data (cf. Tan, ⁋17, “Similar to process monitoring by anomaly detection or classification, the CPD is used to split a continuous signal into several meaningful segments.”).
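The claim-7 refinement (sliding a smaller window near a coarse segmentation timestamp to localize the change point) can be sketched as below. All names and the peak-scoring heuristic are hypothetical assumptions for illustration, not Tan's disclosed algorithm:

```python
def refine_timestamp(series, coarse_ts, coarse_window, refined_window,
                     change_score):
    """Slide a smaller (refined) window inside the neighborhood of a
    coarse segmentation timestamp and return the start index where the
    supplied change_score function peaks, narrowing the change location."""
    lo = max(0, coarse_ts - coarse_window)
    hi = min(len(series) - refined_window, coarse_ts + coarse_window)
    return max(range(lo, hi + 1),
               key=lambda i: change_score(series[i:i + refined_window]))
```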
Regarding claim 16, the claim is similar to claim 7 and rejected under the same rationales.
Regarding claim 18, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 17.
While the combination teaches a multivariate time series segmentation system using a sparse graph recovery model, the combination does not explicitly teach wherein the window size corresponds to an overlapping window, and wherein the first windowed subsequence and the second windowed subsequence include duplicative data.
Tan teaches wherein the window size corresponds to an overlapping window, and wherein the first windowed subsequence and the second windowed subsequence include duplicative data. (Tan, ⁋32, “A sliding time-window is defined which slides inside an internal greater time window [wherein the window size corresponds to an overlapping window,]. The sliding time window is applied to all algorithms, and all CPs over all algorithms are summed up. The same procedure is performed with further sliding time windows [and wherein the first windowed subsequence and the second windowed subsequence include duplicative data.]. Those sliding time windows, i.e., the CPs of these windows, are presented to the user that contain a high number of CPs.”).
Kulkarni, Shrivastava, and Tan are all in the same field of endeavor (i.e., data segmentation). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, with Tan to teach the above limitation(s). The motivation for doing so is that finding additional change points increases the understanding of time series data (cf. Tan, ⁋17, “Similar to process monitoring by anomaly detection or classification, the CPD is used to split a continuous signal into several meaningful segments.”).
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Freidenreich, Non-Patent Literature “Absolute Value Equations and Inequalities as Applied to Distance” (“Freidenreich”).
Regarding claim 8, Kulkarni in view of Shrivastava teaches the computer-implemented method of claim 1. Kulkarni further teaches wherein the similarity model includes an allocation algorithm that determines the one or more segmentation timestamps based on determining a first order distance… (Kulkarni, ⁋31, “FIG. 3B is a segment graph 350 with segments from four different assets, namely, assets 1-4. The segment graph in FIG. 3B shows anomalous segment closeness between the assets. In particular, segments of asset 3 are closely grouped with a segment of asset 2, and a segment of asset 4 is closely grouped with a segment of asset 2; grouping the anomalous segments closer together based on closeness is interpreted as determining an anomalous timesteps or points in the dataset (i.e. wherein the similarity model includes an allocation algorithm that determines the one or more segmentation timestamps).”, and Kulkarni, ⁋25, “The multi-scale anomalous asset detection module 106 [similarity model] constructs a segment similarity graph for each scale using the distance matrix [based on determining a first order distance…].”).
While the combination teaches a multivariate time series segmentation system using a sparse graph recovery model, the combination does not explicitly teach …and a second order distance from the graph objects.
Freidenreich teaches …and a second order distance from the graph objects (Freidenreich, pg. 1, “Absolute value is often used to measure the distance between two numbers, a and b. If we use absolute value bars around the subtracted quantity, then the order of subtraction does not matter […and a second order distance from the graph objects.].”).
Kulkarni, Shrivastava, and Freidenreich are all in the same field of endeavor (i.e., distance calculation). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava, with Freidenreich to teach the above limitation(s). The motivation for doing so is that taking the absolute value of a distance ensures that the magnitude of the distance is returned.
Regarding claim 9, Kulkarni in view of Shrivastava and Freidenreich teaches the computer-implemented method of claim 8. Kulkarni further teaches wherein: the first order distance captures a distance between consecutive graph objects; (Kulkarni, ⁋23, “The multi-scale anomalous asset detection module 106 then computes the distance between segments. For example, for each time scale, the segments may be compared by computing the J-Coefficient, Ji,j, and applying a distance function to obtain a M×M distance matrix (DM) wherein M is the number of segments of all assets of a given scale, and each entry (i,j) in the matrix is the distance function on the J-coefficient for the distributions for segments i and j [wherein: the first order distance captures a distance between consecutive graph objects;].”).
Freidenreich further teaches and the second order distance generates absolute values based on the first order distance. (Freidenreich, pg. 1, “Absolute value is often used to measure the distance between two numbers, a and b. If we use absolute value bars around the subtracted quantity, then the order of subtraction does not matter [and the second order distance generates absolute values based on the first order distance.].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Freidenreich with the teachings of Kulkarni and Shrivastava for the same reasons disclosed in claim 8.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Freidenreich, Non-Patent Literature “Absolute Value Equations and Inequalities as Applied to Distance” (“Freidenreich”) and Ottersten, et al., Non-Patent Literature “Accurate Changing Point Detection for ℓ1 Mean Filtering” (“Ottersten”).
Regarding claim 10, Kulkarni in view of Shrivastava and Freidenreich teaches the computer-implemented method of claim 8.
While the combination teaches the second order distance and a similarity model, the combination does not explicitly teach wherein the allocation algorithm further comprises: reducing the…distance by a filtering out sequence values below a noise threshold to generate a filtered sequence; and traversing the filtered sequence for non-zero values to identify the one or more segmentation timestamps.
Ottersten teaches wherein the allocation algorithm further comprises: reducing the…distance by a filtering out sequence values below a noise threshold to generate a filtered sequence; and traversing the filtered sequence for non-zero values to identify the one or more segmentation timestamps. (Ottersten, pg. 298 col. 1, “This is referred to as ℓ1 mean filtering [18] which is also known as the total variation (TV) denoising [5] problem. The penalty in equation (1) forces xi to be constant and we obtain a piece-wise constant solution; filtering out noise is interpreted as reducing a distance as removing noisy values improves the effectiveness of data comparisons (i.e. reducing the…distance by a filtering out sequence values below a noise threshold to generate a filtered sequence;). The ℓ1 mean filtering problem is used to detect changes in the mean of the time series yi. The non-zero entries in the sparse changing vector indicate the changing points in the time series yi [and traversing the filtered sequence for non-zero values to identify the one or more segmentation timestamps.].”).
Kulkarni, Shrivastava, Freidenreich, and Ottersten are all in the same field of endeavor (i.e., data segmentation). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava and Freidenreich, with Ottersten to teach the above limitation(s). The motivation for doing so is that removing noise from data improves the ability to find trends in the data (cf. Ottersten, pg. 297 col. 1, “Finding the underlying linear trends in time series data is a common signal processing problem that arises in many applications in areas such as financial time series analysis [1], in biological and medical sciences [2] and climatology [3] to list a few. This is a difficult estimation problem when noise and imperfections are present in the signal measurements. The main challenge in mean and trend filtering is to find the so-called changing points in the trend.”).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni, et al., US Pre-Grant Publication 2020/0372024A1 (“Kulkarni”) in view of Shrivastava, et al., Non-Patent Literature “GLAD: LEARNING SPARSE GRAPH RECOVERY” (“Shrivastava”) and further in view of Brownlee, Non-Patent Literature “A Gentle Introduction to Mini-Batch Gradient Descent and How to Configure Batch Size” (“Brownlee”) and Chen, et al., Non-Patent Literature “Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning” (“Chen”).
Regarding claim 13, Kulkarni in view of Shrivastava and Brownlee teaches the system of claim 12.
While the combination teaches a sparse graph recovery model using batch learning, the combination does not explicitly teach wherein the sparse graph recovery model is an unsupervised deep-learning sparse graph recovery model trained to generate batches of object graph dependency graphs.
Chen teaches wherein the sparse graph recovery model is an unsupervised deep-learning sparse graph recovery model trained to generate batches of object graph dependency graphs. (Chen, abstract, “Unsupervised multimodal change detection is a practical and challenging topic that can play an important role in time-sensitive emergency applications…First, structural graphs are generated from preprocessed multimodal image pairs by means of an object-based image analysis approach. Then, a structural relationship graph convolutional autoencoder (SR-GCAE) is proposed to learn robust and representative features from graphs. Two loss functions aiming at reconstructing vertex information and edge information are presented to make the learned representations applicable for structural relationship similarity measurement [wherein the sparse graph recovery model is an unsupervised deep-learning sparse graph recovery model trained to generate batches of object graph dependency graphs.].”).
Kulkarni, Shrivastava, Brownlee, and Chen are all in the same field of endeavor (i.e., change detection). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kulkarni, in view of Shrivastava and Brownlee, with Chen to teach the above limitation(s). The motivation for doing so is that an unsupervised sparse graph recovery model learns robust dependency graphs for analysis (cf. Chen, pg. 2 col. 1, “the proposed network can learn robust high-level graph representations to measure the similarity levels of local and nonlocal structural relationships.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.S.W./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148