DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 7, and 13 are objected to because of the following informalities: each claim contains a comma that is considered a typographical error at “and, generating a data center asset event mode”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
Regarding 112(b):
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In regards to Claim 1:
Claim 1 recites the limitation "reducing a dimension of the respective vectorized input spaces to respective latent spaces, each respective latent space providing respective component event model dimension”. The recited limitation renders aspects of the claim indefinite. The meaning of “each respective latent space providing respective component event model dimension” is not understood. As noted in the earlier part of the limitation, the dimension of the respective input spaces is reduced to the respective latent spaces. One cannot determine whether “component event model dimension” is simply reinforcing the same idea of reducing the dimension of the input spaces to the latent spaces, or whether the meaning is something else. Possible other interpretations are “each latent space corresponds to an event model dimension” or “each respective latent space corresponding to a respective component”. A definition or meaning was not found within the specification. Amending the wording of the claim to clarify the intended claimed element could help overcome the 112(b) issue.
In regards to claim 6:
Claim 6 recites the limitation "each failure model of the plurality of failure models". There is insufficient antecedent basis for this limitation in the claim. There is no previous recitation of “a plurality of failure models”.
Claim 6 recites the limitation "characterized by an independent and identically distributed (IID) thresholding parameter (T)”. The recited limitation renders aspects of the claim indefinite. One of ordinary skill in the art would be confused as to how IID applies to a singular value, such as a thresholding parameter, because IID ordinarily describes a distribution (i.e., multiple values). The description of the variable T in paragraph 225 of the specification is more consistent with the idea of IID, as paragraph 225 notes “independent and identically distributed (IID) random variables ‘T’”, which indicates multiple variables or values. This creates possible confusion as to whether the variables recited in claim 6 (T, N, and M) correspond to the description given in paragraph 229 of the specification (which re-recites elements of claim 6) or to the description given in paragraph 225. The confusing use of descriptors in claim 6 (IID describing a single threshold value), together with the duplicate definitions of the variables in paragraphs 225 and 229, renders the claim indefinite because one of ordinary skill cannot determine the intended meaning of the limitations.
In regards to analogous claims:
Claims analogous to claims rejected under 112(b), such as claims 7 and 13 for claim 1 or claims 12 and 18 for claim 6, are rejected for the same reasons as the corresponding rejected claim.
In regards to dependent claims:
Claims dependent upon claims rejected under 112(b) are also rejected under 112(b) as being dependent upon a rejected claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed towards an abstract idea without significantly more.
In regards to Claim 1:
Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter?
Yes, the claim is directed towards a method, so a process.
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 1 recites the following abstract ideas:
assigning the data center asset component event data for the plurality of data center asset components to respective vectorized input spaces
This limitation is directed towards the abstract idea of a mental process, or a concept performed in the human mind, including observation, evaluation, judgment, or opinion (see MPEP 2106.04(a)(2) subsection 3). Here the limitation is seen as evaluation.
The term “assigning” is broad and is given little description of how the element is performed, which is why, under the broadest reasonable interpretation (BRI), it is capable of being interpreted as an element a human can perform.
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 1 recites the following additional elements:
A computer-implementable method for performing a data center management and monitoring operation, comprising:
At a high level of generality, this is an activity of using a computer or computer parts as an “apply it” use (see MPEP 2106.05(f)).
receiving data center asset component event data for a plurality of data center asset components
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)).
reducing a dimension of the respective vectorized input spaces to respective latent spaces, each respective latent space providing respective component event model dimension
At a high level of generality, this is an activity of reducing a dimension as an “apply it” use (see MPEP 2106.05(f)).
decoding each respective latent space to provide respective vectorized decoded output spaces
At a high level of generality, this is an activity of decoding as an “apply it” use (see MPEP 2106.05(f)).
generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces
At a high level of generality, this is an activity of using respective vectorized decoded output spaces as an “apply it” use (see MPEP 2106.05(f)).
generating a data center asset event model using a combination of the plurality of data center asset component event models
At a high level of generality, this is an activity of using a combination of the plurality of data center asset component event models as an “apply it” use (see MPEP 2106.05(f)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 1 recites the following additional elements:
A computer-implementable method for performing a data center management and monitoring operation, comprising:
At a high level of generality, this is an activity of using a computer or computer parts as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, the use of a computer or computer parts appears to be an implementation of the abstract idea on a computer, i.e., merely using a computer as a tool to perform the abstract idea.
receiving data center asset component event data for a plurality of data center asset components
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)). This is a well understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d) example i in computer functions).
reducing a dimension of the respective vectorized input spaces to respective latent spaces, each respective latent space providing respective component event model dimension
At a high level of generality, this is an activity of reducing a dimension as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “reducing a dimension” of input spaces to latent spaces does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
decoding each respective latent space to provide respective vectorized decoded output spaces
At a high level of generality, this is an activity of decoding as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “decoding” latent space to provide decoded output does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces
At a high level of generality, this is an activity of using respective vectorized decoded output spaces as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “generating a plurality of data center asset component event models” using respective vectorized decoded output spaces does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
generating a data center asset event model using a combination of the plurality of data center asset component event models
At a high level of generality, this is an activity of using a combination of the plurality of data center asset component event models as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “generating a data center asset event model” using a combination of the plurality of data center asset component event models does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
Details on how the data center asset event model is generated, beyond the generic recitation in this limitation, could help overcome the rejection under § 101.
In regards to Claim 2:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 2 recites the following additional elements:
generating a failure model hierarchy using the plurality of data center asset component event models
At a high level of generality, this is an activity of using the plurality of data center asset component event models as an “apply it” use (see MPEP 2106.05(f)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 2 recites the following additional elements:
generating a failure model hierarchy using the plurality of data center asset component event models
At a high level of generality, this is an activity of using the plurality of data center asset component event models as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “generating a failure model hierarchy” using the plurality of data center asset component event models does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
In regards to claim 3:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 3 recites the following additional elements:
the generating the plurality of data center asset component event models further comprises replicating a data center asset component event model to generate the plurality of data center asset component event models
At a high level of generality, this is an activity of using a data center asset component event model as an “apply it” use (see MPEP 2106.05(f)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 3 recites the following additional elements:
the generating the plurality of data center asset component event models further comprises replicating a data center asset component event model to generate the plurality of data center asset component event models
At a high level of generality, this is an activity of using a data center asset component event model as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “replicating” using a data center asset component event model does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”.
In regards to claim 4:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 4 recites the following additional elements:
the replicating the data center asset event model comprises using a generative adversarial network (GAN) variant with a convolutional neural network (CNN) discriminator (D) and a gated recurrent unit (GRU) generator (G) to replicate distributions
At a high level of generality, this is an activity of using the data center asset component event model as an “apply it” use (see MPEP 2106.05(f)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 4 recites the following additional elements:
the replicating the data center asset event model comprises using a generative adversarial network (GAN) variant with a convolutional neural network (CNN) discriminator (D) and a gated recurrent unit (GRU) generator (G) to replicate distributions
At a high level of generality, this is an activity of using the data center asset component event model as an “apply it” use (see MPEP 2106.05(f)). At said high level of generality, a generic recitation of “replicating” using a GAN, a CNN, and a GRU does not incorporate the abstract idea into a practical application and is seen as a variation of the phrase “apply it”, for the claim merely notes to apply those elements “to replicate distributions”.
In regards to claim 5:
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 5 recites the following abstract ideas:
characterizing a plurality of data center asset faults using the plurality of data center asset component event models
This limitation is directed towards the abstract idea of a mental process, or a concept performed in the human mind, including observation, evaluation, judgment, or opinion (see MPEP 2106.04(a)(2) subsection 3). Here the limitation is seen as evaluation.
In regards to claim 6:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 6 recites the following additional elements:
each failure model of the plurality of failure models is characterized by an independent and identically distributed (IID) thresholding parameter (T), a reduced dimension parameter (N), and an input for behavior replicating parameter (M)
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 6 recites the following additional elements:
each failure model of the plurality of failure models is characterized by an independent and identically distributed (IID) thresholding parameter (T), a reduced dimension parameter (N), and an input for behavior replicating parameter (M)
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)). This is a well understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d) example i in computer functions). The parameters noted in the limitation are interpreted under § 101 as inputs to the model, as the claim refers to the variables as parameters.
In regards to claim 7:
Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter?
Yes, the claim is directed towards a system, so a machine.
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 7 recites the same abstract ideas as analogous claim 1.
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 7 recites the same additional elements as claim 1.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 7 recites the same additional elements as analogous claim 1.
In regards to claim 8:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 8 recites the same additional elements as claim 2.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 8 recites the same additional elements as analogous claim 2.
In regards to claim 9:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 9 recites the same additional elements as claim 3.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 9 recites the same additional elements as analogous claim 3.
In regards to claim 10:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 10 recites the same additional elements as claim 4.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 10 recites the same additional elements as analogous claim 4.
In regards to claim 11:
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 11 recites the same abstract ideas as analogous claim 5.
In regards to claim 12:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 12 recites the same additional elements as claim 6.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 12 recites the same additional elements as analogous claim 6.
In regards to claim 13:
Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter?
Yes, the claim is directed towards a manufacture.
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 13 recites the same abstract ideas as analogous claim 1.
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 13 recites the same additional elements as claim 1.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 13 recites the same additional elements as analogous claim 1.
In regards to claim 14:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 14 recites the same additional elements as claim 2.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 14 recites the same additional elements as analogous claim 2.
In regards to claim 15:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 15 recites the same additional elements as claim 3.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 15 recites the same additional elements as analogous claim 3.
In regards to claim 16:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 16 recites the same additional elements as claim 4.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 16 recites the same additional elements as analogous claim 4.
In regards to claim 17:
Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim does recite an abstract idea.
Claim 17 recites the same abstract ideas as analogous claim 5.
In regards to claim 18:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 18 recites the same additional elements as claim 6.
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 18 recites the same additional elements as analogous claim 6.
In regards to claim 19:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 19 recites the following additional elements:
the computer executable instructions are deployable to a client system from a server system at a remote location
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 19 recites the following additional elements:
the computer executable instructions are deployable to a client system from a server system at a remote location
This limitation is directed towards the insignificant extra solution activity of mere data gathering (see MPEP § 2106.05(g)). This is a well understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d) example i in computer functions).
In regards to claim 20:
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
Claim 20 recites the following additional elements:
the computer executable instructions are provided by a service provider to a user on an on-demand basis
This limitation is directed towards linking or indicating a field of use or technological environment (see MPEP 2106.05(h)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception?
No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
Claim 20 recites the following additional elements:
the computer executable instructions are provided by a service provider to a user on an on-demand basis
This limitation is directed towards linking or indicating a field of use or technological environment (see MPEP 2106.05(h)). Limitations that amount to merely linking to a field of use or technological environment, such as having the data relate to a field like power/electricity (see MPEP 2106.05(h)(vi)), do not amount to significantly more than the exception itself. This appears to link to software as a service, as the limitation notes what a service provider could do with the invention in terms of distributing it.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-11, and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Figueira et al. (“Survey on Synthetic Data Generation, Evaluation Methods and GANs”), referred to as Figueira in this document, in combination with Qichang Li et al. (“A Novel Hierarchical Situation Awareness Model for CBTC Using SD Entropy and GRU with PRD Algorithms”), referred to as Qichang Li in this document, and further in combination with Hosenie et al. (“Comparing Multiclass, Binary, and Hierarchical Machine Learning Classification schemes for variable stars”), referred to as Hosenie in this document.
Regarding Claim 1:
Figueira teaches:
assigning the data center asset component event data for the plurality of data center asset components to respective vectorized input spaces;
reducing a dimension of the respective vectorized input spaces to respective latent spaces, each respective latent space providing respective component event model dimension;
decoding each respective latent space to provide respective vectorized decoded output spaces;
[Figueira 4.2.2 Autoencoders page 27]: "An autoencoder (AE) is a special type of feedforward neural network that consists of two parts: an encoder network that learns to compress high-dimensional data into a low-dimensional, latent space [reducing a dimension of the respective vectorized input spaces to respective latent spaces, each respective latent space providing respective component event model dimension] representation (the code) [assigning the data center asset component event data for the plurality of data center asset components to respective vectorized input spaces, as the assigning to vectorized input spaces is seen as encoding, supported by Figure 10, 1006, of the current application, which notes the step as encoding], and a decoder network that decompresses [decoding each respective latent space to provide respective vectorized decoded output spaces, supported by Figure 10, 1010, of the current application, which notes the step as decode] the compressed representation into the original domain [83]. Figure 23 shows a diagram of an autoencoder… The goal is not for the autoencoder to learn how to set 𝐷(𝐸(𝑥))=𝑥 for each input example x but rather to learn how to copy the original data only approximately, and only inputs that resemble the original data. By constraining it and forcing it to learn which aspects of the data should be prioritized, autoencoders can learn useful properties about the data (autoencoders have been on the deep learning landscape for decades and have typically been used for feature learning and dimensionality reduction)"
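For illustration only, the encode/latent/decode flow described in the quoted passage may be sketched as a toy linear autoencoder; the weights, dimensions, and names below are hypothetical stand-ins, not drawn from Figueira or the current application:

```python
# Illustrative sketch only: a minimal linear autoencoder showing the
# encode -> latent space -> decode flow described in the quoted passage.
# Weights are fixed toy values, not trained; all names are hypothetical.

def matvec(matrix, vector):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

# Vectorized input space: dimension 4
event_vector = [1.0, 2.0, 3.0, 4.0]

# Encoder: reduces dimension 4 -> 2 (the latent space)
encoder_weights = [
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
]
latent = matvec(encoder_weights, event_vector)   # 2-dimensional code

# Decoder: expands the latent space back to dimension 4
decoder_weights = [
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, 1.0],
]
reconstructed = matvec(decoder_weights, latent)  # approximate copy of the input

print(len(latent), len(reconstructed))  # 2 4
```

As in the quote, the reconstruction is only an approximation of the input, since the latent space has fewer dimensions than the input space.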
generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces;
[Figueira 3.1 GANs under the hood]: “A GAN is constituted by two models: a generator model G that tries to generate samples that follow the underlying distribution of the data [generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces where the generating of a model for synthetic data is taught here by Figueira]. Nonetheless, these observations are suitably different from the ones in the dataset (i.e., they should not simply reproduce observations that already occur in the dataset). There is also a discriminator model D that, given an observation (from the original dataset or synthesized by the generator), classifies it as fake (produced by the generator) (Typically, the models used for the generator and discriminator are neural networks… The training of the generator is more complicated. G is given as the input random noise (The term latent space is typically used to designate G’s input space.), commonly from a multivariate normal distribution, and the output is a data point with the same features of the original dataset.”
Support for generating synthetic data with GANs is given in [Figueira 4.2.3 Generative Adversarial Networks]: “As shown in Section 3, GANs are a type of generative deep learning consisting of two networks: the generator, G, and the discriminator, D. The details of how they operate have already been reviewed, so we will now focus on the practical applications of such structures. Due to the usefulness of GANs in generating synthetic samples, they are widely used.”
The specification indicates that a GAN is used for such purposes in [Current Application 0022]: “Figure 22 is a simplified process flow diagram of the performance of generative adversarial network (GAN) operations to forecast the probability of the occurrence of a data center asset event” as well as further support from the limitations of claim 4.
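For illustration only, the generator/discriminator arrangement described in the quoted passages may be sketched as follows; the models are hand-written toy functions, and all names and values are hypothetical, not drawn from Figueira:

```python
# Structural sketch of the GAN arrangement described in the quote:
# a generator G maps latent-space noise to candidate samples, and a
# discriminator D scores how "real" a sample looks. Toy 1-D example
# with hand-written models; names are illustrative, not from the cited work.
import random

random.seed(0)

def generator(noise, scale=2.0, shift=5.0):
    """G: latent-space noise -> synthetic sample."""
    return noise * scale + shift

def discriminator(sample, real_mean=5.0):
    """D: score in (0, 1]; higher means closer to the real distribution."""
    return 1.0 / (1.0 + abs(sample - real_mean))

# Draw latent noise, generate a synthetic sample, and score it
z = random.gauss(0.0, 1.0)
fake = generator(z)
score = discriminator(fake)
assert 0.0 < score <= 1.0
```

In an actual GAN, G and D would be neural networks trained adversarially; the sketch only shows the data flow from latent noise through G to D.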
Figueira does not explicitly teach:
A computer-implementable method for performing a data center management and monitoring operation, comprising:
receiving data center asset component event data for a plurality of data center asset components
generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces
and, generating a data center asset event model using a combination of the plurality of data center asset component event models
Qichang Li teaches:
A computer-implementable method for performing a data center management and monitoring operation, comprising:
[Qichang Li page 6 table 1] notes computer elements like CPU [computer-implementable][processor], memory [memory], disk [computer readable medium], and more
[media_image1.png: greyscale image, 897 × 1216, reproducing Qichang Li Table 1]
receiving data center asset component event data for a plurality of data center asset components;
[Qichang Li Section 4 Dataset Description and Performance Evaluation B. Dataset Statistics page 6]: “This platform is a semi-physical simulation platform, which combines simulation software and actual signal equipment to restore the operation of Beijing Metro Line 7. A total of 2 ATS devices, 4 ZC devices, 4 CI devices, 1 VOBC device, 1 DSU device and 6 network equipments are used to obtain normal and attack data [receiving data center asset component event data for a plurality of data center asset components].”
generating a plurality of data center asset component event models for the plurality of data center asset components using the respective vectorized decoded output spaces
[Qichang Li Introduction page 2]: “In order to solve the above problems, this paper proposes a novel hierarchical situation awareness model, which covers the physical layer, network layer and application layer of CBTC systems. As shown in Fig. 1, the model uses machine learning algorithms [a plurality of data center asset component event models where Qichang here teaches that multiple models/algorithms can be used] to perform feature dimensionality reduction, classification and prediction on the critical data of CBTC systems, and achieve real-time situation awareness and alerting.”
The combination of multiple models is further shown by the combination with Hosenie below, as the multiple models work well to create a hierarchy.
One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Figueira and Qichang Li. Figueira and Qichang Li are in the same field of endeavor of machine learning. One of ordinary skill in the art would have been motivated to combine Figueira and Qichang Li in order to implement features that can help upkeep systems and prevent issues for systems by being aware of potential issues ([Qichang Li Introduction page 2]: “In order to solve the above problems, this paper proposes a novel hierarchical situation awareness model, which covers the physical layer, network layer and application layer of CBTC systems. As shown in Fig. 1, the model uses machine learning algorithms to perform feature dimensionality reduction, classification and prediction on the critical data of CBTC systems, and achieve real-time situation awareness and alerting.”).
Hosenie teaches:
and, generating a data center asset event model using a combination of the plurality of data center asset component event models.
[Hosenie 4 Classification Pipeline page 4]: “In contrast, DTs (Quinlan 1986) attempt to split input data recursively according to feature values. Each split creates a branch, and there can be arbitrarily many branches in a tree. Each branch eventually terminates at a leaf node that is associated with a specific label. The goal of tree learning is to build a tree structure that has decision paths (from tree root to leaf nodes) that accurately separate examples moving down the tree so that they arrive at the correct leaf node (i.e. obtain the correct label). Generally, using a single decision tree for classification often leads to poor performance due to low or high variance. For instance, a small change in the training set can lead to a very different learned tree structure. Given the weakness of individual trees to training variance, multiple trees can be combined to overcome this problem. Any method that combines multiple single-model classifiers in this manner is known as an ensemble method [and, generating a data center asset event model using a combination of the plurality of data center asset component event models] (Dietterich 2000). For instance, an RF (Breiman 2001) is simply an addition of decision trees that aggregate tree decisions, usually leading to improved classification performance. Such ensemble methods have been shown (Richards et al. 2011; Lochner et al. 2016; Narayan et al. 2018) to achieve better results than single-model learners on a variety of data sets.”
Support for the creation of a hierarchy for the “generating a data center asset event model” from multiple models is given in [Current Application 00228]: "In various embodiments, one or more hierarchies of data center asset component event models, or one or more sub-hierarchies thereof, or one or more individual componentized data center asset component event models thereof, or a combination thereof, may be combined to generate a data center asset component event model, described in greater detail herein, for a particular data center asset." and the interpretation of the limitation from claim 2.
One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Figueira and Hosenie. Figueira and Hosenie are in the same field of endeavor of machine learning. One of ordinary skill in the art would have been motivated to combine Figueira and Hosenie in order to improve accuracy or results through using multiple models ([Hosenie 4 Classification Pipeline page 4]: “For instance, an RF (Breiman 2001) is simply an addition of decision trees that aggregate tree decisions, usually leading to improved classification performance. Such ensemble methods have been shown (Richards et al. 2011; Lochner et al. 2016; Narayan et al. 2018) to achieve better results than single-model learners on a variety of data sets.”).
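For illustration only, the ensemble idea quoted from Hosenie, combining multiple single-model classifiers into one aggregate decision, may be sketched as a majority vote; the component models below are hypothetical stand-ins:

```python
# Illustrative sketch of the ensemble method in the quoted passage:
# several single-model classifiers are combined (here by majority vote)
# into one aggregate model. The component models are stand-in lambdas.
from collections import Counter

component_models = [
    lambda x: "fault" if x > 0.7 else "normal",
    lambda x: "fault" if x > 0.5 else "normal",
    lambda x: "fault" if x > 0.9 else "normal",
]

def ensemble_predict(x):
    """Combine the component models' votes into a single prediction."""
    votes = [model(x) for model in component_models]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_predict(0.6))   # "normal" (2 of 3 votes)
print(ensemble_predict(0.95))  # "fault" (3 of 3 votes)
```

A random forest, as in the Hosenie quote, aggregates decision trees in an analogous way rather than simple lambdas.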
Regarding Claim 2:
The method of claim 1 is taught by Figueira, Qichang Li, and Hosenie.
Hosenie teaches:
generating a failure model hierarchy using the plurality of data center asset component event models
[Hosenie 4 Classification Pipeline page 4]: “In contrast, DTs (Quinlan 1986) attempt to split input data recursively according to feature values. Each split creates a branch, and there can be arbitrarily many branches in a tree. Each branch eventually terminates at a leaf node that is associated with a specific label. The goal of tree learning is to build a tree structure that has decision paths (from tree root to leaf nodes) that accurately separate examples moving down the tree so that they arrive at the correct leaf node (i.e. obtain the correct label). Generally, using a single decision tree for classification often leads to poor performance due to low or high variance. For instance, a small change in the training set can lead to a very different learned tree structure. Given the weakness of individual trees to training variance, multiple trees can be combined to overcome this problem. Any method that combines multiple single-model classifiers in this manner is known as an ensemble method [generating a failure model hierarchy using the plurality of data center asset component event models] (Dietterich 2000). For instance, an RF (Breiman 2001) is simply an addition of decision trees that aggregate tree decisions, usually leading to improved classification performance. Such ensemble methods have been shown (Richards et al. 2011; Lochner et al. 2016; Narayan et al. 2018) to achieve better results than single-model learners on a variety of data sets.”
The motivation to combine with Hosenie is the same as the motivation to combine with Hosenie in claim 1.
generating a failure model hierarchy using the plurality of data center asset component event models
[Qichang Li Introduction page 2]: “In order to solve the above problems, this paper proposes a novel hierarchical situation awareness model, which covers the physical layer, network layer and application layer of CBTC systems. As shown in Fig. 1, the model uses machine learning algorithms to perform feature dimensionality reduction, classification and prediction on the critical data of CBTC systems, and achieve real-time situation awareness and alerting [failure model].”
Support for the above quote to indicate a form of failure is further given by the teaching of detecting faults [Qichang Li Introduction page 2]: “If vehicle on-broad controller (VOBC) does not receive MA within a period of time, the train should brake urgently and degrade the operation mode until the fault is recovered.”
Qichang Li notes the idea of simulation of systems in teachings from claim 1 from …, where the idea of failure models appears to align with the specification [Current Application 00138]: “Likewise, as used herein, a failure model broadly refers to a model that defines failure rates, frequencies, and other statistical details observed in the operation of one or more data center assets. In various embodiments, a failure model may be implemented to simulate the operation of a particular data center asset and recreate associated failures.”
The motivation to combine with Qichang Li is the same as the motivation to combine with Qichang Li in claim 1.
Regarding Claim 3:
The method of claim 1 is taught by Figueira, Qichang Li, and Hosenie.
Figueira teaches:
the generating the plurality of data center asset component event models further comprises replicating a data center asset component event model to generate the plurality of data center asset component event models
[Figueira 3.1 GANs under the hood]: “A GAN is constituted by two models: a generator model G that tries to generate samples that follow the underlying distribution of the data [the generating the plurality of data center asset component event models further comprises replicating a data center asset component event model to generate the plurality of data center asset component event models]. Nonetheless, these observations are suitably different from the ones in the dataset (i.e., they should not simply reproduce observations that already occur in the dataset). There is also a discriminator model D that, given an observation (from the original dataset or synthesized by the generator), classifies it as fake (produced by the generator) (Typically, the models used for the generator and discriminator are neural networks… The training of the generator is more complicated. G is given as the input random noise (The term latent space is typically used to designate G’s input space.), commonly from a multivariate normal distribution, and the output is a data point with the same features of the original dataset.”
Figueira is noted as teaching the replicating a data center asset component event model, as GANs are noted for teaching the creation of synthetic data (as taught in claim 1 rejection by Figueira). The model being replicated by a GAN is seen as a form of decoder as a result of the interpretation from elements of the claims [Claim 4] and the specification [0022], for the use of latent space is noted as the input to a decoder or a GAN.
Regarding Claim 4:
The method of claim 3 is taught by Figueira, Qichang Li, and Hosenie.
Figueira teaches:
the replicating the data center asset event model comprises using a generative adversarial network (GAN) variant with a convolutional neural network (CNN) discriminator (D) and a gated recurrent unit (GRU) generator (G) to replicate distributions
[Figueira 3.1 GANs under the hood]: “A GAN [the replicating the data center asset event model comprises using a generative adversarial network (GAN)] is constituted by two models: a generator model G [generator (G) to replicate distributions] that tries to generate samples that follow the underlying distribution of the data. Nonetheless, these observations are suitably different from the ones in the dataset (i.e., they should not simply reproduce observations that already occur in the dataset). There is also a discriminator model D [discriminator (D)] that, given an observation (from the original dataset or synthesized by the generator), classifies it as fake (produced by the generator) (Typically, the models used for the generator and discriminator are neural networks… The training of the generator is more complicated. G is given as the input random noise (The term latent space is typically used to designate G’s input space.), commonly from a multivariate normal distribution, and the output is a data point with the same features of the original dataset.”
[Figueira 3.3 GANs Come in a Lot of Flavours page 12]: “Deep Convolutional [variant with a convolutional neural network (CNN)] Generative Adversarial Network, DCGAN, is a GAN architecture that combines convolutional layers (A convolutional layer is a layer that uses a convolution operation. A convolution, in terms of computer vision tasks, consists of a filter (represented by a matrix) that slides through the image pixels (also represented by a matrix) and performs matrix multiplication. This is useful in computer vision tasks because applying different filters to an image (by means of a convolution) can help, for example, detect edges, blur the image, or even remove noise), which are commonly used in computer vision tasks, with GANs.”
Qichang Li teaches:
and a gated recurrent unit (GRU)
[Qichang Li Section III System Model Design and Algorithm page 3]: “In this section, we first provide basic knowledge of the CBTC architecture. Then, we describe the two major characteristics of our hierarchical situation awareness model in detail. One is the SVD entropy algorithm, which is used for data dimensionality reduction from physical layer and network layer. The other is the GRU [a gated recurrent unit (GRU)] neural network with PRD algorithm, which is used to learn and predict the critical data from application layer and achieve situation alerting.”
The motivation to combine with Qichang Li is the same as the motivation to combine with Qichang Li in claim 1.
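For illustration only, a single step of the gated recurrent unit (GRU) referenced in the claim and in Qichang Li may be sketched in scalar form; the weights are untrained toy values, and the shared-weight simplification is an assumption of this sketch, not of the cited reference:

```python
# Minimal scalar GRU cell step, sketching the gated recurrent unit
# mentioned in the claim and the quote. Weights are toy scalars, not
# trained, and the gates share weights purely to keep the sketch short.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h_prev, w=0.5, u=0.5):
    """One GRU update: update gate z, reset gate r, candidate state."""
    z = sigmoid(w * x + u * h_prev)              # update gate
    r = sigmoid(w * x + u * h_prev)              # reset gate (shared toy weights)
    h_tilde = math.tanh(w * x + u * r * h_prev)  # candidate hidden state
    return (1.0 - z) * h_prev + z * h_tilde      # new hidden state

h = 0.0
for x in [1.0, 0.5, -0.2]:  # a short input sequence
    h = gru_step(x, h)
assert -1.0 < h < 1.0
```

The gating structure is what allows a GRU to retain or discard sequence history, which is why it is suited to the prediction tasks described in Qichang Li.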
Regarding Claim 5:
The method of claim 1 is taught by Figueira, Qichang Li, and Hosenie.
Qichang Li teaches:
characterizing a plurality of data center asset faults using the plurality of data center asset component event models
[Qichang Li Section 5 Results and Discussion page 7]: “As the physical layer and network layer of CBTC systems suffer from various categories of attacks [characterizing a plurality of data center asset faults, as faults are noted to be taught by Qichang Li (in claim 1) and here multiple possible faults are noted in the form of attacks], the multi-classification algorithm is introduced to classify different categories of data, such as normal data and attack data.”
[Qichang Li Introduction page 2]: “In order to solve the above problems, this paper proposes a novel hierarchical situation awareness model, which covers the physical layer, network layer and application layer of CBTC systems. As shown in Fig. 1, the model uses machine learning algorithms [using the plurality of data center asset component event models where the combination, as noted in claim 1, is further sensible to include multiple models when in combination with Hosenie] to perform feature dimensionality reduction, classification and prediction on the critical data of CBTC systems, and achieve real-time situation awareness and alerting.”
The motivation to combine with Qichang Li is the same as the motivation to combine with Qichang Li in claim 1.
Regarding Claim 7:
Claim 7 is analogous to claim 1.
The elements related to the processor, data bus, and the like are taught by the teaching of a CPU in claim 1. The “data center asset client module” is interpreted as simply referring to additional computer components, and is thus taught by the teaching of computer parts, as no indication of what the module is or does is given in the claims or the specification.
Regarding Claim 8:
Claim 8 is analogous to claim 2.
Regarding Claim 9:
Claim 9 is analogous to claim 3.
Regarding Claim 10:
Claim 10 is analogous to claim 4.
Regarding Claim 11:
Claim 11 is analogous to claim 5.
Regarding Claim 13:
Claim 13 is analogous to claim 1.
Regarding Claim 14:
Claim 14 is analogous to claim 2.
Regarding Claim 15:
Claim 15 is analogous to claim 3.
Regarding Claim 16:
Claim 16 is analogous to claim 4.
Regarding Claim 17:
Claim 17 is analogous to claim 5.
Claims 6, 12, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Figueira et al (“Survey on Synthetic Data Generation, Evaluation Methods and GANs”), referred to as Figueira in this document, and further in combination with Qichang Li et al (“A Novel Hierarchical Situation Awareness Model for CBTC Using SVD Entropy and GRU with PRD Algorithms”), referred to as Qichang Li in this document, and further in combination with Hosenie et al (“Comparing Multiclass, Binary, and Hierarchical Machine Learning Classification schemes for variable stars”), referred to as Hosenie in this document, and further in combination with Li et al (US 20210319302 A1), referred to as Li in this document.
Regarding Claim 6:
The method of claim 1 is taught by Figueira, Qichang Li, and Hosenie.
The mapping for this claim is done under the interpretations acquired from the specification and the words used to describe the elements in the claim. As a result of the 112 rejections on claim 6, aspects of the variables are at times interpreted as referring to the elements described in paragraph 225 of the specification rather than paragraph 229, as paragraph 229 uses wording closer to the claim but provides no real detail on what the terms mean, nor does it correct the issues raised in the 112 rejections. Some mappings below are intended to help cover possible ideas raised by the wording of the claim (such as the reduced dimension parameter (N) potentially referring to creating a latent space).
Figueira teaches:
a reduced dimension parameter (N)
and an input for behavior replicating parameter (M)
[Figueira 3.1 GANs under the hood page 9]: “The training of the generator is more complicated. G is given as the input random noise [and an input for behavior replicating parameter (M), as paragraph 225 of the specification indicates M refers to a Gaussian signal, which is noise N][a reduced dimension parameter (N) in reference to the noise N being for the Gaussian signal] (The term latent space is typically used to designate G’s input space.), commonly from a multivariate normal distribution, and the output is a data point with the same features of the original dataset.”
[Figueira 4.2.2 Autoencoders page 27]: “An autoencoder (AE) is a special type of feedforward neural network that consists of two parts: an encoder network that learns to compress high-dimensional data into a low-dimensional, latent space [a reduced dimension parameter (N)] representation (the code),” where the specification notes latent space is an example of dimension reduction ([Current Application 00149]: “Accordingly, the construction of a latent space 906 is an example of dimension reduction, which can also be viewed as a form of data compression.”)
Qichang Li teaches:
each failure model of the plurality of failure models is characterized by an independent and identically distributed (IID) thresholding parameter (T)
[Qichang Li Abstract page 1]: “The mean absolute error between the observed Movement Authority (MA) value and the MA value predicted by the GRU algorithm is 23.96, which can be set as a threshold [each failure model of the plurality of failure models is characterized by an independent and identically distributed (IID) thresholding parameter (T), where this quote is there to note the idea of thresholds or threshold parameters being used together with a GRU].”
The motivation to combine with Qichang Li is the same motivation to combine with Qichang Li as used in claim 1.
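For illustration only, the threshold-based alerting described in the quoted abstract, comparing a mean absolute error against a fixed threshold, may be sketched as follows; the observed/predicted values are hypothetical, while the threshold value 23.96 is the one reported in the quote:

```python
# Sketch of threshold-based alerting as in the Qichang Li quote:
# the mean absolute error between observed and predicted values is
# compared against a fixed threshold to decide whether to alert.
def mean_absolute_error(observed, predicted):
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

THRESHOLD = 23.96  # threshold value reported in the quoted abstract

observed  = [100.0, 110.0, 120.0]  # hypothetical example values
predicted = [105.0, 108.0, 150.0]
mae = mean_absolute_error(observed, predicted)  # (5 + 2 + 30) / 3
alert = mae > THRESHOLD
print(round(mae, 2), alert)  # 12.33 False
```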
Figueira does not explicitly teach:
an independent and identically distributed (IID) thresholding parameter (T)
Li teaches:
an independent and identically distributed (IID) thresholding parameter (T)
[Li 0121]: “In this subsection, an embodiment is investigated using simulated data. There are two independent and identically distributed (iid) latent variables [an independent and identically distributed (IID) thresholding parameter (T) where paragraph 225 of specification notes that T refers to IID latent random variables where Li here teaches IID latent variables. Input of noise to a GAN and a GRU are already taught previously. What ‘random’ means in reference to the variables is not clear in relation to the claims, thus is interpreted as to mean which variables are used to not matter, thus taught under this quote from Li] z.sub.1, z.sub.2 following N(0,1) distribution.”
One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Figueira and Li. Figueira and Li are in the same field of endeavor of machine learning. One of ordinary skill in the art would be motivated to combine Figueira and Li in order to include elements noted to be part of a system/idea to improve computer performance ([Li 0001]: “The present disclosure relates generally to systems and methods for computer learning that can provide improved computer performance, features, and uses. More particularly, the present disclosure relates to embodiments for estimating the implicit likelihoods of generative adversarial networks (GANs)”).
Regarding Claim 12:
Claim 12 is analogous to claim 6.
Regarding Claim 18:
Claim 18 is analogous to claim 6.
Regarding Claim 19:
The computer readable medium of claim 13 is taught by Figueira, Qichang Li, and Hosenie.
Figueira does not explicitly teach:
the computer executable instructions are deployable to a client system from a server system at a remote location
Li teaches:
the computer executable instructions are deployable to a client system from a server system at a remote location
[Li 0159]: “However, various system components may or may not be in physical proximity to one another. For example, input data and/or output data may be remotely transmitted [the computer executable instructions are deployable to a client system from a server system at a remote location as the remotely transmitted enables being deployable to a remote location containing a client system] from one physical location to another. In addition, programs that implement various aspects of the disclosure may be accessed from a remote location (e.g., a server) over a network.”
One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Figueira and Li. Figueira and Li are in the same field of endeavor of machine learning. One of ordinary skill in the art would be motivated to combine Figueira and Li in order to be able to utilize systems that are not in physical proximity and enable the distribution of the invention to sell ([Li 0159]: “However, various system components may or may not be in physical proximity to one another. For example, input data and/or output data may be remotely transmitted from one physical location to another. In addition, programs that implement various aspects of the disclosure may be accessed from a remote location (e.g., a server) over a network.” and [Li 0161]: “It shall be noted that embodiments of the present disclosure may further relate to computer products…”).
Regarding Claim 20:
The computer readable medium of claim 13 is taught by Figueira, Qichang Li, and Hosenie.
Figueira does not explicitly teach:
the computer executable instructions are provided by a service provider to a user on an on-demand basis
Li teaches:
the computer executable instructions are provided by a service provider to a user on an on-demand basis
[Li 0161]: “It shall be noted that embodiments of the present disclosure may further relate to computer products [the computer executable instructions are provided by a service provider to a user on an on-demand basis] with a non-transitory, tangible computer-readable medium that have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present disclosure, or they may be of the kind known or available to those having skill in the relevant arts.”
The teaching of a computer product is seen as teaching being provided by a service provider to a user on an on-demand basis, as a product is an element sold to users/clients (the “provided by a service provider to a user”) when the users/clients purchase the product (the “on an on-demand basis”). One of ordinary skill in the art is considered to understand the selling of products.
The motivation to combine with Li is the same as the motivation to combine with Li in claim 19.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bisht et al (US 20230015709 A1) is relevant art, as Bisht notes predicting failure using information from a system [0006] utilizing machine learning models.
Cho et al (“Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation”) is relevant art, as it is considered an important reference for teaching the elements of GRU, which are utilized in the current invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D DEVORE whose telephone number is (703)756-1234. The examiner can normally be reached Monday-Friday 7:30 am - 5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.D.D./Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129