DETAILED ACTION
This Action is responsive to Claims filed 09/03/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-16 have been previously cancelled. Claims 17, 24, 26, and 31-32 have been amended. Claims 17-32 are pending.
Response to Amendment
The amendments to Claim 24 obviate the objections to informalities. The objection to Claim 24 has been withdrawn.
Response to Arguments
Applicant's arguments, see pages 8-19, filed 09/03/2025, with regard to the 35 U.S.C. 101 Rejection of Claims 17-32 have been fully considered but are not persuasive.
In regards to the Applicant’s arguments to Step 2A – Prong 1: The Examiner submits that no specific structure is recited in the “propagating…” limitation that would raise any doubt as to whether the limitation is practically performable by the human mind. The submodules are generically recited functions, per the claim limitation; the generically recited model (the BRI of a model could include a function, equation, etc., absent limitations indicating the model is a neural network model) consists of these submodules, per the claim limitation; therefore, propagating numerical vectors or values through a model made of functions is practically performed within the human mind or with the aid of pen and paper. Likewise, without the recitation of structure indicating otherwise, the “learning…” limitation is not limited to models that preclude observation or interpretation by the human mind with or without the aid of pen and paper. For these reasons, the Examiner submits these limitations point toward abstract idea mental process steps involving observation, evaluation, and judgment regarding the claimed numerical vectors.
In regards to the Applicant’s arguments to Step 2A – Prong 2: The additional element recited in the “providing…” limitation is broadly interpreted as being analogous to an input step (although the Examiner notes this interpretation is merely contextual, as the generic recitation of “providing…” could be interpreted as an abstract idea in and of itself, given that no technical structure is recited), and is therefore mere pre- or post-solution activity, which cannot integrate the abstract idea mental process step(s) into a practical application (MPEP 2106.05(g), Mere Data Gathering). The “wherein parameterizations…” limitation, likewise, is interpreted, contextually and broadly, as instructions to apply the preceding abstract idea mental process steps (although, again, the Examiner submits this is the most generous reading under which the limitation amounts to instructions to apply, given that little to no technical structure precludes a human mind from parameterizing functions in a flow-based order and fixing the values/parameters of those functions based on the output of each function/submodule). Per MPEP 2106.05(f), instructions to apply the abstract idea steps cannot integrate them into a practical application.
Regarding the specific-improvement argument under Step 2A – Prong 2: The Examiner submits the improvements cited by the Applicant are a direct result of the abstract idea mental process steps highlighted above. Per MPEP 2106.05(a), the asserted improvement cannot come from the abstract idea itself, but must come from a specific structure or additional element(s).
Regarding the Applicant’s arguments to Step 2B: Per MPEP 2106.05(d)(II)(i) and (d)(II)(iv) (both first list), the “providing…” step (interpretable as retrieving or inputting data) is well-understood, routine, or conventional activity. These subsections were cited in the Office Action dated 04/03/2025. Likewise, the additional element being interpreted as instructions to apply the interpretable abstract idea mental process steps cannot recite significantly more than the abstract idea per MPEP 2106.05(f): “Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words "apply it" (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer.” (Although, again, there is a reasonable interpretation of this limitation amounting to extra-solution activity, given the lack of technical recitation).
For these reasons, the Examiner submits the 101 Rejection of Claims 17-32 was conducted thoroughly and is properly supported in the Office Action dated 04/03/2025. The amendments to the independent claims do not significantly affect the interpretation of the limitation that was amended; therefore, the Examiner upholds the rejection. See the updated 35 U.S.C. 101 Rejection below.
Applicant’s arguments, see pages 19-21, filed 09/03/2025, with regards to the 35 U.S.C. 103 Rejection(s) of Claims 17-32 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 17-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more; and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea, see Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)
Step 1:
Claims 17-25 recite a computer-implemented method for training a machine learning system, which falls under the statutory category of a process. Claims 26-30 recite a computer-implemented method for applying a trained machine learning system, which falls under the statutory category of a process. Claim 31 recites a computer-implemented system for training a machine learning system, which falls under the statutory category of a machine. Claim 32 recites a non-transitory machine-readable memory medium on which is stored a computer program for training a machine learning system, which falls under the statutory category of a manufacture.
Step 2A – Prong 1:
Claim 17 recites an abstract idea. The limitations of “propagating the numerical vectors of the at least one training data set through a parameterizable generic flow-based model, the parameterizable generic flow-based model including a concatenation of at least two parameterizable submodules, each of the submodules being a parameterizable function;” and “learning model parameters of the parameterizable generic flow-based model;”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper.
Propagating a vector through a flow-based model whose submodules are functions is interpretable as an algorithm in which repeated calculations are performed, which is practically performable within the human mind or with the aid of pencil and paper; likewise, learning model parameters is practically performable within the human mind or with the aid of pencil and paper.
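Purely as an illustration of the characterization above (the function names and parameter values below are hypothetical and drawn from neither the claims nor the cited references; NumPy is assumed), propagating a numerical vector through a concatenation of two parameterizable functions amounts to a short sequence of simple calculations:

```python
import numpy as np

def submodule(x, scale, shift):
    # One parameterizable function: an elementwise affine map.
    return scale * x + shift

def propagate(x, params):
    # "Concatenation of submodules": apply each function in flow order.
    for scale, shift in params:
        x = submodule(x, scale, shift)
    return x

x = np.array([1.0, 2.0, 3.0])        # a numerical vector
params = [(2.0, 0.5), (0.5, -1.0)]   # parameters of two submodules
z = propagate(x, params)
print(z)  # [0.25 1.25 2.25] -- each step is computable by hand
```

Each intermediate value here can be reproduced with pencil and paper, which is the point of the mental-process characterization.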
Step 2A – Prong 2:
The additional elements of claim 17 do not integrate the abstract idea into a practical application. The claim recites the additional elements “A computer-implemented method”, “a number of numerical vectors”, “a parameterizable function”, and “output data”, which are recognized as generic computer components recited at a high level of generality. Although these components hold and execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.)
The additional elements of “a machine learning system”, “training data”, “a parameterizable generic flow-based model”, “parameterizable submodules” and “model parameters” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The additional elements recited in the limitation “parameterizations of each of the parameterizable submodules are learned successively in a flow direction of the parameterizable generic flow-based model and are fixed before parameterizations of the parameterizable submodule next in the flow direction are learned, and the learning being directed at output data of each of the submodules being distributed according to a predetermined probability distribution, wherein each of the submodules is a concatenation of parameterizable functions.” are contextually interpreted to be mere instructions to apply the abstract idea of propagating a vector through functions and learning model parameters (See MPEP 2106.04(d)(2), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application).
The additional element recited in the limitation “providing at least one training data set that includes a number of numerical vectors;” is interpreted, contextually and broadly, to be an insignificant extra-solution data retrieval or transmittal step (See MPEP 2106.05(g), Mere Data Gathering); therefore, this additional element cannot integrate the abstract idea mental process steps into a practical application.
Step 2B:
The only limitations on the performance of the described method are limitations reciting “A computer-implemented method”, “a number of numerical vectors”, “a parameterizable function”, and “output data”. These elements are insufficient to transform the judicial exception into a patentable invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)).
The additional elements of “a machine learning system”, “training data”, “a parameterizable generic flow-based model”, “parameterizable submodules” and “model parameters” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The additional elements recited in the limitation “parameterizations of each of the parameterizable submodules are learned successively in a flow direction of the parameterizable generic flow-based model and are fixed before parameterizations of the parameterizable submodule next in the flow direction are learned, and the learning being directed at output data of each of the submodules being distributed according to a predetermined probability distribution, wherein each of the submodules is a concatenation of parameterizable functions.” are found to be mere instructions to apply the abstract idea of propagating a vector through functions and learning model parameters (See MPEP 2106.05(f), indicating mere instructions to apply an abstract idea do not recite significantly more).
In addition, the claimed “providing at least one training data set that includes a number of numerical vectors;” is acknowledged to be well-understood, routine, conventional activity (see, e.g., court recognized WURC examples in MPEP 2106.05(d)(II)(i)).
Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
For the reasons above, claim 17 is rejected as being directed to patent-ineligible subject matter under 35 U.S.C. 101. This rejection applies equally to independent claims 26, 31, and 32.
Claim 26 recites similar limitations to claim 17, with the inclusion of the additional element “applying the trained machine learning system, the trained machine learning system being trained by:”. This limitation has been evaluated under Step 2A – Prong 2 and Step 2B and found to be mere instructions to apply the abstract idea of propagating a vector through functions and learning model parameters (See MPEP 2106.05(f)).
Claim 31 recites similar limitations to claim 17, with the inclusion of additional elements “A computer-implemented system for training a machine learning system, the computer-implemented system configured to:” (generic computer components and additional elements that generally link).
Claim 32 recites similar limitations to claim 17, with the inclusion of additional elements “A non-transitory machine-readable memory medium on which is stored a computer program for training a machine learning system, the computer program, when executed by a computer, causing the computer to perform the following steps:” (generic computer components and additional elements that generally link).
Dependent Claims:
Claim 18 recites a non-generic computer component (“a generic autoregressive flow”), which is recited at a high level of generality and is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 19 recites non-generic computer components (“a conditioner”, “an associated transformer”, “an autoregressive neural network”), which are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 20 recites a non-generic computer component (“a recurrent neural network”), which is recited at a high level of generality and is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 21 recites refinements to the data flow of the interpretable abstract idea steps of claim 17.
Claim 22 recites refinements to a data type to propagate in the interpretable abstract idea step of claim 17.
Claim 23 recites abstract idea mathematical calculation steps (“a measure for performance is calculated after the learning of each respective submodule of the submodules, the performance being determined via a Kullback-Leibler divergence between the predetermined probability distribution and a distribution of the output data of the respective submodule”) and abstract idea mental process steps (“the generic flow-based model is extended by further submodules or is reduced by existing submodules.”).
Claim 24 recites refinements to the non-generic computer components of claim 17 and another non-generic computer component (“a parameterizable invertible mapping”), which are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 25 recites refinements to data types. The additional elements are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 27 recites a mere data-gathering step (“receiving a time series of sensor data of a device; and calculating a probability for a new data point of the time series from the learned probability distribution;”) and an abstract idea mental process step (“assessing the data point of the time series as an anomaly when the probability for the data point violates a further predetermined criterion.”). The additional elements of this claim have been evaluated under Step 2A – Prong 2 (See MPEP 2106.05(g)) and reevaluated under Step 2B (see, e.g., court recognized WURC examples in MPEP 2106.05(d)(II)(i)).
Claim 28 recites data-types recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 29 recites abstract idea mental process steps “generating new data points for continuing a time series of sensor data resulting from normally distributed data points in a counter-flow direction of the parameterizable generic flow-based model;”, “controlling a device or a system based on the new data points,”, and “determining a state of a device or of a system based on the new data points.”
Claim 30 recites data-types recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 17-22, 24-26, and 31-32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (Anomaly Detection in High-Dimensional Data Based on Autoregressive Flow, 2020), hereinafter Yu; Brownlee, Jason (How to Use Greedy Layer-Wise Pretraining in Deep Learning Neural Networks, 2020), hereinafter Brownlee; and Oliva et al. (Transformation Autoregressive Networks, 2018), hereinafter Oliva.
In regards to claim 17: The present invention claims: “A computer-implemented method for training a machine learning system, comprising:” Yu’s model, ADAF, is a trained machine learning model (Abstract).
“providing at least one training data set that includes a number of numerical vectors;” Yu experiments on their model with multiple input datasets of varying dimensionality, and encode some of them (Page 133, 4.1 Datasets, mapping to inputting as numerical vectors).
“and propagating the numerical vectors of the at least one training data set through a parameterizable generic flow-based model, the parameterizable generic flow-based model including a concatenation of at least two parameterizable submodules, each of the submodules being a parameterizable function;” Yu teaches their model, ADAF, being a flow-based model (at least Page 129, Section 3), in which two submodules are used (Page 131, Multiple Modules), each of which is the single module of Pages 130-131 and representative of the functions shown in Equation 5 (Page 130).
“and learning model parameters of the parameterizable generic flow-based model;” Yu teaches an objective function that guides the model’s training based on parameters μ and α (Pages 131-132).
“wherein parameterizations of each of the parameterizable submodules are learned successively in a flow direction of the parameterizable generic flow-based model…” Yu’s Equation 12 (model parameters) utilizes both x and z, which are outputs of each single module, and Yu teaches multiple of said modules stacked into a flow (Equations 9-10, mapping to “each of the parameterizable submodules are learned successively in a flow direction”).
“and the learning being directed at output data of each of the submodules being distributed according to a predetermined probability distribution,” Yu teaches “sample neural density can be further inferred… where pZ is a simple tractable distribution (e.g., an isotropic Gaussian distribution).” This is then used in the training in Equation 12.
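As a purely illustrative aside (not part of the claim mapping; the function name below is hypothetical, and NumPy is assumed), the “simple tractable distribution” Yu references can be sketched numerically: a submodule output directed toward an isotropic Gaussian is simply an output whose log density under that Gaussian is driven higher during training.

```python
import numpy as np

def gaussian_log_density(z):
    # Log density of an isotropic standard Gaussian -- the kind of
    # predetermined probability distribution toward which the
    # submodule outputs are directed.
    d = z.shape[-1]
    return -0.5 * (d * np.log(2.0 * np.pi) + np.sum(z * z, axis=-1))

# The density peaks at the mode z = 0, so pushing outputs toward the
# base distribution increases this quantity.
print(gaussian_log_density(np.zeros(3)) > gaussian_log_density(np.ones(3)))  # True
```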
Yu fails to explicitly teach “…and are fixed before parameterizations of the parameterizable submodule next in the flow direction are learned,” However, Brownlee teaches “Pretraining involves successively adding a new hidden layer to a model and refitting, allowing the newly added model to learn the inputs from the existing hidden layer, often while keeping the weights for the existing hidden layers fixed. This gives the technique the name “layer-wise” as the model is trained one layer at a time.
The technique is referred to as “greedy” because the piecewise or layer-wise approach to solving the harder problem of training a deep network. As an optimization process, dividing the training process into a succession of layer-wise training processes is seen as a greedy shortcut that likely leads to an aggregate of locally optimal solutions, a shortcut to a good enough global solution.” (Page 2)
Brownlee highlights the usefulness of training a deep network greedily, both for computational ease and to address the vanishing gradient problem (Page 2). It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to train the multiple-module flow of Yu greedily, by fixing the parameters of each singular module, to improve the model’s overall computational overhead and to allow each module to properly receive full gradient adjustments before allowing the flow to continue to the next module.
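The greedy, module-by-module rationale above can be sketched as follows (illustration only: moment matching stands in for actual gradient training, all names are hypothetical, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=1000)  # stand-in training data

fixed_modules = []  # (shift, scale) pairs, fixed once learned

for _ in range(2):  # two submodules in the flow
    # "Learn" this module so its output matches N(0, 1) -- the
    # predetermined distribution -- here by simple moment matching.
    shift, scale = x.mean(), x.std()
    fixed_modules.append((shift, scale))  # parameters are now fixed
    x = (x - shift) / scale              # output feeds the next module

# After greedy training, the final output is standardized.
print(abs(x.mean()) < 1e-6, abs(x.std() - 1.0) < 1e-6)  # True True
```

The key feature mirrored here is that each module's parameters are frozen before the next module in the flow direction is fit.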
The combination of Yu and Brownlee fails to explicitly teach: “wherein each of the submodules is a concatenation of parameterizable functions.” (Although Yu teaches “where x is the input data for d dimensions, K is the number of single module, fi represents an autoregressive module, z is the latent variable.” (emphasis added), and the Examiner submits one could reasonably draw a mapping between “an autoregressive module” and the generic recitation of a concatenation of parameterizable functions, the BRI of which could include almost any neural network model with layers.) However, Oliva, in a similar field of endeavor, teaches transformation autoregressive networks, especially throughout Section 2. The Examiner submits a person of ordinary skill in the art would recognize the module of Yu to be similar to the structures/functions of Oliva, and that their combination reads on the generic recitation of “a concatenation of parameterizable functions.”
Oliva teaches “we show that jointly leveraging transformations of variables and autoregressive conditional models, results in a considerable improvement in performance.” (Abstract). It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to use the structures or methods of Oliva in a combination of Yu and Brownlee to improve the overall model performance.
In regards to claim 18: The present invention claims: “wherein at least one of the submodules of the generic flow-based model includes a generic autoregressive flow.” See Equations 9 and 10 of Yu (fi represents an autoregressive module) (Page 131). See Fig. 1 for each variable being dependent on past variables (autoregressive flow).
In regards to claim 19: The present invention claims: “wherein each generic autoregressive flow includes a conditioner parameterizable by model parameters and an associated transformer parameterizable by model parameters, each conditioner being a function that determines the model parameters of the associated transformer and is an autoregressive neural network.” Sections 3.2 and 3.3 of Yu teach each single module being an autoregressive model, which generates a transformed dataset x from a base distribution and model parameters μ and α. These values are later used in Equation 12 during model learning (mapping to “an autoregressive neural network” using “a conditioner” (BRI of conditioning is the use of values other than solely the input data), and “an associated transformer” (to achieve a transformed dataset) for determining model parameters).
In regards to claim 20: The present invention claims: “wherein at least one of the submodules of the generic flow-based model includes a recurrent neural network.” See Equation 5 for a recurrent operation performed in the calculation of the autoregressive density (Page 130).
In regards to claim 21: The present invention claims: “wherein the numerical vectors of the at least one training data set propagate via the recurrent neural network into the generic flow-based model.” Yu teaches “We improve the model fit by stacking multiple instances of the single model into a deeper flow… where x is the input data for d dimensions…” (Page 131).
In regards to claim 22: The present invention claims: “wherein time series of differing length propagate via the recurrent neural network into the generic flow-based model.” Yu experiments on their model with KDDCUP, which contains continuous data. Yu also compares their model against other models pertaining to the analysis of time-series anomaly detection (Page 128), and a brief search shows that time-series or real-time analysis was a common problem in autoregressive models at the time of Yu, Brownlee, and Oliva’s writing.
In regards to claim 24: The present invention claims: “each of the parameterizable functions in each of the submodules includes a parameterizable transformer as a final chain link of the concatenation, the parameterizable transformer being a parameterizable invertible mapping.” See above for how a combination of Yu, Brownlee, and Oliva reads on the generic recitation of a concatenated set of functions, and how both Yu and Oliva teach performing transformations as part of their respective processes. See also Yu, where each single module performs an invertible transformation (Sections 3.1 and 3.2).
In regards to claim 25: The present invention claims: “wherein the predetermined probability distributions are each a normal distribution.” Yu teaches “Normalizing means that the variable integral of the representation space is 1, which meets the definition of probability distribution function.” (Page 129)
In regards to claim 26: Claim 26 recites similar limitations to claim 17, with the exception of “A computer-implemented method for applying a trained machine learning system, the method comprising: applying the trained machine learning system, the trained machine learning system being trained by:” however, the combination of Yu, Brownlee, and Oliva reads on “applying” the method for anomaly detection (Yu, Abstract), therefore both claims are similarly rejected.
In regards to claim 31: Claim 31 recites similar limitations to claim 17, with the exception of “A computer-implemented system for training a machine learning system, the computer-implemented system configured to:”, therefore both claims are similarly rejected.
In regards to claim 32: Claim 32 recites similar limitations to claim 17, with the exception of “A non-transitory machine-readable memory medium on which is stored a computer program for training a machine learning system, the computer program, when executed by a computer, causing the computer to perform the following steps:”, therefore both claims are similarly rejected.
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu, Brownlee, and Oliva as applied to claim 17 above, and further in view of Chang et al. (A FLOW-BASED ANOMALY DETECTION METHOD USING ENTROPY AND MULTIPLE TRAFFIC FEATURES, 2010), hereinafter Chang.
In regards to claim 23: The present invention claims: “wherein a measure for performance is calculated after the learning of each respective submodule of the submodules, the performance being determined via a Kullback-Leibler divergence between the predetermined probability distribution and a distribution of the output data of the respective submodule and, after the learning of each of the submodules according to a predetermined criterion for the performance, the generic flow-based model is extended by further submodules or is reduced by existing submodules.” While the combination of Yu, Brownlee, and Oliva reads on “wherein a measure for performance is calculated after the learning of each respective submodule of the submodules,” and “after the learning of each of the submodules according to a predetermined criterion for the performance, the generic flow-based model is extended by further submodules or is reduced by existing submodules.” in an implementation of greedy layer-wise pretraining (and where it is noted that Bengio (2006) teaches “An approach that has been explored with some success in the past is based on constructively adding layers. This was previously done using a supervised criterion at each stage,” referenced herein not as part of the rejection, but simply to illustrate that adding layers in a greedy fashion based on a criterion would have been known in the art at the time of Yu, Brownlee, and Oliva’s writing), the combination fails to explicitly teach “the performance being determined via a Kullback-Leibler divergence between the predetermined probability distribution and a distribution of the output data of the respective submodule.” However, Chang, in the similar field of flow-based model anomaly detection, uses the Kullback-Leibler divergence for anomaly detection within a network (Pages 223-225).
Chang shows that the Kullback-Leibler divergence was a known metric for anomaly detection at the time of Yu, Brownlee, and Oliva’s writing. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s filing to utilize this metric when determining whether or not to add another submodule to the model.
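For illustration only (hypothetical names; NumPy assumed; this is a discrete-histogram sketch, not Chang's or the Applicant's computation), a Kullback-Leibler divergence used as such a performance measure compares a submodule's output distribution against the predetermined one:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) over discrete bins; zero only when the distributions
    # match, so a larger value flags a submodule whose output
    # distribution strays from the predetermined one.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

target = np.array([0.25, 0.25, 0.25, 0.25])  # predetermined distribution
output = np.array([0.40, 0.30, 0.20, 0.10])  # observed output histogram
print(kl_divergence(target, target))         # 0.0
print(kl_divergence(target, output) > 0.0)   # True
```

A threshold on this value could then serve as the “predetermined criterion” for extending or reducing the model.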
Claim(s) 27-30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu, Brownlee, and Oliva as applied to claim 26 above, and further in view of Anderson et al. (US 2017/0284903 A1), hereinafter Anderson.
In regards to claim 27: The present invention claims: “receiving a time series of sensor data of a device; and calculating a probability for a new data point of the time series from the learned probability distribution; and assessing the data point of the time series as an anomaly when the probability for the data point violates a further predetermined criterion.” While the combination of Yu, Brownlee, and Oliva reads on greedily pretraining an autoregressive model for anomaly detection, their disclosures fail to explicitly teach generating new data points, as is often used in such detection methods. However, Anderson, in a similar anomaly detection field, teaches “The instructions can cause the processing device to use the model to determine a predicted magnitude (new data point) value for the particular component of the time series associated with the target sensor (time series of a sensor) based on the additional sensor measurements. The instructions can cause the processing device to identify the anomaly with the machine by determining that (i) the predicted magnitude value of the particular component meets or exceeds a predetermined threshold (predetermined criterion); or (ii) multiple predicted magnitude values for the particular component comprise a predetermined pattern (predetermined criterion) that is indicative of the anomaly.” ([0004]).
Anderson highlights that complex industrial systems can be monitored with a generated model to detect failures or anomalies within a sensor system (Background). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s filing to use the combination of Yu, Brownlee, and Oliva in an industrial system or sensor network to predict failures or anomalies.
In regards to claim 28: The present invention claims: “wherein the time series includes: a sequence of image data or audio data; or a sequence of data for monitoring an operator of a device or of a system; or a sequence of data for monitoring or controlling a device or a system; or a sequence of data for monitoring or controlling an at least semi-autonomous robot.” Anderson reads on “a sequence of data for monitoring or controlling a device or a system” by the monitoring of an industrial sensor system or network.
In regards to claim 29: The present invention claims: “generating new data points for continuing a time series of sensor data resulting from normally distributed data points in a counter-flow direction of the parameterizable generic flow-based model; and (i) controlling a device or a system based on the new data points, or (ii) determining a state of a device or of a system based on the new data points.” See above, where the combination of Yu, Brownlee, and Oliva reads on “normally distributed data points in a counter-flow direction of the parameterizable generic flow-based model.” A combination with Anderson would yield predicted magnitude values ([0004]) for time series sensor data, and reads on “determining a state of a device” if a sensor is behaving anomalously.
In regards to claim 30: The present invention claims: “a sequence of data of an at least semi-autonomous vehicle to select a vehicle strategy; or a sequence of sensor data of one part of a digital twin to simulate data of another part of the digital twin; or a sequence of utilized capacity data in nodes of a network for simulating and analyzing utilized capacity, in order to assign network resources based on the simulated utilized capacity, the network being a computer network or a telecommunications network or a wireless network.” Anderson reads on “a sequence of sensor data of one part of a digital twin to simulate data of another part of the digital twin,” with the model being based on a network of industrial sensors and predicting data of one or more of those sensors.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRIFFIN TANNER BEAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121