DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Introductory Remarks
In response to communications filed on 18 August 2025, claims 1-4, 6, and 9 are amended per Applicant's request. No claims were cancelled. No claims were withdrawn. No new claims were added. Therefore, claims 1-14 are presently pending in the application, of which claim 1 is presented in independent form.
The previously raised 112 rejection of the pending claims is withdrawn in view of the amendments to the claims.
The previously raised 101 rejection of the pending claims is maintained.
The previously raised 103 rejection of the pending claims is withdrawn in view of the amendments to the claims. A new ground of rejection is set forth below.
Response to Arguments
Applicant’s arguments filed 18 August 2025 with respect to the rejection of the claims under 35 U.S.C. 112 (see Remarks, p. 7) have been fully considered and are persuasive. The amendments overcome the previously raised issues and the 112 rejection has been accordingly withdrawn.
Applicant’s arguments filed 18 August 2025 with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered but are not persuasive.
Applicant’s argument that “Training the monitoring algorithm to perform a particular task is clearly not abstract” (see Remarks, p. 8-9) is not persuasive.
Applicant asserts that the claims cannot be “practically” performed in the human mind, citing SiRF Tech. as an example (Applicant asserting that “claims did not recite a mental process because the claimed invention ‘could not, as a practical matter, be performed entirely in a human’s mind’”) (emphasis added) (see Remarks, p. 8).
However, Applicant has misinterpreted both the case law and how Step 2A, Prong 1 is applied to determine patent eligibility.
Firstly, Applicant has taken this quote out of context. By arguing that the “claimed invention” can or cannot be performed “entirely” in a human’s mind, Applicant indicates a belief that if any single limitation falls outside the realm of mental processes, then the entire claimed invention does not recite a mental process. This is an erroneous interpretation of case law and patent eligibility.
SiRF Tech. made no such statement. Rather, the case specifically dealt with the use of the GPS receiver in combination with performing the calculations.1 In other words, what was assessed was whether the calculations could be performed without the claimed machine components, not whether the entire claimed invention could entirely be performed in the mind as asserted by Applicant.
The Office’s interpretation of SiRF Tech. is further evidenced in the same paragraph quoted by Applicant, in which the court wrote that “In order for the addition of a machine to impose a meaningful limit on the scope of a claim, it must play a significant part in permitting the claimed invention to be performed, rather than function solely as an obvious mechanism for permitting a solution to be achieved more quickly, i.e., through the utilization of a computer for performing calculations” (emphasis added) (Id. at p. 22).
Thus, it is clear from SiRF Tech. that when the courts assess claims at Step 2A, Prong 1, the question is whether the claimed computations necessitate the claimed computing components (e.g., can the calculations be performed without the GPS receiver?). In SiRF Tech., the court determined that the calculations required the GPS receiver.
Here, the present claims contain no such meaningful limits. The majority of the claimed invention concerns analyzing data by performing certain transformations, manipulations, etc., which can be practically performed in the mind of a person. The claimed monitoring algorithm does little more than append the words “apply it” with a computer to these recitations of mental steps. Adding the token extra-solution activity of stating that the monitoring algorithm is “trained” using the inputs/outputs does nothing more than attempt to limit the claims to a particular technological environment (implementation via computers), and is, moreover, a well-understood, routine, and conventional activity (widely known in the industry as “supervised learning”). Thus, adding well-known, token activities does not alter the analysis: the claims recite an abstract idea, invoking the use of a monitoring algorithm to perform the disclosed steps at a high level of generality, with attempts to narrow this monitoring algorithm to a “trained” algorithm, which is well-understood, routine, and conventional.
Therefore, Applicant’s attempts to link the use of a GPS receiver in SiRF Tech. to the monitoring algorithm in the present application are not persuasive. The claimed invention’s use of the monitoring algorithm does not impose meaningful limits on how the steps themselves are performed, but merely invokes the monitoring algorithm at a high level of generality, i.e., in a very generic manner, while attempting to render the claims nonabstract by simply stating, again at a very high level of generality, that the monitoring algorithm is “trained” (which is also well-understood, routine, and conventional).
Thus, the mere inclusion of “training” the algorithm does not, by itself, move the claims outside the realm of abstract ideas, as the calculations are not performed in any particular manner by the monitoring algorithm. Stating that the algorithm is trained does not further describe how the claimed invention performs the disclosed steps that raised the issue of reciting mental tasks and processes, but rather is only a tangential or nominal addition to the claim, at best providing context (i.e., an insignificant field-of-use limitation) and an insignificant extra-solution activity that is well-known in the industry.
Applicant’s argument that “Outputting a warning signal or deactivating a function of the technical device is clearly not abstract” (see Remarks, p. 9) is unpersuasive.
Applicant’s argument is based on the misinterpretation that if a single limitation does not fall within the realm of abstract ideas, then the claim as a whole is not an abstract idea. This is incorrect, as this particular step is a tangential or nominal addition that does not describe how the disclosed calculations are performed, and thus is treated at a later step. See, e.g., MPEP § 2106.05(g) on “Insignificant Extra-Solution Activity” with respect to, e.g., Parker v. Flook, 437 U.S. at 593-95, 198 USPQ at 197 (1978) (a formula would not be patentable by only indicating that it could be usefully applied to existing surveying techniques).
Here, stating that the claims can be applied to outputting a warning signal or deactivating a function of the technical device, does not amount to significantly more, as it does not state how the claimed steps themselves are accomplished.
Applicant’s arguments that the claims, if allegedly reciting an abstract idea, are not directed to an abstract idea due to purported improvements (see Remarks, p. 10-11) are unpersuasive.
Applicant argues that “existing solutions do not enable early detection or prediction of when the technical device will cease to operate correctly”, that “the method [is] used to monitor continuously a real technical device”, and that “the remaining useful life of the technical device can thereby be predicted” (see Remarks, p. 10-11). However, all of these calculations are capable of being performed by a person, with the monitoring algorithm invoked simply to automate a mental task or process, with no meaningful limits on how such steps must necessarily be performed by a computer.
Taking the example of SiRF Tech. that Applicant had cited, “[the machine] must play a significant part in permitting the claimed invention to be performed, rather than function solely as an obvious mechanism for permitting a solution to be achieved more quickly, i.e., through the utilization of a computer for performing calculations”. Here, Applicant is essentially making arguments that the court in SiRF Tech. would have found unpersuasive, which is that the claimed monitoring algorithm functions solely as an obvious mechanism for permitting a solution of monitoring a technical device to be achieved automatically by utilizing a computer for performing calculations.
Applicant’s arguments that “training a monitoring algorithm to predict the system behavior of the technical device” and “outputting a warning signal or deactivating a function of the technical device”, are non-abstract processes that integrate any alleged abstract idea into a practical application, are unpersuasive, because these are insignificant extra-solution activities that are unrelated to the focus of the claimed invention, which is the calculations that are performed during the monitoring.
Applicant’s arguments with respect to Step 2B (see Remarks, p. 11) are unpersuasive for at least the reasons already discussed above and for the 101 rejection below.
Thus, the 101 rejection has been maintained.
Applicant’s arguments filed 18 August 2025 with respect to the rejection of the claims under 35 U.S.C. 103 (see Remarks, p. 11-16) have been fully considered but are moot because the arguments do not apply to the new reference (and thus the new combination of references) being used in the current rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception (i.e., an abstract idea) without significantly more.
Independent Claim 1 recites learning and prediction phases in which input data of a technical device (and, in the learning phase, output data) is supplied; normalizing the input data supplied to the monitoring algorithm to data of a reference signal; computing output comparison data (in the prediction phase); and, in the prediction phase, detecting the inadmissible deviation of the technical device when the output data of the technical device, based on a difference from the output comparison data, lies outside the standard value range. These encompass an evaluation, observation, and/or judgment, as well as mathematical concepts, which fall under the “Mental Processes” grouping of abstract ideas.
The dependent claims variously recite steps relating to transforming data, including preprocessing data. Dependent Claim 2 recites harmonizing a number of values in the input data with a number of values of the data of the reference signal. Dependent Claim 3 recites that when a number of values of the input data and a number of values of the data of the reference signal are equal but the input data is skewed with respect to the data of the reference signal, the input data is mapped onto the data of the reference signal. Dependent Claim 4 recites time-normalization of the input data in a time window onto the reference signal; determining frequency segments of the input data by transforming the input data for time segments of the time window into a frequency domain; and combining the frequency segments of the input data, in which frequency segments are associated with different segments, according to the time-normalization of the first step. Dependent Claim 7 recites transforming the output data of the technical device into the frequency domain and comparing it in the frequency domain with the computed output comparison data. Dependent Claim 8 recites that the output comparison data is transformed into a time domain and compared in the time domain with the output data of the technical device. These encompass an evaluation, observation, and/or judgment, as well as mathematical concepts, which fall under the “Mental Processes” grouping of abstract ideas.
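For context only, the general type of data preprocessing recited in these dependent claims (harmonizing the number of values of the input data with that of the reference signal, then comparing signals in the frequency domain) can be sketched generically. The following Python sketch is purely illustrative: the signals, lengths, and deviation measure are hypothetical and are not drawn from the claims or the specification.

```python
import numpy as np

# Hypothetical input signal and reference signal of differing lengths.
input_data = np.sin(np.linspace(0.0, 2.0 * np.pi, 90))
reference = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))

# Harmonize the number of values of the input data with the number of
# values of the reference signal (cf. dependent Claim 2) by linear resampling.
n_ref = reference.size
resampled = np.interp(
    np.linspace(0.0, 1.0, n_ref),          # sample positions on the reference grid
    np.linspace(0.0, 1.0, input_data.size),  # sample positions of the input data
    input_data,
)

# Transform both signals into the frequency domain (cf. dependent Claims 4
# and 7) and measure their deviation there.
deviation = np.abs(np.fft.rfft(resampled) - np.fft.rfft(reference)).max()
print(resampled.size == n_ref, deviation < 1.0)
```

The point of the sketch is only that each recited operation (resampling, transforming, comparing) is a generic mathematical manipulation of data values.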
Because the claims cover performance in the mind but for the recited hardware/software components, the claims therefore recite an abstract idea.
The claims do not recite additional elements that amount to significantly more than the judicial exception. The recitations to computer components/elements are recited at a high level of generality and recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). See, e.g., the claimed use of a “monitoring algorithm” for performing the claimed steps (independent Claim 1); a control unit in a vehicle for performing the method (dependent Claim 12); a computer program product including computer code configured to carry out the method (dependent Claim 13); a non-transitory machine-readable storage medium configured to store the computer program product (dependent Claim 14).
These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)).
In addition to the insignificant field-of-use limitations described with respect to the computer components/elements described above, the claims variously attempt to narrow the abstract idea to additional field-of-use limitations, which only describe the context rather than a particular manner of achieving the result. In particular, independent claim 1 states that the outputted signal is a warning signal or a deactivation of a function of the technical device. Dependent claim 4 recites that the time window is a “viewed” time window; dependent claim 5 recites that the time-normalization of claim 4 is carried out using “dynamic time warping”; dependent claim 6 recites that transforming of the input data for the viewed time window into the frequency domain is carried out using “a short-time Fourier transform”; dependent claim 9 recites that the reference signal is formed from a plurality of preceding values of the input data; dependent claim 10 recites that the reference signal corresponds to a defined driving maneuver of a vehicle; and dependent claim 11 recites that the monitoring algorithm is embodied as a neural network.
Lastly, independent Claim 1 recites “supplying” the monitoring algorithm with such information. This is an insignificant extra-solution activity and an attempt to limit the claims to a particular technological environment (i.e., implementation via computers). Additionally, independent Claim 1 recites training the monitoring algorithm to predict the system behavior of the technical device using input and output data. This is both an insignificant field-of-use limitation (an attempt to limit the claims to a particular technological environment, namely implementation via computers), as well as an insignificant extra-solution activity. Furthermore, the step of outputting a warning signal or deactivating a function of the technical device is also an insignificant extra-solution activity.2
Accordingly, the claims are not integrated into a practical application of the idea.
The claims do not recite additional elements that amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of various computing hardware components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
The step of “supplying” a monitoring algorithm with information is an insignificant extra-solution activity that is well-understood, routine, and conventional. See, e.g., MPEP 2106.05(d)(II) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”).
Additionally, the step of outputting a warning signal or deactivating a function of the technical device is also an insignificant extra-solution activity that is well-understood, routine, and conventional. See, e.g., MPEP 2106.05(d)(II) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”). See also, e.g., Alice Corp. v. CLS Bank Int'l, 573 U.S. __, 134 S. Ct. 2347 (2014) at p. 15 (“[Using] a computer to obtain data, adjust account balances, and issue automated instructions…are ‘well-understood, routine, conventional activit[ies]’ previously known to the industry”).
Lastly, the step of training an algorithm to be able to perform predictions using learned input and output data is well-understood, routine, and conventional. More particularly, this is conventionally known as “supervised learning”. See, e.g.,
Xiong et al. (US 2017/0024642 A1) at [0002] (“…a training set of data (a training set of inputs each having a known output) is used by a learning algorithm to adjust the feature vectors in the neural network. It is intended that the neural network learn how to provide an output for new input data by generalizing the information it learns in the training stage from the training data”);
Menon et al. (US 11,676,033 B1) at [Background] (“A machine learning model receives input and generates an output based on the received input and on values of the parameters of the model”) and [Summary] (“training a machine learning model having a plurality of model parameters…the method including: obtaining a training input and a corresponding ground truth output; processing the training input using the machine learning model…; computing a loss for the training output by evaluating an objective function that measures a difference between the training output and the ground truth output…”);
MathWorks (“Supervised Learning Workflow and Algorithms”) at [What is Supervised Learning?] (“…a computer ‘learns’ from the observations…. Specifically, a supervised learning algorithm takes a known set of input data and known responses to the data (output), and trains a model to generate reasonable predictions for the response to new data”);
IBM Developer (“Supervised learning models”) at [page 1] (“In supervised learning, you create a function (or model) by using labeled training data that consists of input data and a wanted output. The supervision comes in the form of the wanted output, which in turn lets you adjust the function based on the actual output it produces. When trained, you can apply this function to new observations to produce an output (prediction or classification) that ideally responds correctly”); and
Jatana (“Dive into Supervised Machine Learning”) at [page 1] (“[Supervised learning] is a type of learning in which both input and desired output data are provided. Input and output data are labeled for classification to provide a learning basis for future data processing…. Using these set of variables, we generate a function that map inputs to desired outputs. The training process continues until the model achieves a desired level of accuracy on the training data”).
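As further context for why the claimed training step is characterized as conventional supervised learning, the workflow described in the citations above (adjust model parameters from known input/output pairs, then predict outputs for new inputs) can be reduced to a few lines. The following Python sketch is illustrative only; the data and model form are hypothetical and unrelated to the claimed monitoring algorithm.

```python
import numpy as np

# Training set: known inputs, each with a known output (cf. Xiong at [0002]).
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([1.0, 3.0, 5.0, 7.0])  # underlying rule: y = 2x + 1

# "Training": fit the model parameters (slope, intercept) by least squares,
# i.e., minimize the difference between model output and ground-truth output
# (cf. the loss/objective described in Menon).
A = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
params, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict(x):
    """Apply the trained model to new input data."""
    return params[0] * x + params[1]

# The trained model generalizes to inputs not seen during training.
print(round(predict(4.0), 3))  # → 9.0
```

This train-then-predict pattern is exactly the workflow the cited references describe as standard practice.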
Even when considered as an ordered combination, the claimed elements do not add anything that is not already present when the steps are considered separately. The claims recite a series of abstract steps at a high level of generality. See, e.g., Affinity Labs of Texas LLC v. DirecTV., 838 F.3d 1266 (Fed. Cir. 2016) at p. 7-8 (“At that level of generality, the claims do no more than describe a desired function or outcome, without providing any limiting detail that confines the claim to a particular solution to an identified problem. The purely functional nature of the claim confirms that it is directed to an abstract idea, not to a concrete embodiment of that idea”); and Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016), slip op. 12 (“[The] essentially result-focused, functional character of claim language has been a frequent feature of claims held ineligible under § 101”).
Thus, despite the claims’ attempts to narrow the claims to particular contexts or types of information, such limitations do not move the claims outside the realm of abstract ideas. See, e.g., SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018) at p. 12 (finding that the claimed limitations attempting to narrow the claimed statistical methods to bootstrap, jackknife, and cross-validation were all particular methods of resampling, thus doing no more than simply providing further narrowing of what were still mathematical operations, and added nothing outside the abstract realm).
In other words, at this level of generality, the claims do no more than describe a desired function or outcome, without providing any limiting detail that confines the claims to a particular solution to an identified problem. The purely functional nature of the claims confirms that they are directed to an abstract idea, not to a concrete embodiment of the idea.
A desired goal (i.e., result or effect), absent structural or procedural means for achieving that goal, is an abstract idea. In this case, the claims are directed to an abstract idea because they fail to describe how—by what particular process or structure—the goal is accomplished. Even with the additional elements, the claimed limitations fail to restrict how the goal is accomplished.
Thus, for at least the aforementioned reasons, the claims are rejected under 35 U.S.C. 101 for being directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1).
Regarding claim 1: Hawkins teaches A method for determining an inadmissible deviation of a system behavior of a technical device from a standard value range using a monitoring algorithm, the method comprising:
in a learning phase, (i) supplying the monitoring algorithm with input data and output data of the technical device, … and (iii) training, using the input data and the output data, the monitoring algorithm to predict the system behavior of the technical device (Hawkins, [0048], where the sequence processor 314 learns and stores transitions between spatial patterns represented as sparse vector 342. See also Hawkins, [0066], where processing node 300 incorporates spatial patterns and temporal sequences (i.e., “input data of the technical device”) associated with the anomaly (i.e., “output data”) in its learning (i.e., “training”), such that predictions of whether an anomaly has occurred can be made when the same or similar spatial patterns and temporal sequences are later encountered);
in a prediction phase, which follows the learning phase, (i) supplying the monitoring algorithm with the input data of the technical device, (ii) normalizing the input data supplied to the monitoring algorithm to the data of the reference signal, (iii) computing, in the monitoring algorithm, output comparison data based on the input data supplied in the prediction phase, and (iv) detecting the inadmissible deviation of the technical device in response to a difference between the output comparison data and the output data of the technical device lying outside the … range (Hawkins, [0048], where the sequence processor 314 recognizes and predicts the same or similar transitions in the input signal based on the learned transitions. See also Hawkins, [0066], where the system learned patterns and sequences and produces predictions of whether an anomaly has occurred when the same or similar spatial patterns and temporal sequences are later encountered.
See Hawkins, [0052-0060], where cells send prediction output 404 as SP output 324 to anomaly detector 308. Anomaly detector 308 compares prediction output 404 (i.e., “output comparison data”) with subsequent sparse vector 342 (the actual value or state) (i.e., “the output data of the technical device”) to detect an anomaly.
See IBR-Hawkins, [0092], [0098], and [0114], where data may be preprocessed, such as converting integer values to floating point values, multiplying by a scalar value, applying a function or transform to the data (e.g., a linear, logarithmic, or dampening function, or a Fourier transform) to change the range of data values (i.e., “(ii) normalizing the input data supplied to the monitoring algorithm to the data of the reference signal”)); and
in response to detecting the inadmissible deviation of the technical device, at least one of outputting a warning signal or deactivating a function of the technical device (Hawkins, [0068-0069], where state information associated with anomalies are flagged after further analysis, and issues anomaly signal 352 when sequence processor 314 is placed in a state associated with the flagged anomalies. User interface device 344 alerts the user of the flagged anomaly after receiving anomaly signal 352 from the state monitor 521).
Although Hawkins does not appear to explicitly state that the anomaly signal is a “warning” signal as claimed, the claimed invention does not distinguish over the prior art because the differences between the claim limitations and the prior art’s disclosure are found only in the nonfunctional descriptive material and are not functionally involved in the steps recited. The claimed steps would have been performed the same regardless of the specific type of data involved (i.e., a warning signal as claimed, an anomaly signal as disclosed in the prior art, or some other type of signal alerting that there is some sort of deviation in the data). Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994).
Therefore, it would have been obvious to a person of ordinary skill in the art to have referred to Hawkins’ teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art.
Hawkins does not appear to explicitly teach [wherein the learning phase includes] (ii) normalizing the input data supplied to the monitoring algorithm to data of a reference signal; [and wherein the range is] a standard value range.
Noda teaches [wherein the learning phase includes] (ii) normalizing the input data supplied to the monitoring algorithm to data of a reference signal (Noda, [0096-0097], where the learning means 141 normalizes the feature points extracted in a previous step to convert into a feature vector, and clusters each feature vector to learn clusters. The learning means then stores the cluster center and the cluster radius r of each cluster in the learning result storage unit, completing a series of learning processing (and later, e.g., in Noda, [0102], when the diagnosis means will normalize the diagnosis target data to convert into a feature vector, and calculates an abnormality measure u based on a cluster)); [and]
[wherein the range is] a standard value range (Noda, [0083], where when the abnormality measure u>1, the diagnosis target data is present outside the cluster (outside the normal range (i.e., “standard value range”), and thus the diagnosis unit diagnoses the mechanical facility 2 as “abnormality predictor is present”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins and Noda by (1) normalizing the learning data, with the motivation of making the learned data consistent with the predicted data (which Hawkins discloses normalizing), thereby obtaining consistent (and therefore more accurate) predictions, and (2) comparing the values against a standard value range, with the motivation of enabling greater flexibility in the values that would be considered acceptable.
Regarding claim 11: Hawkins as modified teaches The method as claimed in claim 1, wherein the monitoring algorithm is embodied as a neural network (Hawkins, [0046-0057], where the sequence processor may learn, store, and detect temporal sequences via the use of cells activated by select signals at certain time steps, with connections between cells, where using these learned transitions, the sequence processor 314 recognizes and predicts the same or similar transitions in the input signal by monitoring the activation states of its cells. Although Hawkins does not appear to explicitly utilize the phrase “neural network”, one of ordinary skill in the art would have recognized that Hawkins describes a neural network as disclosed in, e.g., the cited portions3,4).
Regarding claim 12: Hawkins as modified teaches The method as claimed in claim 1, wherein a control unit in a vehicle is configured to perform the method (Hawkins, [Claim 11], where the disclosed system may be implemented as a non-transitory computer-readable storage medium storing instructions, the instructions when executed by a processor cause the processor to implement the disclosed steps).
Although Hawkins as modified does not appear to explicitly state that the processor pertains to “a control unit in a vehicle” for performing the method, Hawkins’ disclosed processor is analogous to the claimed “control unit in a vehicle” as it is reasonably pertinent to the problem faced by the inventor. Therefore, one of ordinary skill in the art would have found it obvious to have modified Hawkins to incorporate additional types of control units (other than a processor within a system, as disclosed by Hawkins) with the motivation of broadening the types of applications of anomaly signal detection, including within the realm of vehicles.
Regarding claim 13: Hawkins as modified teaches The method as claimed in claim 1, wherein a computer program product includes program code configured to carry out the method (Hawkins, [Claim 11], where the disclosed system may be implemented as a non-transitory computer-readable storage medium storing instructions, the instructions when executed by a processor cause the processor to implement the disclosed steps).
Regarding claim 14: Hawkins as modified teaches The method as claimed in claim 13, wherein a non-transitory machine-readable storage medium is configured to store the computer program product (Hawkins, [Claim 11], where the disclosed system may be implemented as a non-transitory computer-readable storage medium storing instructions, the instructions when executed by a processor cause the processor to implement the disclosed steps).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Mezic et al. (“Mezic”) (US 2016/0203036 A1).
Regarding claim 2: Hawkins as modified teaches The method as claimed in claim 1, but does not appear to explicitly teach wherein the normalizing the input data in both of the learning phase and the prediction phase comprises: harmonizing a number of values of the input data supplied to the monitoring algorithm with a number of values of the data of the reference signal.
Mezic teaches harmonizing a number of values of the input data supplied to the monitoring algorithm with a number of values of the data of the reference signal (Mezic, [0039-0042], where the system maps information to a standard format, where the mapping of the provided information into the standard format or language allows the feature detector to determine which indicator functions are to be applied to any given time-series dataset. Note that the “information” comprises different features, i.e., “values”, which are, e.g., mapped into various columns that break up the provided phrase into discrete pieces of information using standard language).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Mezic with the motivation of determining which indicator functions are to be (specifically) applied to any given time-series dataset for detecting anomalies in a particular type of subsystem (Mezic, [0042]), thus resulting in improved detection accuracy (since, e.g., components/sensors may differ and exhibit variability; having different indicator functions thus allows for more accurate detection of anomalies depending on the specific type of component/sensor being monitored).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Saini et al. (“Saini”) (US 2018/0225320 A1).
Regarding claim 3: Hawkins as modified teaches The method as claimed in claim 1, but does not appear to explicitly teach wherein when a number of values of the input data and a number of values of the data of the reference signal are equal, but the input data is skewed with respect to the data of the reference signal, the normalizing the input data in both of the learning phase and the prediction phase comprises: mapping the input data onto the data of the reference signal.
Saini teaches wherein when a number of values of the input data and a number of values of the data of the reference signal are equal, but the input data is skewed with respect to the data of the reference signal, the normalizing the input data in both of the learning phase and the prediction phase comprises: mapping the input data onto the data of the reference signal (Saini, [0060], where if the data of interest is non-normal, a transformation may be applied to the data set of interest to normalize the data set, e.g., by applying a Box-Cox transformation, which provides a data set having a normal or approximately normal distribution (i.e., “the data of the reference signal”). See Yu, [11:23-36], where each signal component has the same length as the signal, and the superposition of all the signal components results in the signal (i.e., “a number of values of the input data and a number of values of the data of the reference signal are equal”).
See Hawkins as modified above with respect to the “normalizing the input data in both the learning phase and the prediction phase”, e.g., more specifically, IBR-Hawkins, [0092], [0098], and [0114], and Noda, [0096-0097] and [0102]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Saini with the motivation of accurately/correctly identifying data points (as skewness can result in incorrect identification of anomalous/non-anomalous data points) (Saini, [0008] and [0060]).
Furthermore, although Hawkins as modified and Saini do not appear to explicitly state that the transformation is performed “when” the number of values of the input data and of the reference signal are equal, one of ordinary skill in the art would have found it obvious to have performed this transformation of the skew only under such circumstances with the motivation of ensuring that the data is comparable (i.e., thus enabling a clean transformation).
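As a purely illustrative aside (not part of the cited record; the function name `boxcox` and the sample series are assumptions for demonstration only), the Box-Cox power transform discussed in Saini at [0060] can be sketched as follows:

```python
import math

def boxcox(values, lam):
    """Box-Cox power transform: log(x) when lam == 0,
    else (x**lam - 1) / lam, applied element-wise."""
    if lam == 0:
        return [math.log(v) for v in values]
    return [(v ** lam - 1) / lam for v in values]

# A right-skewed series (e.g., raw sensor magnitudes) becomes far more
# symmetric after the log case (Box-Cox with lambda = 0), approximating
# the normal or near-normal distribution Saini describes.
skewed = [1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 10.0, 30.0, 100.0]
transformed = boxcox(skewed, 0)
```

Note that λ = 0 reduces to an ordinary log transform, the usual remedy for right-skewed data; in practice λ is fitted to the data rather than fixed.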
Claims 4 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Yu et al. (“Yu”) (US 9,471,544 B1), in further view of Nguyen (“Nguyen”) (US 2019/0311289 A1).
Regarding claim 4: Hawkins as modified teaches The method as claimed in claim 1, wherein the input data supplied to the monitoring algorithm are in time-discrete form (Noda, [0068], [0114], and [0133-0134], where the sensor data acquired by the diagnosis target data acquisition unit includes detection values of a sensor and an elapsed time from the start time t11, e.g., the elapsed times from the start time of the operation process are indicative of a “time-discrete form” as claimed) … .
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins and Noda with the motivation of enabling the system to quickly identify anomalies at any point in time (as opposed to, e.g., a time window), and thus quickly raise any necessary alarms/notifications faster.
Hawkins as modified does not appear to explicitly teach the normalizing the input data in both of the learning phase and the prediction phase comprising: in a first sub-step, performing time-normalization of the input data in a time window onto the reference signal, in a second sub-step, determining frequency segments of the input data by transforming the input data for time segments of the time window into a frequency domain, and in a third sub-step, combining the frequency segments of the input data, associated with different time segments, according to the time-normalization of the first sub-step.
Yu teaches in a first sub-step, performing time-normalization of the input data in a time window onto the reference signal (Yu, [6:34-45], where the system identifies a period of interest and segments the signal based on the identified period. The resulting segments may then be superimposed, thus building a point-by-point model of the cyclic pattern. Furthermore, the processor 106 may identify multiple cyclic components at different time scales by repeating the above analysis using different values for the period) … .
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Yu (hereinafter “Hawkins as modified”) with the motivation of enabling comparisons to be performed even at different time scales.
Hawkins as modified does not appear to explicitly teach in a second sub-step, determining frequency segments of the input data by transforming the input data for time segments of the time window into a frequency domain, and in a third sub-step, combining the frequency segments of the input data, associated with different time segments, according to the time- normalization of the first sub-step.
Nguyen teaches in a second sub-step, determining frequency segments of the input data by transforming the input data for time segments of the time window into a frequency domain (Nguyen, [0072], where the time domain signal is partitioned into overlapping short frames, and the Fourier transform is applied independently on each frame, e.g., using short time Fourier transform. See Yu, [11:37-49], where applying a Fourier transform, such as a Fast Fourier Transform (FFT), transforms the signal component in the time domain to a representation in a frequency domain), and
in a third sub-step, combining the frequency segments of the input data, associated with different time segments, according to the time-normalization of the first sub-step (Nguyen, [0073], where on each frame (i.e., “frequency segment”), the disclosed technology computes spectral energy, spectral centroid and spectral variance, and aggregates over different frames using statistical extraction).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Nguyen (hereinafter “Hawkins as modified”) with the motivation of attempting to ensure that the sampling rate of sensors is high enough to capture various ranges of information pertaining to different classifications, e.g., a vehicle moving or being idle (Nguyen, [0072]), and such that the behavior of sensors can be described at different time scales.5
Regarding claim 6: Hawkins as modified teaches The method as claimed in claim 4, wherein the transformation of the input data for the time window into the frequency domain, which is performed in the second sub-step, is carried out using a short-time Fourier transform (Nguyen, [0072], where the Fourier transform is applied independently on each frame, e.g., using short time Fourier transform).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Nguyen with the motivation of capturing rapidly changing data signals (see, e.g., Nguyen, [0072], where a vehicle may be accelerating/braking).
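For illustration only (the frame length, hop size, and function name `stft` are assumptions of this sketch, not drawn from Nguyen or Yu), the short-time Fourier transform described above — partitioning the time-domain signal into frames and applying a discrete Fourier transform to each frame independently — can be sketched as:

```python
import cmath

def stft(signal, frame_len, hop):
    """Naive short-time Fourier transform: slide a window over the
    signal and take a DFT of each frame independently, yielding one
    frequency-domain spectrum per time segment."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = []
    for frame in frames:
        # DFT bin k of this frame
        spectrum = [sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                        for n, x in enumerate(frame))
                    for k in range(frame_len)]
        spectra.append(spectrum)
    return spectra
```

Each entry of the result is a frequency segment tied to one time segment of the window, which is what permits rapidly changing signals (e.g., a vehicle accelerating or braking) to be characterized frame by frame.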
Regarding claim 7: Hawkins as modified teaches The method as claimed in claim 4, wherein the output data of the technical device is transformed into the frequency domain and compared in the frequency domain with the output comparison data computed in the monitoring algorithm (Hawkins, [0060], where anomaly detector compares prediction output 404 with subsequent sparse vector 342 (the actual value or state) to detect an anomaly. See Yu, [11:37-49], where the system computes a fast Fourier transform (FFT) of the ith signal component, where the FFT transforms the signal component in the time domain to a representation in a frequency domain).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Yu with the motivation of easily identifying and isolating certain frequency components of interest.6
Regarding claim 8: Hawkins as modified teaches The method as claimed in claim 4, wherein the output comparison data computed in the monitoring algorithm is transformed into a time domain and compared in the time domain with the output data of the technical device (Hawkins, [0060], where anomaly detector compares prediction output 404 with subsequent sparse vector 342 (the actual value or state) to detect an anomaly. See Yu, [11:23-36], where the signal is decomposed (i.e., “transformed”)7 into multiple signal components, e.g., by breaking the signal down into signal components in the time domain (i.e., “transformed into a time domain”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Yu with the motivation of preserving instantaneous frequency changes in the signal and phase information (Yu, [11:23-36]), as well as enabling potential integration with (typical) systems that identify faults based on an analysis of data in the time domain8, i.e., greater convenience for integrating with other tools.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Yu et al. (“Yu”) (US 9,471,544 B1), in further view of Nguyen (“Nguyen”) (US 2019/0311289 A1), in further view of Peng et al. (“Peng”) (US 2008/0201397 A1).
Regarding claim 5: Hawkins as modified teaches The method as claimed in claim 4, but does not appear to explicitly teach wherein the time-normalization of the input data onto the reference signal, which is performed in the first sub-step, is carried out using dynamic time warping.
Peng teaches wherein the time-normalization of the input data onto the reference signal, which is performed in the first sub-step, is carried out using dynamic time warping (Peng, [0005] and [0019], where the disclosed system applies dynamic time warping (DTW) to the time-series data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Peng with the motivation of achieving the optimal alignment of highly correlated data and the approximate time shifts between them (Peng, [0033]) and efficiently minimizing the effects of shifting and distortion in time by allowing “elastic” transformation of time series in order to detect similar shapes with different phases.9
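As a purely illustrative sketch (not taken from Peng; the function name `dtw_distance` and the absolute-difference cost are assumptions), the dynamic time warping alignment described above can be expressed as the classic dynamic-programming recurrence:

```python
def dtw_distance(a, b):
    """Dynamic time warping: minimal cumulative cost of elastically
    aligning series `a` to series `b`, using absolute difference as
    the per-point cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = best cost of aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because a point in one series may align with several consecutive points in the other, two series with the same shape but different phases or local time distortions can align at zero or near-zero cost, which is the "elastic" behavior the motivation statement relies on.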
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Yu et al. (“Yu”) (US 9,471,544 B1).
Regarding claim 9: Hawkins as modified teaches The method as claimed in claim 1, but does not appear to explicitly teach wherein the reference signal is formed from a plurality of preceding values of the input data.
Yu teaches wherein the reference signal is formed from a plurality of preceding values of the input data (Yu, [Claim 4], where the historical probability distribution is generated based on previously received samples; a likelihood is computed for each sample point in the signal based at least in part on the historical probability distribution; selecting a likelihood threshold; and comparing that likelihood to the likelihood threshold. As a result, because the historical probability distribution is compared to the likelihood threshold (i.e., “reference signal”), this indicates that the likelihood threshold is “formed from a plurality of preceding values of the input data”, as claimed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Yu with the motivation of determining past trends to predict an anomaly, as past trends provide a reliable reference from which to learn.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Hawkins et al. (“Hawkins”) (US 2014/0067734 A1, incorporating by reference Hawkins et al. (“IBR-Hawkins”) (App. No. 13/218,170, published as US 2013/0054495 A1) at [0042]), in view of Noda et al. (“Noda”) (US 2018/0231969 A1), in further view of Nguyen (“Nguyen”) (US 2019/0311289 A1).
Regarding claim 10: Hawkins as modified teaches The method as claimed in claim 1, but does not appear to explicitly teach wherein the reference signal corresponds to a defined driving maneuver of a vehicle.
Nguyen teaches wherein the reference signal corresponds to a defined driving maneuver of a vehicle (Nguyen, [0017], where the system compares the telematics data with multiple instances of known driving behavior information to recognize safe and unsafe driving behavior, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Hawkins as modified and Nguyen with the motivation of enabling a telemetry-based insurance model, thereby enabling an insurance company, for example, to tailor their insurance plan for the driver (see, e.g., Nguyen, [0003] and [0017]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER whose telephone number is (408)918-7601. The examiner can normally be reached M-F 8-5PM PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEVEEN ABEL-JALIL can be reached at (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IRENE BAKER/Primary Examiner, Art Unit 2152
2 January 2026
1 “We also think that the presence of the GPS receiver in the claims places a meaningful limit on the scope of the claims. In order for the addition of a machine to impose a meaningful limit on the scope of a claim, it must play a significant part in permitting the claimed invention to be performed, rather than function solely as an obvious mechanism for permitting a solution to be achieved more quickly, i.e., through the utilization of a computer for performing calculations. We are not dealing with a solution in which there is a method that can be performed without a machine…there is no evidence here that the calculations here can be performed entirely in the human mind. Here, as described, the use of a GPS receiver is essential to the operation of the claimed methods” (SiRF Tech. at p. 22).
2 See, e.g., MPEP § 2106.05(g) on “Insignificant Extra-Solution Activity” with respect to, e.g., Parker v. Flook, 437 U.S. at 593-95, 198 USPQ at 197 (1978) (a formula would not be patentable by only indicating that it could be usefully applied to existing surveying techniques).
3 See, e.g., Thaler, US 5,852,815 A, which pertains to an artificial neural network with references to activation cells relating to, e.g., various layers of the neural network.
4 Ross et al. US 2017/0103316 A1, which, like Thaler above, pertains to a neural network with references to cells making up those neural networks.
5 Mezic et al. US 2016/0203036 A1 at [0045] (“Once the new time-series are generated, the spectral analyzer 142 can perform a spectral analysis (e.g., …a discrete Fourier transform…) of each of the new time-series to generate a spectral response for each of the new time-series. Performance of the spectral analysis may result in the conversion of the data from the time domain to the frequency domain such that the behavior of the sensors 115 (e.g., whether the data points at different time instances result in a true or false condition) can be described at different time-scales…”).
6 Jardine et al. “A review on machinery diagnostics and prognostics implementing condition-based maintenance”. Mechanical Systems and Signal Processing 20 (2006) 1483-1510. Published 2005. URL Link: <https://www.sciencedirect.com/topics/engineering/frequency-domain-analysis>. Accessed Jun 2025. [3.1.2. Frequency-domain analysis on p. 1487] (“The advantage of frequency-domain analysis over time-domain analysis is its ability to easily identify and isolate certain frequency components of interest”).
7 See, e.g., Rajagopal et al. US 2002/0150298 A1 at [0017] (“…the unified signal transform may be operable to decompose the signal into generalized basis functions…”), thus demonstrating that Yu’s disclosure of a “decomposition” is a type of transform.
8 Mezic et al. at [0062].
9 Senin. “Dynamic Time Warping Algorithm Review”. Published 2008. URL Link: <https://csdl.ics.hawaii.edu/techreports/2008/08-04/08-04.pdf>. See page 2 under section “DTW Algorithm”.