Prosecution Insights
Last updated: April 19, 2026
Application No. 18/248,432

METHOD AND DEVICE FOR TRAINING A CLASSIFIER OR REGRESSOR FOR A ROBUST CLASSIFICATION AND REGRESSION OF TIME SERIES

Non-Final OA: §101, §103, §112

Filed: Apr 10, 2023
Examiner: CADY, MATTHEW ALAN
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift for resolved cases with an interview vs without)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 11 total applications across all art units, 11 currently pending

Statute-Specific Performance

§101: 24.3% (-15.7% vs TC avg)
§103: 43.2% (+3.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)

Note: Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 22 recites the limitation "wherein the gradient for the gradient ascent is adapted according to the eigenvalues and eigenvectors." There is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

According to the first part of the analysis, in the instant case, claims 16-26 are directed to a method and claims 27-29 are directed to an apparatus. Each of these claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding claim 16, Step 2A Prong One: b.
ascertaining a first adversarial example, wherein the first adversarial example is an overlap of the first training time series with an ascertained first adversarial perturbation, wherein a first noise value of the first adversarial perturbation is not greater than a specifiable threshold, wherein the specifiable threshold is based on ascertained noise values of the training time series; (This step for ascertaining an adversarial example is considered a mental process.)

Step 2A Prong Two

A computer-implemented method for training a machine learning system, (This step for training a machine learning system is extra-solution activity. See MPEP § 2106.05(g)) the machine learning system being configured to ascertain an output signal based on a time series of input signals of a technical system, the output signal characterizing a classification and/or a regression result of at least one first operating state and/or at least one first operating variable of the technical system, the method comprising the following steps:

a. ascertaining a first training time series of input signals from a plurality of training time series and a desired training output signal which corresponds to the first training time series, the desired training output signal characterizing a desired classification and/or a desired regression result of the first training time series; (This step for ascertaining and outputting data is considered extra-solution activity. See MPEP § 2106.05(g))

c. ascertaining a training output signal for the first adversarial example using the machine learning system; and (This step for ascertaining data is considered extra-solution activity. See MPEP § 2106.05(g))

d. adapting at least one parameter of the machine learning system according to a gradient of a loss value, the loss value characterizing a deviation of the desired training output signal from the ascertained training output signal. (This step for updating model parameters is considered extra-solution activity.
See MPEP § 2106.05(g))

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes such as ascertaining information, while the additional elements of receiving and outputting data, and training and updating a generic machine learning model, are a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Regarding claim 17,

Step 2A Prong One

(Claim 17 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 17 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein the specifiable threshold corresponds to an average noise value of the first training time series of the plurality of training time series. (This step for specifying a threshold based on the time series does not integrate the abstract idea.)

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes without any technological improvement or inventive step.

Regarding claim 18,

Step 2A Prong One

(Claim 18 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 18 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein a noise value of each training time series or adversarial perturbation or adversarial example is ascertained according to a Mahalanobis distance.
(This step for specifying how the noise value is obtained for the abstract idea is without improvement and therefore does not integrate the abstract idea.)

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes while the additional elements of ascertaining a noise value based on a generic distance formula are a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Regarding claim 19,

Step 2A Prong One

[Image: claimed formula of claim 19] (This step for ascertaining a noise value according to a formula is considered a mathematical concept.)

Step 2A Prong Two

The claim does not include additional elements, when considered separately and in combination, that integrate the judicial exception into a practical application.

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mathematical concepts without any technological improvement or inventive step.

Regarding claim 20,

Step 2A Prong One

[Image: claimed formula of claim 20] (This step for ascertaining a pseudo-inverse covariance matrix according to a formula is considered a mathematical concept.)

Step 2A Prong Two

The claim does not include additional elements, when considered separately and in combination, that integrate the judicial exception into a practical application.
Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mathematical concepts without any technological improvement or inventive step.

Regarding claim 21,

Step 2A Prong One

(Claim 21 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 21 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein the first adversarial perturbation is ascertained according to the following steps:

h. providing a second adversarial perturbation;

i. ascertaining a third adversarial perturbation, wherein with respect to the first training time series, the third adversarial perturbation is stronger than the second adversarial perturbation;

j. providing the third adversarial perturbation as the first adversarial perturbation when a distance of the third adversarial perturbation from the second adversarial perturbation is less than or equal to a specifiable threshold;

k. otherwise, when a noise value of the third adversarial perturbation is less than or equal to an expected noise value, performing step i., wherein, in the performance of step i., the third adversarial perturbation is used as the second adversarial perturbation;

l. otherwise, ascertaining a projected perturbation and performing step j., wherein, in the performance of step j., the projected perturbation is used as the third adversarial perturbation, and wherein the projected perturbation is ascertained by an optimization such that a distance of the projected perturbation from the second adversarial perturbation is as small as possible and the noise value of the projected perturbation is equal to the expected noise value.
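Read as an algorithm, steps h. through l. describe a projected-gradient-style attack loop that repeatedly strengthens a perturbation and projects it back onto a noise budget. A minimal sketch of that loop; the helpers `stronger`, `noise_value`, and `project` are hypothetical placeholders, not the applicant's implementation:

```python
import numpy as np

def ascertain_first_perturbation(x, stronger, noise_value, project,
                                 expected_noise, conv_threshold,
                                 max_iter=1000):
    """Sketch of steps h.-l. The helpers are hypothetical placeholders:
    stronger(x, p) returns a perturbation stronger than p with respect
    to x (e.g. one PGD gradient-ascent step), noise_value(p) scores the
    noise of p, and project(p, budget) returns the perturbation closest
    to p whose noise value equals the budget."""
    second = np.zeros_like(x)          # h. provide a second perturbation
    third = stronger(x, second)        # i. ascertain a stronger third perturbation
    for _ in range(max_iter):
        if np.linalg.norm(third - second) <= conv_threshold:
            return third               # j. converged: this is the first perturbation
        if noise_value(third) <= expected_noise:
            second = third             # k. within the noise budget: repeat step i.
            third = stronger(x, second)
        else:
            # l. project onto the noise budget, then re-check step j.
            third = project(second, expected_noise)
    return third
```

The termination condition of step j. (the distance between successive perturbations falling below a threshold) is what makes this a convergent fixed-point iteration rather than an unbounded attack.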
These steps in combination do integrate the abstract idea of "…ascertaining a first adversarial example, wherein the first adversarial example is an overlap of the first training time series with an ascertained first adversarial perturbation, wherein a first noise value of the first adversarial perturbation is not greater than a specifiable threshold, wherein the specifiable threshold is based on ascertained noise values of the training time series" into a practical application, because this method for determining the adversarial perturbation (and therefore the ascertained adversarial example) does improve the technology, as stated in the applicant's specification:

"The advantage of this design of the method is that the machine learning system can be trained using PGD, wherein the attack model is limited to an expected noise of the plurality of training time series. As a result, the machine learning system advantageously becomes more robust to noise, wherein the predictive accuracy of the machine learning system is advantageously not degraded in comparison to other attack models."

Therefore, claim 21 is not rejected under 101.

Regarding claim 23,

Step 2A Prong One

(Claim 23 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 23 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein the first adversarial example is ascertained using certifiable robustness training. (This step for ascertaining the abstract idea using a specific training process is without improvement and therefore does not meaningfully integrate the abstract idea.)

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception.
The claim recites mental processes while the additional element of ascertaining an adversarial example using a specific training process is a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Regarding claim 24,

Step 2A Prong One

(Claim 24 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 24 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein the technical system dispenses a liquid via a valve, wherein each time series and each training time series characterizes a sequence of pressure values of the technical system, and the output signal and the desired training output signal each characterize an amount of liquid dispensed by the valve. (This step for obtaining and outputting data from a specific system is considered extra-solution activity. See MPEP § 2106.05(g))

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes while the additional element of collecting and outputting data is a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Regarding claim 25,

Step 2A Prong One

(Claim 25 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 25 also recites an abstract idea.)
Step 2A Prong Two

The method according to claim 16, wherein the technical system is a robot and each time series and each training time series characterizes accelerations or position data of the robot ascertained using a corresponding sensor, and the output signal or the desired training output signal characterizes a position and/or an acceleration and/or a center of gravity and/or a zero moment point of the robot. (This step for obtaining and outputting data from a specific system is considered extra-solution activity. See MPEP § 2106.05(g))

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes while the additional element of collecting and outputting data is a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Regarding claim 26,

Step 2A Prong One

(Claim 26 depends on claim 16, which has been determined to recite abstract ideas including mental processes. Therefore, claim 26 also recites an abstract idea.)

Step 2A Prong Two

The method according to claim 16, wherein the technical system is a production machine that produces at least one part, wherein the input signals of each of the time series each characterize a force and/or a torque of the production machine, and the output signal characterizes a classification as to whether or not the part was produced correctly. (This step for obtaining and outputting data from a specific system is considered extra-solution activity.
See MPEP § 2106.05(g))

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered individually and in combination, they do not add significantly more (also known as an inventive concept) to the exception. The claim recites mental processes while the additional element of collecting and outputting data is a well-understood, routine, and conventional activity, as recognized by the court decisions listed in MPEP § 2106.05(d).

Claims 27-29 are apparatus claims directly corresponding to claim 16, and are therefore rejected under 101 for the same reasoning.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
NOTE: The following rejections interpret noise and perturbation to have equivalent meaning based on page 16, lines 16-17 of the applicant's specification: "Within the meaning of the invention, adversarial perturbations can also be understood as noise."

Claim(s) 16-17 and 27-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nitin Sharma et al. (hereinafter Sharma) (US 20220094709 A1, 2022-03-24) in view of Sturlaugson Liessman E. (hereinafter Liessman) (US 20180346151 A1, 2018-12-06) further in view of Edan Habler et al. (hereinafter Habler) ("Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages", 2018).

Regarding claim 16, Sharma teaches:

A computer-implemented method for training a machine learning system, ([Abstract] In some embodiments, a computer system perturbs, using a set of adversarial attack methods, a set of training examples used to train a machine learning model.)

NOTE: Teaches a computer-implemented method for training a machine learning system.

a. ascertaining a first training ([Abstract] In some embodiments, a computer system perturbs, using a set of adversarial attack methods, a set of training examples used to train a machine learning model. In some embodiments, the computer system identifies, from among the perturbed set of training examples, a set of sparse perturbed training examples that are usable to train machine learning models to identify adversarial attacks, where the set of sparse perturbed training examples includes examples whose perturbations are below a perturbation threshold and whose classifications satisfy a classification difference threshold.)
NOTE: Teaches ascertaining a first training input (a set of training examples) and desired training output (classifications satisfy classification difference threshold) which corresponds to the first training data (set of training examples), the desired training output characterizing a desired classification result of the first training data (classifications whose classifications satisfy the classification difference threshold).

b. ascertaining a first adversarial example, wherein the first adversarial example is an overlap of the first training ([Abstract] In some embodiments, a computer system perturbs, using a set of adversarial attack methods, a set of training examples used to train a machine learning model. In some embodiments, the computer system identifies, from among the perturbed set of training examples…)

NOTE: Teaches ascertaining a first adversarial example (perturbs a set of training examples, which indicates at least a first adversarial example) wherein the first adversarial example is an overlap of the first training data (the first adversarial example is a perturbed version of the original training example, and is therefore an overlap) with the first adversarial perturbation (perturbs the original training example).

wherein a first noise value of the first adversarial perturbation is not greater than a specifiable threshold, ([Abstract] In some embodiments, the computer system identifies, from among the perturbed set of training examples, a set of sparse perturbed training examples that are usable to train machine learning models to identify adversarial attacks, where the set of sparse perturbed training examples includes examples whose perturbations are below a perturbation threshold)

NOTE: Teaches a first noise value of the first adversarial perturbation not greater than a specifiable threshold (the set of sparse perturbed training examples includes examples whose perturbations / noise are below a perturbation threshold).

c.
ascertaining a training output signal for the first adversarial example using the machine learning system; ([0031] Trained machine learning classifier 240, in the illustrated embodiment, generates classifications 202 for the perturbed examples 122 and classifications 204 for examples in the set 124 of training examples. Classifier 240 outputs classifications 202 and 204 to comparison module 130.)

NOTE: Teaches ascertaining a training output signal for the first adversarial example using the machine learning system (generates classifications for the perturbed examples of the set of training examples); and

d. adapting at least one parameter of the machine learning system according to a gradient of a loss value, the loss value characterizing a deviation of the desired training output signal from the ascertained training output signal. ([0061] In some embodiments, the retraining includes inputting the set of sparse perturbed training examples and the set of training examples into the classifier. In some embodiments, the retraining includes backpropagating the set of sparse perturbed training examples through the classifier to identify error associated with respective nodes of the classifier. In some embodiments, the retraining further includes updating, based on the identified error, one or more weights of the respective nodes of the classifier.)

NOTE: Teaches adapting at least one parameter (weights) of the machine learning system according to a gradient (backpropagation is gradient-based loss optimization) of a loss value (error), the loss value characterizing a deviation of the desired training output from the ascertained training output (error based on classifier outputs).
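Taken together, the mapped steps a.-d. form a standard adversarial-training loop with a data-derived cap on the perturbation's noise value. A toy sketch under illustrative assumptions (a linear model as the "machine learning system", an FGSM-style perturbation, and the standard deviation of first differences as the noise measure; none of these choices come from Sharma or the application):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all hypothetical): training time series X, desired
# training output signals y, and a linear model w as the system.
X = rng.normal(size=(32, 10))     # a. plurality of training time series
y = X.sum(axis=1)                 # a. desired training output signals
w = np.zeros(10)

# Specifiable threshold based on ascertained noise values of the
# training time series (here: std of first differences per series).
noise_values = np.std(np.diff(X, axis=1), axis=1)
threshold = noise_values.mean()

lr = 0.01
for _ in range(50):
    for x, target in zip(X, y):
        grad_x = 2.0 * (w @ x - target) * w   # loss gradient w.r.t. the input
        delta = 0.1 * np.sign(grad_x)         # FGSM-style adversarial perturbation
        norm = np.linalg.norm(delta)
        if norm > threshold:                  # b. cap the perturbation's noise value
            delta *= threshold / norm
        x_adv = x + delta                     # b. adversarial example = series + perturbation
        pred = w @ x_adv                      # c. training output signal for the example
        grad_w = 2.0 * (pred - target) * x_adv
        w -= lr * grad_w                      # d. adapt parameters along the loss gradient
```

Despite training only on perturbed inputs, the model still fits the clean data, which is the robustness property the specification attributes to noise-bounded attack models.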
Sharma fails to teach but Liessman teaches:

time series of input signals

the machine learning system being configured to ascertain an output signal based on a time series of input signals of a technical system, ([0081] The feature extraction module 62 may be configured to determine a statistic of sensor values and/or control input values during a time window, a difference of sensor values and/or control input values during a time window, a difference between sensor values and/or control input values measured at different locations and/or different points in time, and/or a statistic of derived sensor values and/or control input values (e.g., an average difference, a moving average etc.))

NOTE: Teaches the machine learning system being configured to ascertain an output signal (determine a statistic of sensor values) based on a time series of input signals (determine a difference of sensor values during a time window or at different points in time) of a technical system.

the output signal characterizing a classification and/or a regression result of at least one first operating state and/or at least one first operating variable of the technical system, the method comprising the following steps: ([0081] Such differences and/or statistics may be referred to as feature data and/or extracted feature data... Feature data generally is derived from sensor values and/or control input values that relate to the same sensed parameter (e.g., a pressure, a temperature, a speed, a voltage, and a current) and/or the same component 44.)
NOTE: Teaches the output signal (feature data corresponding to the aforementioned determined statistics and differences) characterizing at least one first operating variable of the technical system (the feature data is derived from operating variables of the technical system [pressure, temperature, etc.]).

OBVIOUSNESS TO COMBINE LIESSMAN WITH SHARMA:

Liessman and Sharma are analogous art to each other and to the present disclosure as they all pertain to data analysis using machine learning. Specifically, Sharma pertains to a method for training a machine learning model to handle adversarial attacks, while Liessman pertains to a machine learning method for determining performance of components in an aircraft using time-series sensor data. Additionally, Sharma further states: ([Abstract] The disclosed techniques may advantageously enable a machine learning model to correctly classify data associated with adversarial attacks.)

NOTE: This indicates that the methods disclosed by Sharma enable a machine learning system to correctly classify adversarial attacks from input data.

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement the methods of training a machine learning system to detect adversarial attacks taught by Sharma using the time-series data from the technical system disclosed by Liessman as the training time series, to enable the system to accurately determine adversarial attacks in the time series input signals from the technical system.

Sharma and Liessman fail to teach but Habler teaches:

wherein the specifiable threshold is based on ascertained noise values of the training time series ([pg. 161] Injected anomalies.
In order to evaluate the performance of the learned model, we injected three types of anomalies (in a segment of 70 sequential messages, from message 180 to message 250) into the flights included in the test sets: Random noise (RND) - anomalies are generated by adding random noise. We multiplied the original values of the message attributes of the ADS-B messages with a randomly generated floating number between zero and two.)

NOTE: The anomalies of the disclosure of Habler include random noise.

([pg. 162] In order to set the threshold value for an anomalous window, we performed a 5-fold cross-validation evaluation on the training dataset... We computed the anomaly scores for the training set (according to Eq. (2)) and defined the threshold as the value for which 95% of the anomaly scores are smaller than the value.)

[Image: excerpt from Habler]

NOTE: Teaches a specifiable threshold being based on ascertained noise (the threshold is determined based on the anomaly scores of the training set, where the anomalies are generated by adding random noise) from the time-series training data (the data from the training dataset is in sequential windows representing the data at different points in time, and is therefore time-series training data).

OBVIOUSNESS TO COMBINE HABLER WITH SHARMA AND LIESSMAN:

Habler is analogous art to Sharma, Liessman, and the present disclosure as they all pertain to analyzing data by utilizing machine learning. Specifically, Habler pertains to a machine learning based security solution for detecting anomalous messages, which are altered or spoofed messages that could be sent by an attacker or compromised system. Additionally, Habler states: ([pg. 162] We computed the anomaly scores for the training set (according to Eq. (2)) and defined the threshold as the value for which 95% of the anomaly scores are smaller than the value.
To assess the performance of the models, we examined the false positive rate (FPR), true positive rate (TPR), and the alarm delay of the models (measured as the number of messages from the beginning of the attack until a malicious window is detected)… From the results we can infer that the proposed model can efficiently predict an ongoing anomaly, while the alarm delay time depends on the attack's aggressiveness.)

NOTE: This details that the results of Habler's method (which utilizes a threshold based on noise of the training time series, as taught above) are capable of efficiently predicting anomalies (which could include adversarial attacks) in the training time series.

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to determine the threshold for the system of claim 16 based on the training time series using the method taught by Habler, to efficiently and effectively recognize anomalies such as adversarial examples having an exceedingly high perturbation magnitude.

Regarding claim 17, Sharma in view of Liessman and Habler teach:

The method according to claim 16, (Using the same reasoning from claim 16)

Sharma and Liessman fail to teach but Habler teaches:

wherein the specifiable threshold corresponds to an average noise value of the first training time series of the plurality of training time series. [Image: excerpt from Habler] ([pg. 162] In order to set the threshold value for an anomalous window, we performed a 5-fold cross-validation evaluation on the training dataset... We computed the anomaly scores for the training set (according to Eq. (2)) and defined the threshold as the value for which 95% of the anomaly scores are smaller than the value.)
NOTE: Teaches the specifiable threshold corresponding to an average noise value of the first training time series (the threshold is set to the typical or average noise value of the training time series, as shown in the above image and excerpt) of the plurality of training time series (there are multiple different training time series in the above image).

Claims 27 and 28 are apparatus claims directly corresponding to method claim 16, and are therefore rejected using the same reasoning.

Regarding claim 29, Sharma teaches:

A non-transitory machine-readable storage medium on which is stored a computer program for training a machine learning system, … the computer program, when executed by a processor, causing the processor to perform the following steps: ([0126] Various articles of manufacture that store instructions (and, optionally, data) executable by a computing system to implement techniques disclosed herein are also contemplated. The computing system may execute the instructions using one or more processing elements. The articles of manufacture include non-transitory computer-readable memory media. The contemplated non-transitory computer-readable memory media include portions of a memory subsystem of a computing device as well as storage media or memory media such as magnetic media (e.g., disk) or optical media (e.g., CD, DVD, and related technologies, etc.). The non-transitory computer-readable media may be either volatile or nonvolatile memory.)

NOTE: Teaches a non-transitory machine-readable storage medium on which is stored a computer program for training a machine learning system, the computer program, when executed by a processor, causing the processor to perform the methods of the disclosure of Sharma. The remaining limitations directly correspond to claim 16, and are therefore rejected using the same reasoning.

Claim(s) 18 is/are rejected under 35 U.S.C.
103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler ("Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages", 2018) further in view of Aoyama Kuniaki et al. (hereinafter Kuniaki) (JP 2011090382 A, 2011-05-06).

Regarding claim 18, Sharma in view of Liessman and Habler teaches:

The method according to claim 16, (Using the same reasoning as in claim 16)

Sharma, Liessman, and Habler fail to teach but Kuniaki teaches:

wherein a noise value of each training time series or adversarial perturbation or adversarial example is ascertained according to a Mahalanobis distance. ([pg. 5] In step ST1, a plurality of pieces of monitoring target data are acquired from the monitoring target 10 when the monitoring target 10 is in operation. For example, in this embodiment, the monitoring device 2 acquires these monitoring target data in real time at a predetermined measurement interval, and stores them in the storage unit 24 together with information on the measurement time. Note that the measurement interval of the monitoring target data can be arbitrarily set.)

NOTE: Teaches that the monitoring targets are time series (a set of data measured at time intervals).

([pg. 8] Further, in such a configuration, when the Mahalanobis distance D changes gradually due to aging degradation or environmental change of the monitoring target 10 (for example, noise such as a change in atmospheric temperature), the environmental change is determined as an abnormality of the monitoring target 10.)

NOTE: Teaches a noise value ascertained according to Mahalanobis distance (the Mahalanobis distance D changes with respect to the noise in the data, therefore the Mahalanobis distance itself is considered a noise value) of each time series (the noise value [Mahalanobis distance D] changes with respect to each of the aforementioned time series [monitoring targets]).
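For concreteness, the Mahalanobis-distance noise value Kuniaki is cited for can be sketched as follows; the optional `k` parameter mirrors the truncated pseudo-inverse covariance of claims 19-20 (a hypothetical reading, not Kuniaki's or the applicant's exact formulation):

```python
import numpy as np

def mahalanobis_noise(sample, reference, k=None):
    """Mahalanobis distance of `sample` to a reference data group.

    With k set, the inverse covariance is replaced by a pseudo-inverse
    built from the k greatest eigenvalues and their eigenvectors (a
    hypothetical reading of claims 19-20, not Kuniaki's exact method).
    """
    mu = reference.mean(axis=0)
    cov = np.cov(reference, rowvar=False)
    if k is None:
        cov_pinv = np.linalg.pinv(cov)
    else:
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        idx = np.argsort(eigvals)[::-1][:k]      # k greatest eigenvalues
        cov_pinv = eigvecs[:, idx] @ np.diag(1.0 / eigvals[idx]) @ eigvecs[:, idx].T
    d = np.asarray(sample) - mu
    return float(np.sqrt(d @ cov_pinv @ d))
```

A sample inside the reference group scores near zero, and the score grows as the sample deviates from the group, which is the abnormality test Kuniaki describes against the threshold k.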
OBVIOUSNESS TO COMBINE KUNIAKI WITH SHARMA, LIESSMAN, HABLER: Kuniaki is analogous art to Sharma, Liessman, and Habler as it pertains to methods for data analysis. Specifically, Kuniaki pertains to detecting anomalous data in time-series signals using the Mahalanobis distance. Additionally, Kuniaki states: ([pg. 4] Here, when an abnormality occurs in the operating state of the monitoring target 10, the monitoring target data at that time appears at a position outside the reference data group, and the Mahalanobis distance D increases. Therefore, by calculating the Mahalanobis distance D between the monitoring target data and the reference data group and comparing the Mahalanobis distance D with a predetermined threshold k, it can be determined whether or not the operating state of the monitoring target 10 is abnormal.) This excerpt explains that the Mahalanobis distance increases when data at a given time deviates outside of its respective reference data group, and that the data is classified as abnormal when the Mahalanobis distance exceeds a threshold k. This would be useful in the application of the system of claim 16, as the system needs a mechanism to determine whether a given adversarial example deviates far enough from the noise values of the training time series to exceed the perturbation threshold of claim 16. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the noise value derived from the Mahalanobis distance taught by Kuniaki in the system of claim 16 to determine whether an ascertained adversarial example exceeds the specifiable threshold. Claim(s) 19-20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Kuniaki (JP 2011090382 A, 2011-05-06) further in view of Almog Lahav et al. (Hereinafter Lahav) (“Mahalanobis Distance Informed by Clustering”, 2017). Regarding claim 19, Sharma in view of Liessman, Habler, and Kuniaki teaches: The method according to claim 18; noise value (using the same reasoning from the claim 18 rejection); adversarial perturbation, adversarial example (using the same reasoning from the claim 16 rejection). Sharma, Liessman, Habler, and Kuniaki fail to teach, but Lahav teaches: wherein the [claimed formula image], wherein s is the ([pg. 6] [formula image for the distance d]) NOTE: Teaches the Mahalanobis distance between columns (samples) being ascertained according to the pictured formula d. By taking the square root of both sides of the pictured formula d, where (c1 - c2) can be the adversarial perturbation s of the system of claim 16 (for example, if c1 is an adversarial example and c2 is the overlapped unperturbed example, (c1 - c2) would represent the adversarial perturbation of the adversarial example c1), and the pictured matrix is the pseudo-inverse covariance matrix, the pictured formula d is equivalent to the claimed formula (see the pictured equivalence derivation). [equivalence derivation image] pseudo-inverse covariance matrix characterizing a specifiable number k of greatest eigenvalues and corresponding eigenvectors of at least a subset of the plurality of ([pg. 7] [excerpt image]) NOTE: Teaches Ũk = Uk characterizing the k greatest eigenvalues (λ) and corresponding eigenvectors (u) of at least a subset of the data, which can be the training time series taught in claim 16 (reasoning for why this combination would be obvious is provided later). ([pg. 7] [excerpt image]) NOTE: Teaches the pseudo-inverse covariance matrix being ascertained using Ũk; therefore, the pseudo-inverse covariance matrix characterizes a specifiable number k of greatest eigenvalues and corresponding eigenvectors of at least a subset of the data, which can be the training time series taught in claim 16 (reasoning for why this combination would be obvious is provided later). OBVIOUSNESS TO COMBINE LAHAV, SHARMA, LIESSMAN, HABLER, KUNIAKI: Lahav is analogous art to Sharma, Liessman, Habler, and Kuniaki, as they all pertain to data processing. Specifically, Lahav pertains to a method of comparing the distance between data points using the Mahalanobis distance, clustering, and covariance. Kuniaki indicates that the Mahalanobis distance is used to calculate the noise value of claim 18, which is the same distance formula disclosed by Lahav. Using the Mahalanobis distance formula with the pseudo-inverse covariance matrix taught by Lahav to compute the noise value of claim 18 would then be a simple substitution of very similar techniques. Additionally, Lahav states: ([pg. 3] [excerpt image]) NOTE: Lahav explains that the pseudo-inverse of the covariance matrix is used when the data is not full rank (data being not full rank means that some features of the data are redundant), which can make the covariance matrix singular or not invertible.
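For illustration, the rank-k pseudo-inverse construction attributed to Lahav can be sketched in numpy. This is a hedged sketch under the assumption that, consistent with the variables named in the claim-20 discussion (λi, vi, and k), the pseudo-inverse is the standard truncated eigendecomposition, i.e., the sum over the k greatest eigenvalues of (1/λi)·vi·viᵀ; the function names are illustrative and not taken from the reference:

```python
import numpy as np

def pseudo_inverse_covariance(data, k):
    """Rank-k pseudo-inverse of the covariance matrix, built from the
    k greatest eigenvalues and their corresponding eigenvectors."""
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
    idx = np.argsort(eigvals)[::-1][:k]         # indices of the k largest
    return sum((1.0 / eigvals[i]) * np.outer(eigvecs[:, i], eigvecs[:, i])
               for i in idx)

def noise_value(s, data, k):
    """Mahalanobis-style noise value sqrt(s^T Sigma+ s) of a perturbation s."""
    cov_pinv = pseudo_inverse_covariance(data, k)
    return float(np.sqrt(s @ cov_pinv @ s))
```

When the data is full rank and k equals the feature dimension, this truncated pseudo-inverse coincides with the ordinary inverse of the covariance matrix; choosing k smaller than the rank is what makes the construction usable for rank-deficient data.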
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the pseudo-inverse covariance matrix in the Mahalanobis distance formula (as taught by Lahav) to calculate the noise value of claim 18, to allow the noise value to be ascertained for data that is not full rank. Regarding claim 20, Sharma in view of Liessman, Habler, Kuniaki, and Lahav teaches: The method according to claim 19 (using the same reasoning from the claim 19 rejection); plurality of training time series (using the same reasoning from the claim 16 rejection). Sharma, Liessman, Habler, and Kuniaki fail to teach, but Lahav teaches: wherein the pseudo-inverse covariance matrix is ascertained by the following steps: e. ascertaining a covariance matrix of the at least subset of the ([pg. 3-4] [excerpt and covariance matrix images]) NOTE: The covariance matrix is ascertained using at least a subset of the data (the data being the random vector c). f. ascertaining a predefined plurality of greatest eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues; ([pg. 7] [excerpt image]) NOTE: Teaches ascertaining a predefined plurality of greatest eigenvalues (the k largest eigenvalues λ) of the covariance matrix and the eigenvectors corresponding to the eigenvalues (the k eigenvectors u). g. ascertaining the pseudo-inverse covariance matrix according to the formula [claimed formula image] ([pg. 6] [formula image]) NOTE: The pseudo-inverse covariance matrix is ascertained according to the pictured formula. ([pg. 7] [excerpt images]) NOTE: This teaches the pseudo-inverse formula given by Lahav being equivalent to the formula provided in the claims. See reasoning below: [equivalence derivation image] wherein λi is the i-th eigenvalue of the plurality of greatest eigenvalues, vi is the eigenvector corresponding to the eigenvalue, and k is the specifiable number of greatest eigenvalues. Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Angelo Sotgiu et al. (Hereinafter Angelo) (“Deep neural rejection against adversarial examples”, 2020-04-07). Regarding claim 21, Sharma in view of Liessman and Habler teaches: The method according to claim 16 (using the same reasoning as the claim 16 rejection). Sharma, Liessman, and Habler fail to teach, but Angelo teaches: wherein the first adversarial perturbation is ascertained according to the following steps: NOTE: Ascertaining an adversarial example includes ascertaining an adversarial perturbation coupled with the original sample. From this, when ascertaining an adversarial example, one is also ascertaining an adversarial perturbation. Therefore, ascertaining an adversarial example can be used interchangeably with ascertaining an adversarial perturbation. h. providing a second adversarial perturbation; ([pg. 4] [algorithm image]) NOTE: x within the loop is provided as the second adversarial perturbation. i.
ascertaining a third adversarial perturbation, wherein with respect to the first training time series, the third adversarial perturbation is stronger than the second adversarial perturbation; ([pg. 4] [algorithm image]) NOTE: x' within the loop is considered the ascertained third perturbation. Each iteration of the ascertained third perturbation (x' within the loop) results in a more adverse model result than the second adversarial perturbation (x within the loop), therefore making the ascertained third perturbation stronger than the second perturbation. j. providing the third adversarial perturbation as the first adversarial perturbation when a distance of the third adversarial perturbation from the second adversarial perturbation is less than or equal to a specifiable threshold; ([pg. 4] [algorithm image]) NOTE: The third perturbation (x' within the loop) is provided as the first perturbation (the returned x') when the distance of the third adversarial perturbation from the second adversarial perturbation (the distance between Ω(x') and Ω(x)) is less than or equal to a specifiable threshold (t). k. otherwise, when a noise value of the third adversarial perturbation is less than or equal to an expected noise value, performing step i., wherein, in the performance of step i., the third adversarial perturbation is used as the second adversarial perturbation; ([pg. 4] [algorithm image]) NOTE: Otherwise, when a noise value of the third adversarial perturbation is less than or equal to an expected noise value, step i. is performed (the noise value [perturbation] of the third adversarial perturbation x' is constrained by epsilon and is therefore less than or equal to an expected noise value [epsilon indicating the allowed perturbation, which is considered an expected noise value], and after each iteration where the 'until' condition is not met, the loop restarts, thereby performing step i. again), wherein, in the performance of step i., the third adversarial perturbation is used as the second adversarial perturbation (in each iteration, the previous third adversarial perturbation x' is used as the second adversarial perturbation x). l. otherwise, ascertaining a projected perturbation and performing step j., wherein, in the performance of step j., the projected perturbation is used as the third adversarial perturbation, and wherein the projected perturbation is ascertained by an optimization such that a distance of the projected perturbation from the second adversarial perturbation is as small as possible and the noise value of the projected perturbation is equal to the expected noise value. ([pg. 4] [algorithm and projection-operator images]) NOTE: Teaches otherwise ascertaining a projected perturbation (when the candidate within the projection operator is greater than epsilon [epsilon being the aforementioned expected noise value], a projected perturbation is ascertained using the projection operator) and performing step j., wherein, in the performance of step j., the projected perturbation is used as the third adversarial perturbation (the projected perturbation indicated by the projection operator is used as the third perturbation, x', and the loop restarts, performing step j. again), and wherein the projected perturbation is ascertained by an optimization (Ω(x) within the projection operation indicates an optimization) such that a distance of the projected perturbation from the second adversarial perturbation is as small as possible (the projection operator is bounded by a constraint which keeps the projected perturbation as close as possible to the second adversarial perturbation x [the distance of the perturbation from the second adversarial perturbation x must be less than or equal to epsilon]) and the noise value of the projected perturbation is equal to the expected noise value (when the projected perturbation exceeds the expected noise value epsilon, the perturbation is projected back onto the expected noise value defined by the bounds of epsilon). OBVIOUSNESS TO COMBINE ANGELO WITH SHARMA, LIESSMAN, HABLER: Angelo is analogous art to Sharma, Liessman, and Habler, as well as the present invention, as they all pertain to machine learning. Specifically, Angelo pertains to making machine learning algorithms more robust to adversarial attacks. Additionally, Angelo states: ([pg. 3-4] [excerpt images]) NOTE: This excerpt indicates that an effective way to evaluate adversarial robustness is to create an optimized adversarial example (as shown by equation 1 and algorithm 1) to simulate an attack. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to create an optimized adversarial perturbation according to the process taught by Angelo, to evaluate the adversarial robustness of the machine learning system of claim 16. Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Angelo (“Deep neural rejection against adversarial examples”, 2020-04-07) further in view of N. Gopalan Nair (Hereinafter Nair) (“Gradient Eigenspace Projections for Adaptive Filtering”, 1995-08-16). Regarding claim 22, Sharma in view of Liessman and Habler teaches: First training time series; desired training output signal (using the same reasoning as the claim 16 rejection). Sharma in view of Liessman, Habler, and Angelo teaches: The method according to claim 21 (using the same reasoning as the claim 21 rejection). Angelo teaches: wherein, in step i., the third adversarial perturbation is ascertained using a gradient ascent ([pg. 4] [attack-objective and algorithm images]) NOTE: The paper defines the attack objective in a minimization form, but the same objective can be rewritten as a maximization of its negative.
Therefore, performing projected gradient descent on Ω can equivalently be considered gradient ascent on negative Ω. See further reasoning below: [derivation image] based on an output of the machine learning system with respect to the ([pg. 4] [attack-objective image, annotation 'A']) NOTE: The aforementioned gradient ascent is based on an output of a machine learning system with respect to a first source sample overlapped with the second adversarial perturbation (the aforementioned second adversarial example / perturbation is a source example overlapped with a perturbation, as detailed in 'A') and with respect to a desired output signal (minimizing the output of the true class while maximizing the output of the competing class). It would be obvious for the first source sample to be the first training time series of claim 16, and for the desired output signal to be the desired training output signal of claim 16, as these are both simple substitutions of analogous data. Sharma, Liessman, Habler, and Angelo fail to teach, but Nair teaches: wherein the gradient for the gradient ascent is adapted according to the eigenvalues and eigenvectors. ([pg. 1] [excerpt images]) NOTE: Nair teaches a method for gradient projections according to subspaces defined by eigenvalues and eigenvectors (projecting the gradient vector on the high subspace, which is a matrix of the highest eigenvectors defined by their corresponding eigenvalues). This therefore teaches adapting a gradient according to the eigenvalues and eigenvectors. It would be obvious to adapt the gradient for the aforementioned gradient ascent based on the process disclosed by Nair, which is further explained below.
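For illustration, the ascend-and-project loop mapped onto steps h. through l. above (take a gradient step to obtain a stronger perturbation, then project back whenever the perturbation budget epsilon is exceeded) can be sketched generically. This is an illustrative PGD-style sketch, not Angelo's exact Algorithm 1; the loss, step size, stopping rule, and L2 budget are assumptions:

```python
import numpy as np

def projected_gradient_ascent(x0, grad_fn, eps, step, n_iter=50):
    """Generic PGD-style loop: ascend a loss via its gradient, and project
    the accumulated perturbation back onto the L2 ball of radius eps."""
    x = x0.copy()
    for _ in range(n_iter):
        x_next = x + step * grad_fn(x)        # stronger perturbation (step i.)
        delta = x_next - x0                   # current perturbation
        norm = np.linalg.norm(delta)
        if norm > eps:                        # noise value exceeds the budget:
            x_next = x0 + delta * (eps / norm)   # project onto the eps-sphere
        x = x_next
    return x
```

With an outward-pointing gradient, the perturbation grows until it hits the budget and is then held on the epsilon-sphere by the projection, mirroring the "project back onto the expected noise value" behavior described for step l.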
OBVIOUSNESS TO COMBINE NAIR WITH SHARMA, LIESSMAN, HABLER, ANGELO: Nair is analogous to Sharma, Liessman, Habler, Angelo, and the present disclosure as they all pertain to data processing. Specifically, Nair pertains to a method for projecting gradients according to subspaces defined by eigenvalues and eigenvectors. Additionally, Nair states: ([excerpt image]) NOTE: This excerpt details that the disclosed method for gradient projection onto eigen-subspaces improves convergence properties for gradient algorithms when data is highly correlated. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for the gradient ascent taught by Angelo to be adapted according to eigenvalues and eigenvectors using the method disclosed by Nair, to improve the convergence properties of the gradient algorithm when the data is highly correlated. Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Bai Li et al. (Hereinafter Li) (“Certified Adversarial Robustness with Additive Noise”, 2019-11-10). Regarding claim 23, Sharma in view of Liessman and Habler teaches: The method according to claim 16 (using the same reasoning as the claim 16 rejection). Sharma, Liessman, and Habler fail to teach, but Li teaches: wherein the first adversarial example is ascertained using certifiable robustness training. ([Abstract] Defensive methods that provide theoretical robustness guarantees have been studied intensively, yet most fail to obtain non-trivial robustness when a large-scale model and data are present.
To address these limitations, we introduce a framework that is scalable and provides certified bounds on the norm of the input manipulation for constructing adversarial examples. We establish a connection between robustness against adversarial perturbation and additive random noise, and propose a training strategy that can significantly improve the certified bounds.) NOTE: Teaches ascertaining an adversarial example (provides certifiable bounds on the norm for constructing adversarial examples) using certifiable robustness training (Li teaches a framework providing certified bounds on input manipulation for constructing adversarial examples, and further teaches a training strategy that improves such certified bounds. Thus, Li teaches ascertaining an adversarial example within a certifiable robustness training framework). OBVIOUSNESS TO COMBINE LI WITH SHARMA, LIESSMAN, AND HABLER: Li is analogous art to Sharma, Liessman, Habler, and the present disclosure as they all pertain to data processing. Specifically, Li pertains to a framework providing certified bounds on the norm of the input manipulation for constructing adversarial examples. Additionally, Li states: ([Abstract] The existence of adversarial data examples has drawn significant attention in the deep-learning community; such data are seemingly minimally perturbed relative to the original data, but lead to very different outputs from a deep-learning algorithm. Although a significant body of work on developing defensive models has been considered, most such models are heuristic and are often vulnerable to adaptive attacks. Defensive methods that provide theoretical robustness guarantees have been studied intensively, yet most fail to obtain non-trivial robustness when a large-scale model and data are present. To address these limitations, we introduce a framework that is scalable and provides certified bounds on the norm of the input manipulation for constructing adversarial examples.
We establish a connection between robustness against adversarial perturbation and additive random noise, and propose a training strategy that can significantly improve the certified bounds. Our evaluation on MNIST, CIFAR-10 and ImageNet suggests that the proposed method is scalable to complicated models and large data sets, while providing competitive robustness to state-of-the-art provable defense methods.) NOTE: This excerpt explains that the method disclosed by Li gives stronger and more practically provable robustness guarantees, while being scalable to complicated models and large datasets. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to ascertain the first adversarial example of the system of claim 16 using the certifiable robustness training disclosed by Li, to give stronger and more practically provable robustness guarantees for the system. Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Patel Shwetak N et al. (Hereinafter Patel) (CN 102460104 B, 2015-05-20). Regarding claim 24, Sharma in view of Liessman and Habler teaches: The method according to claim 16; training time series; desired training output signal (using the same reasoning from the claim 16 rejection). Sharma, Liessman, and Habler fail to teach, but Patel teaches: wherein the technical system dispenses a liquid via a valve, ([pg.
10] For each residential construction, first base line quiescent hydraulic pressure is measured, and then suitable pressure transducer (that is: scope is at the pressure transducer of 0psi to 50psi or 0psi to 100psi) is arranged on operational hose adapter, kitchen sink (utility sink) water swivel, or on the water discharging valve of water heater.) NOTE: Patel teaches the technical system dispensing liquid via a valve (the water discharging valve). wherein each time series ([pg. 10] These pressure characteristics use a kind of log recording instrument of drawing to record, and this also provides the Real-time Feedback of pressure data by a kind of time series line chart of rolling.) NOTE: Teaches each time series (the pressure data is represented by a time-series line chart) characterizing a sequence of pressure values of the technical system (teaches a time series [which is considered a sequence] of pressure data of the system). It would be obvious for each training time series of the system of claim 16 to characterize a sequence of pressure values of the technical system disclosed by Patel, as further explained later. and the output signal ([pg. 4] This illustrative methods comprises the following steps, and namely monitors the fluid pressure at first place in distribution system, and in response to this, produces the output signal that illustrates pressure in distribution system.) NOTE: The output signal characterizes an amount of liquid dispensed by the valve (an output signal indicating the fluid pressure of the aforementioned water discharging valve would indicate the rate or amount of fluid dispensed by said valve). It would also be obvious for the desired training output signal of the system of claim 16 to characterize an amount of liquid dispensed by the valve, as further explained later.
OBVIOUSNESS TO COMBINE PATEL WITH SHARMA, LIESSMAN, HABLER: Patel is analogous art to Sharma, Liessman, Habler, and the present disclosure as they all pertain to data processing. Specifically, Patel pertains to monitoring and processing sensor values in a liquid distribution system. Claim 16 already discloses ascertaining output signals characterizing a first operating variable of a technical system based on time-series input signals [as taught in claim 16]. Additionally, in the system of claim 16, the training time-series and the time-series characterize the same type of data, so if the time series in the system disclosed by Patel characterizes a sequence of pressure values, it would be obvious for the training time series to characterize the same thing. Additionally, in the system of claim 16, the output signal and the desired output signal characterize the same type of data, so if the output signal of the disclosure of Patel characterizes an amount of liquid dispensed by the valve, it would be obvious for the desired output signal to represent the same thing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for the aforementioned technical system to be the liquid distribution system disclosed by Patel to monitor operating variables in a liquid distribution system. Claim(s) 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Taehyun Kim et al. (Hereinafter Kim) (“KR 20190103084 A”, 2019-09-04). 
Regarding claim 25, Sharma in view of Liessman and Habler teaches: The method according to claim 16; training time series (using the same reasoning from the claim 16 rejection). Sharma, Liessman, and Habler fail to teach, but Kim teaches: wherein the technical system is a robot ([Abstract] The intelligent electronic device of the present invention can be connected to an artificial intelligence module, a drone (unmanned aerial vehicle, UAV), a robot...) NOTE: Teaches the technical system of the disclosure being a robot. and each time series ([pg. 25] The position sensor may sense the current position of the intelligent electronic device 100 in real time. The position sensor may use a GPS sensor to detect the position information, and the sensors may be triggered through the position information and the time information based on the GPS sensor.) NOTE: Teaches each time series (the position sensor is recorded with time information, and is therefore a time series) characterizing position data of the robot using a corresponding sensor (the position sensor characterizes the position of the device, which can be a robot). and the output signal or the desired training output signal characterizes a position and/or an acceleration and/or a center of gravity and/or a zero moment point of the robot. ([pg. 26] The processor 180 may extract a feature value from each of the illuminance information, the sound information, and the position information. The feature value is determined to recognize the surrounding environment for the current location among at least one feature that can be extracted from the illumination information, the sound information, and the location information, and to specifically indicate whether the feature is a specific place.)
NOTE: The output signal (extracted feature values) characterizes a position of the robot (the extracted feature values can be position information of the electronic device, which can be a robot). OBVIOUSNESS TO COMBINE KIM WITH SHARMA, LIESSMAN, HABLER: Kim is analogous art to Sharma, Liessman, Habler, and the present disclosure as they all pertain to data processing. Specifically, Kim pertains to monitoring and processing sensor values in an intelligent electronic system such as a robot. Claim 16 already discloses ascertaining output signals characterizing a first operating variable of a technical system based on time-series input signals [as taught in claim 16]. Additionally, in the system of claim 16, the training time series and the time series characterize the same type of data, so if the time series in the system disclosed by Kim characterizes position data of the robot, it would be obvious for the training time series to characterize the same thing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for the aforementioned technical system to be the robot disclosed by Kim to monitor operating variables in a robot. Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20220094709 A1, 2022-03-24) in view of Liessman (US 20180346151 A1, 2018-12-06) further in view of Habler (“Using LSTM encoder-decoder algorithm for detecting anomalous ADS-B messages”, 2018) further in view of Imanari Hiroyuki et al. (Hereinafter Hiroyuki) (WO 2020194534 A1, 2020-10-01).
Regarding claim 26, Sharma in view of Liessman and Habler teaches: The method according to claim 16 (using the same reasoning from the claim 16 rejection). Sharma, Liessman, and Habler fail to teach, but Hiroyuki teaches: wherein the technical system is a production machine that produces at least one part, ([Abstract] This abnormality determination assistance device is provided with a to-be-analyzed data creation unit, a primary determination unit, and a secondary determination unit. The to-be-analyzed data creation unit acquires, from a data extraction device of a production facility, a time-series signal representing at least one of the status of the production facility and a product quality, and extracts to-be-analyzed data from the time-series signal.) NOTE: Teaches the technical system being a production machine (production facility) that produces at least one part (product quality indicates that the production facility produces at least one part). wherein the input signals of each time series each characterize a force and/or a torque of the production machine, ([pg. 5] FIG. 3 is a diagram illustrating an example of the processing flow of the analysis target data creation unit 3. In step S101, when the rolling of the rolled material to be analyzed is completed in the manufacturing facility 20, a time series signal including before and after rolling is acquired from the data collecting device 1. This time-series signal includes data representing the state of the manufacturing equipment 20 and sensor data representing the product quality.) NOTE: Teaches time-series input signals representing the state of the manufacturing equipment. ([pg. 5] In step S102, data such as rolling load, rolling torque, electric machine current, and speed of rotating equipment are being rolled (load) for each rolling facility (two rolling mills and seven rolling stands constituting the finishing rolling mill).) NOTE: The aforementioned states of the manufacturing equipment include torque (rolling torque). This therefore teaches the input signals of each time series each characterizing a force and/or a torque of the production machine. and the output signal characterizes a classification as to whether or not the part was produced correctly. ([pg. 5] In step S103, sensor data such as plate thickness and plate width indicating product quality are classified into measurement state data and non-measurement state data.) NOTE: Teaches that the output signal characterizes a classification as to whether or not the part was produced correctly (sensor data indicating product quality are classified). OBVIOUSNESS TO COMBINE HIROYUKI WITH SHARMA, LIESSMAN, HABLER: Hiroyuki is analogous art to Sharma, Liessman, Habler, and the present disclosure as they all pertain to data processing. Specifically, Hiroyuki pertains to abnormality detection using sensor values of a production machine. Claim 16 already discloses ascertaining output signals characterizing a first operating variable of a technical system based on time-series input signals [as taught in claim 16]. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, for the aforementioned technical system to be the production machine disclosed by Hiroyuki to monitor operating variables in a production machine. CONCLUSION Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew Alan Cady whose telephone number is (571) 272-7229. The examiner can normally be reached Monday - Friday, 7:30 am - 5:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW ALAN CADY/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Apr 10, 2023: Application Filed
Mar 12, 2026: Non-Final Rejection — §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
