Prosecution Insights
Last updated: April 19, 2026
Application No. 17/581,416

COMPUTER-IMPLEMENTED METHOD AND DEVICE FOR A MANIPULATION DETECTION FOR EXHAUST GAS TREATMENT SYSTEMS WITH THE AID OF ARTIFICIAL INTELLIGENCE METHODS

Non-Final OA: §101, §102, §103, §112
Filed: Jan 21, 2022
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)
Grant Probability: 38% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% (allowance lift in resolved cases with an interview vs. without)
Avg Prosecution: 4y 3m (typical timeline); 34 applications currently pending
Career History: 102 total applications across all art units
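The career allow rate above follows directly from the granted/resolved counts; a quick arithmetic check using the figures quoted on this page:

```python
# Career allow rate quoted above: 26 granted out of 68 resolved cases.
granted, resolved = 26, 68
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 38.2%, shown rounded as 38%
```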

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 68 resolved cases

Office Action

Rejections under §101, §102, §103, and §112
Detailed Action

This Office Action is in response to the remarks entered on 02/12/2026. Claims 17 and 19 have been canceled. Claims 16, 18 and 20-31 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. DE 10 2021 200 789.9, filed on 01/28/2021.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: ‘A device for detecting a manipulation of a technical device … the device being configured to: supply time characteristics … use a data-based manipulation detection model … detect an anomaly as a function … detect a manipulation as a function of the detected anomalies’ in claim 30.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim limitation “A device for detecting a manipulation of a technical device … the device being configured to: supply time characteristics … use a data-based manipulation detection model … detect an anomaly as a function … detect a manipulation as a function of the detected anomalies” in claim 30 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The device for detecting a manipulation of a technical device, and the device configured to: supply time characteristics, use a data-based manipulation detection model, detect an anomaly as a function, and detect a manipulation as a function of the detected anomalies, is disclosed at [Spec, page 12, line 24 – page 13, line 22]. However, the disclosure is devoid of any structure (e.g., algorithm, structure, or circuit) that performs the function in the claim, and no association between the structure and the function can be found in the specification. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C.
112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 16, 18, 20-28 and 30-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
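Several of the rejected claims (notably claim 18, analyzed below) recite a variational autoencoder whose latent feature space is developed with two linear layers imaging a mean value vector and a standard deviation vector, trained with a regularization term. As an editorial illustration only, the latent stage of such a model can be sketched in NumPy; the Gaussian KL-divergence regularizer is an assumption (the claim recites only "a regularization term"), and all dimensions and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the application does not fix these.
n_features, n_latent = 8, 3
h = rng.normal(size=n_features)  # encoder output for one time step

# Two linear feature-space layers imaging a mean vector and a
# (log) standard-deviation vector, as recited in claim 18.
W_mu = rng.normal(size=(n_latent, n_features))
W_logsig = rng.normal(size=(n_latent, n_features))
mu = W_mu @ h
log_sigma = W_logsig @ h

# Reparameterization: sample the latent code z ~ N(mu, sigma^2).
z = mu + np.exp(log_sigma) * rng.normal(size=n_latent)

# Assumed regularization term: KL(N(mu, sigma^2) || N(0, 1)),
# which is always >= 0 for a Gaussian latent distribution.
kl = 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1.0 - 2 * log_sigma)
print(z.shape, kl >= 0)
```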
Regarding claim 16, 2A Prong 1: detecting an anomaly as a function of a modeling error for each one of the output variables; (a mental process of evaluation. The broadest reasonable interpretation of the claim encompasses a person examining a set of data to check if there is any abnormal data, which can be performed in the human mind) detecting a manipulation as a function of the detected anomalies. (mental processes of evaluation – evaluating whether the data includes any anomaly does not require a computer component and can be performed with the aid of pen and paper)

2A Prong 2: A computer-implemented method for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the method comprising the following steps: (mere instructions to apply an exception of detecting a device operation using a computer MPEP 2106.05(f)) providing time characteristics of operating variables having one or more system variables and/or at least one correction variable for an intervention in the technical device (an insignificant extra-solution activity MPEP 2106.05(g) of transmitting data over a network), which correspond to time series of values of the operating variables for consecutive time steps in each case; (a field of use and technological environment MPEP 2106.05(h).
The limitation merely provides the definition of the time characteristics) using a data-based manipulation detection model in each current time step to ascertain one or more output variables which correspond to at least a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. 
(mere instructions to apply an exception using a generic computer component (autoencoder) MPEP 2106.05(f)) 2B: A computer-implemented method for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the method comprising the following steps: (mere instructions to apply an exception of detecting a device operation using a computer MPEP 2106.05(f)) providing time characteristics of operating variables having one or more system variables and/or at least one correction variable for an intervention in the technical device (indicated as an insignificant extra-solution activity MPEP 2106.05(g). Thus, the limitation is re-evaluated as well understood, routine, and conventional activity MPEP 2106.05(d) of transmitting data over a network), which correspond to time series of values of the operating variables for consecutive time steps in each case; (a field of use and technological environment MPEP 2106.05(h). The limitation merely provides the definition of the time characteristics) using a data-based manipulation detection model in each current time step to ascertain one or more output variables which correspond to at least a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). 
The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. (mere instructions to apply an exception using a generic computer component (autoencoder) MPEP 2106.05(f)) Regarding claim 18, 2A Prong 1: Incorporates the rejection of claim 16. 2A Prong 2: wherein the autoencoder is a variational autoencoder and has a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the variational autoencoder is trained using a regularization term, which induces development of the feature space layers for imaging the mean value vector and a standard deviation vector during the training. (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) 2B: wherein the autoencoder is a variational autoencoder and has a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the variational autoencoder is trained using a regularization term, which induces development of the feature space layers for imaging the mean value vector and a standard deviation vector during the training. (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). 
The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) Regarding claim 20, 2A Prong 1: and the modeling error is determined as a function of the modeled current values of the output variables and the current values of the operating variables corresponding to the output variables. (a mathematical concept, as the limitation is directed to error value calculation) 2A Prong 2: wherein the first ones of the input variables and the second ones of the input variables each include a portion of the operating variables that is identical, partially identical or that differs, and the output variables include a portion of the operating variables that is identical to, partially identical to or that differs from the first and/or second input variables, (a field of use and technological environment MPEP 2106.05(h)) 2B: wherein the first ones of the input variables and the second ones of the input variables each include a portion of the operating variables that is identical, partially identical or that differs, and the output variables include a portion of the operating variables that is identical to, partially identical to or that differs from the first and/or second input variables, (a field of use and technological environment MPEP 2106.05(h)) Regarding claim 21, 2A Prong 1: and the modeling error furthermore is determined as a function of the modeled current values of the mean value vector and the standard deviation vector. 
(a mathematical concept, as the limitation is directed to error value calculation) 2A Prong 2: wherein the variational autoencoder has a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, (mere instructions to apply an exception using a computer MPEP 2106.05(f)) 2B: wherein the variational autoencoder has a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, (mere instructions to apply an exception using a computer MPEP 2106.05(f)) Regarding claim 22, 2A Prong 1: The method as recited in claim 20, wherein the modeling error is ascertained using a predefined error function, which is based on a mean squared error or a Huber loss function or a root mean squared error between the current values of the operating variables and the corresponding output variables. (a mathematical concept of error calculation using a mean squared error or a Huber loss function or a root mean squared error) 2A Prong 2: This judicial exception is not integrated into a practical application. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding claim 23, 2A Prong 1: The method as recited in claim 20, wherein for multiple time intervals of an evaluation interval, a total error is determined for a number of consecutive time steps of each one of the output variables, from a plurality of modeling errors, by summing the modeling errors, and an anomaly for each of the time intervals is identified as a function of an exceeding of a predefined evaluation percentile for the respective output variable by the total error. (a mathematical concept of error calculation) 2A Prong 2: This judicial exception is not integrated into a practical application. 
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding claim 24, 2A Prong 1: The method as recited in claim 23, wherein a manipulation of the technical device is detected when a share of anomalies during the time intervals of the evaluation interval exceeds a predefined share threshold value. (a mental process of evaluation. The limitation encompasses a human determining whether a specific value exceeds a predefined threshold value, which does not require a computer component and can be performed in the human mind) 2A Prong 2: This judicial exception is not integrated into a practical application. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding claim 25, 2A Prong 1: The method as recited in claim 23, wherein the evaluation percentile value for each operating variable is determined in that, based on a characteristic of operating variables of a predefined validation dataset for a correct operation of the technical device for multiple time intervals of an evaluation interval for a number of consecutive time steps in each case, a total error is determined from multiple modeling errors for the respective multiple time intervals, by summing the modeling errors, and an error matrix is set up from the output variables and the assigned total errors, and a percentile value as the evaluation percentile value is determined for each output variable. (a mathematical concept of error calculation) 2A Prong 2: This judicial exception is not integrated into a practical application. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding claim 26, 2A Prong 1: Incorporates the rejection of claim 25. 2A Prong 2: wherein the percentile value is 99.9%. 
(a field of use and technological environment MPEP 2106.05(h)) 2B: wherein the percentile value is 99.9%. (a field of use and technological environment MPEP 2106.05(h))

Regarding claim 27, 2A Prong 1: Incorporates the rejection of claim 16. 2A Prong 2: wherein the technical device includes an exhaust gas treatment device, and an input vector as the correction variable includes a correction variable for a urea injection system. (a field of use and technological environment MPEP 2106.05(h)) 2B: wherein the technical device includes an exhaust gas treatment device, and an input vector as the correction variable includes a correction variable for a urea injection system. (a field of use and technological environment MPEP 2106.05(h))

Regarding claim 28, 2A Prong 1: Incorporates the rejection of claim 16. 2A Prong 2: wherein a detected manipulation is signaled, or the technical device is operated as a function of the detected manipulation. (a field of use and technological environment MPEP 2106.05(h)) 2B: wherein a detected manipulation is signaled, or the technical device is operated as a function of the detected manipulation. (a field of use and technological environment MPEP 2106.05(h))

Regarding claim 30, 2A Prong 1: detect an anomaly as a function of a modeling error for each one of the output variables; (a mental process of evaluation. The broadest reasonable interpretation of the claim encompasses a person examining a set of data to check if there is any abnormal data, which can be performed in the human mind) detect a manipulation as a function of the detected anomalies.
(mental processes of evaluation – evaluating whether the data includes any anomaly does not require a computer component and can be performed with the aid of pen and paper) 2A Prong 2: A device for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the technical device being an exhaust gas treatment device, the device being configured to: (a field of use and technological environment MPEP 2106.05(h)) supply time characteristics of operating variables having one or more system variables and/or having at least one correction variable for an intervention in the technical device (an insignificant extra-solution activity MPEP 2106.05(g) of transmitting data over a network) which correspond to time series of values of the operating variables for consecutive time steps; (a field of use and technological environment MPEP 2106.05(h). The limitation merely provides the definition of the time characteristics) use a data-based manipulation detection model in each current time step to ascertain one or more output variables that correspond at least to a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and of the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). 
The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. (mere instructions to apply an exception using a generic computer component (autoencoder) MPEP 2106.05(f)) 2B: A device for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the technical device being an exhaust gas treatment device, the device being configured to: (a field of use and technological environment MPEP 2106.05(h)) supply time characteristics of operating variables having one or more system variables and/or having at least one correction variable for an intervention in the technical device (indicated as an insignificant extra-solution activity MPEP 2106.05(g). Thus, the limitation is re-evaluated as well understood, routine, and conventional activity MPEP 2106.05(d) of transmitting data over a network) which correspond to time series of values of the operating variables for consecutive time steps; (a field of use and technological environment MPEP 2106.05(h). 
The limitation merely provides the definition of the time characteristics) use a data-based manipulation detection model in each current time step to ascertain one or more output variables that correspond at least to a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and of the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; (mere instructions to apply an exception of detecting a manipulation using a computer component MPEP 2106.05(f). The limitation broadly recites common machine learning model structure and training process of a generic machine learning model structure to apply an exception of detecting anomaly) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. (mere instructions to apply an exception using a generic computer component (autoencoder) MPEP 2106.05(f)) Regarding claim 31, Step 1: Claim 31 recites a non-transitory machine-readable memory medium on which are stored instructions for detecting a manipulation of a technical device. Therefore, it is directed to the statutory category of a machine. 2A Prong 1: Claim 31 is a non-transitory machine-readable memory medium claim having similar limitation to the method claim 16 above. 
Therefore, claim 31 is rejected under the same rationale as claim 16.

2A Prong 2: A non-transitory machine-readable memory medium on which are stored instructions for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the instructions, when executed by a computer, causing the computer to perform the following steps: (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))

2B: A non-transitory machine-readable memory medium on which are stored instructions for detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the instructions, when executed by a computer, causing the computer to perform the following steps: (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16, 20, and 23-31 are rejected under 35 U.S.C. 103 as being unpatentable over Niu et al. (Niu et al., “LSTM-Based VAE-GAN for Time-Series Anomaly Detection”, 2020, hereinafter ‘Niu’) in view of Schat et al. (US 20190331555 A1, hereinafter ‘Schat’) and further in view of Mohajerin et al. (Mohajerin et al., “Multistep Prediction of Dynamic Systems With Recurrent Neural Networks”, 2019, hereinafter ‘Mohajerin’).

Regarding claim 16, Niu teaches: A computer-implemented method for detecting a manipulation of a technical device, the method comprising the following steps: ([Niu, page 1, 1. Introduction, line 1-8] The method can be used to detect anomalies occurring in the production process of industrial equipment) providing time characteristics of operating variables having one or more system variables and/or at least one correction variable for an intervention in the technical device, which correspond to time series of values of the operating variables for consecutive time steps in each case; ([Niu, page 3, line 3-17] The datasets used in the experiment include the health status of machines (servers, routers, and switches), which are system variables. [Niu, page 4, line 1-8] The time series data is divided into sub-sequences by a sliding window in a certain step size denoting the sub-sequence. The sliding window in a certain time step size (input variables) provides the temporal dependence of the time series, which is interpreted as the time characteristics.
The sliding windows are input to the encoder) using a data-based manipulation detection model in each current time step to ascertain one or more output variables which correspond to at least a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; ([Niu, page 4, line 3-19] The time series data is divided into sub-sequences by a sliding window in a certain step size denoting the sub-sequence. Each time window corresponds to time steps. [Niu, page 4, Figure 2] and [Niu, page 6, Algorithm 1. Anomaly detection algorithm used the LSTM-based VAE-GAN] The x-Encoder-z-Generator-x’ network corresponds to the manipulation detection model including the variational autoencoder with the first recurrent neural network. [Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. The x-Discriminator network corresponds to the prediction model having a second recurrent neural network. The calculated reconstruction difference Re and calculated discrimination results Dis are combined and input to the purple square. The purple square that includes the average and threshold comparison process corresponds to the evaluation model.
The evaluation model is interpreted as a mathematical model (function, equation), as the broadest reasonable interpretation of ‘model’ encompasses a mathematical model) detecting an anomaly as a function of a modeling error for each one of the output variables; ([Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. [Niu, page 5, 2.3. Anomaly Score, line 1-10] The anomaly score is calculated using the reconstruction difference and discrimination results) detecting a manipulation as a function of the detected anomalies, ([Niu, page 5, 2.3. Anomaly Score, line 1-16] and [Niu, page 6, Algorithm 1] The anomaly score is calculated using the reconstruction difference and discrimination results using an if-else function. The anomaly score is compared against the predefined threshold to detect a manipulation) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are from a preceding time step are supplied to the prediction model. ([Niu, page 4, Figure 2; line 3-8] Each input sample to the encoder and the discriminator is divided into sub-sequences by a sliding window in a certain step size. The x1, x2, x3, and x4 in Figure 2 denote each time step. Both the autoencoder (encoder-generator) and the prediction model (discriminator) receive all of x1, x2, x3, and x4, which include current values of first ones of the input variables and the second ones of the input variables for a preceding time step) However, Niu does not specifically disclose: detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle; wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model.
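For illustration of the Niu-style pipeline mapped above, the sliding-window split and the combination of reconstruction difference and discrimination result into a thresholded anomaly score might be sketched as follows (the function names, window parameters, and weighting below are hypothetical and do not come from Niu or the claims):

```python
import numpy as np

def sliding_windows(series: np.ndarray, window: int, step: int) -> np.ndarray:
    """Divide a time series into sub-sequences with a sliding window of a
    given length and step size, as in the Niu mapping above."""
    return np.stack([series[i:i + window]
                     for i in range(0, len(series) - window + 1, step)])

def anomaly_flags(x: np.ndarray, x_recon: np.ndarray,
                  disc: np.ndarray, lam: float, thresh: float) -> np.ndarray:
    """Combine the reconstruction difference with the discrimination result
    into an anomaly score, then compare against a predefined threshold."""
    recon_err = np.abs(x - x_recon).mean(axis=1)      # per-window modeling error
    score = lam * recon_err + (1.0 - lam) * (1.0 - disc)
    return score > thresh
```

The sketch only mirrors the examiner's characterization that the reconstruction and discrimination results are combined and compared against a threshold; the actual weighting in Niu's Algorithm 1 may differ.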
Schat teaches: detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle. ([Schat, 0015] When a defeat system is detected (anomaly), the urea-based exhaust after-treatment system ([Schat, 0002] it is an exhaust gas treatment device) is activated. The activation command generated by the machine-learning-based anomaly detection system [Schat, 0031], the output disclosed in [Schat, 0019; Fig. 5] that alters the behavior of the ECU, is the correction variable) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu and Schat, to incorporate the exhaust gas treatment device of Schat into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the anomaly detection performance of the exhaust gas treatment device in an automobile, as the method of Niu avoids an optimization process at the anomaly detection stage so that anomalies can be detected more quickly and more accurately [Niu, page 10, 4. Discussion]. However, Niu in view of Schat does not specifically disclose: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. Mohajerin teaches: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. ([Mohajerin, page 3373, Fig. 1] and [Mohajerin, page 3373, right col, line 12-22] Each RNN receives different time-sequence inputs u_(k0+1), u_(k0+2), and u_(k0+T).
u_(k0+T) are the first ones of the input variables that are only from the current time step; u_(k0+1) and u_(k0+2) are the second ones of the input variables that are only from the preceding time steps. The RNN blocks represent the same network copied over time, which indicates that each RNN block represents a separate RNN network) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat and Mohajerin, to incorporate the method of processing preceding-time-step data and current-time-step data using different RNNs of Mohajerin into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the efficiency of the exhaust gas treatment device in an automobile, as the method of Mohajerin allows the method of Niu to reduce the overall computational burden while maintaining smoothness and accuracy of the resulting response [Mohajerin, page 3370, right col, line 7-14]. Regarding claim 20, Niu teaches: wherein the first ones of the input variables and the second ones of the input variables each include a portion of the operating variables that is identical, partially identical or that differs, and the output variables include a portion of the operating variables that is identical to, partially identical to or that differs from the first and/or second input variables, and the modeling error is determined as a function of the modeled current values of the output variables and the current values of the operating variables corresponding to the output variables. ([Niu, page 4, Figure 2; line 3-8] Each input sample to the encoder and the discriminator is divided into sub-sequences by a sliding window in a certain step size. The x1, x2, x3, and x4 in Figure 2 denote each time step.
Both the autoencoder (encoder-generator) and the prediction model (discriminator) receive all of x1, x2, x3, and x4, which include current values of first ones of the input variables and the second ones of the input variables for a preceding time step. The x1, x2, x3, and x4 can be identical to, partially identical to, or differ from each other. [Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. The reconstruction difference is calculated based on the difference between the input variables and the output variables) Regarding claim 23, Niu teaches: wherein for multiple time intervals of an evaluation interval, a total error is determined for a number of consecutive time steps of each one of the output variables, from a plurality of modeling errors, by summing the modeling errors, and an anomaly for each of the time intervals is identified as a function of an exceeding of a predefined evaluation percentile for the respective output variable by the total error. ([Niu, page 3, 2.2. LSTM-Based VAE-GAN, line 2-6], [Niu, page 5, 2.3. Anomaly Score, line 11-13] Anomaly scores are calculated many times for each point, and the average anomaly score for each point is calculated. To calculate the average anomaly score for each point, the anomaly score distributions for each point must be summed before dividing by the total number of anomaly scores for that point. [Niu, page 10, line 1-6; Figure 5] The model outputs the anomaly score of the time series and an optimal threshold. The detected anomalies are shown as the red parts in the graph. [Niu, page 3, line 18-21] All values in each time series are normalized to the range [0,1], i.e. the values are relative values that can be converted to percentiles) Regarding claim 24, Niu teaches: wherein a manipulation of the technical device is detected when a share of anomalies during the time intervals of the evaluation interval exceeds a predefined share threshold value.
([Niu, page 4, Figure 2; line 3-8] Each input sample to the encoder and the discriminator is divided into sub-sequences by a sliding window in a certain step size. The x1, x2, x3, and x4 in Figure 2 denote each time step (time interval). [Niu, page 10, line 1-6; Figure 5] The model outputs the anomaly score of the time series and an optimal threshold. The detected anomalies are shown as the red parts in the graph) Regarding claim 25, Niu teaches: wherein the evaluation percentile value for each operating variable is determined in that, based on a characteristic of operating variables of a predefined validation dataset for a correct operation of the technical device for multiple time intervals of an evaluation interval for a number of consecutive time steps in each case, a total error is determined from multiple modeling errors for the respective multiple time intervals, by summing the modeling errors, and an error matrix is set up from the output variables and the assigned total errors, and a percentile value as the evaluation percentile value is determined for each output variable. ([Niu, page 3, 2.2. LSTM-Based VAE-GAN, line 2-6], [Niu, page 5, 2.3. Anomaly Score, line 11-13] Anomaly scores are calculated many times for each point, and the average anomaly score for each point is calculated. To calculate the average anomaly score for each point, the anomaly score distributions for each point must be summed before dividing by the total number of anomaly scores for that point. [Niu, page 6, Algorithm 1] Training data X_train and testing data X_test (predefined validation dataset) are the input data. [Niu, page 10, line 1-6; Figure 5] The model outputs the anomaly score of the time series and an optimal threshold. The detected anomalies are shown as the red parts in the graph. [Niu, page 3, line 18-21] All values in each time series are normalized to the range [0,1], i.e.
the values are relative values that can be converted to percentiles) Regarding claim 26, Niu teaches: wherein the percentile value is 99.9%. (The 99.9% figure is merely an arbitrarily chosen threshold value. [Niu, page 6, Algorithm 1] The final anomaly score is compared against the pre-defined threshold to determine whether the dataset is anomalous. [Niu, page 10, line 1-6; Figure 5] The model outputs the anomaly score of the time series and an optimal threshold. The detected anomalies are shown as the red parts in the graph) Regarding claim 27, Niu in view of Schat teaches: the method as recited in claim 16, wherein the technical device includes an exhaust gas treatment device, and an input vector as the correction variable includes a correction variable for a urea injection system. ([Schat, 0015] When a defeat system is detected (anomaly), the urea-based exhaust after-treatment system ([Schat, 0002] it is an exhaust gas treatment device) is activated. The activation command generated by the machine-learning-based anomaly detection system [Schat, 0031], the output disclosed in [Schat, 0019; Fig. 5] that alters the behavior of the ECU, is the correction variable) Regarding claim 28, Niu in view of Schat teaches: wherein a detected manipulation is signaled, or the technical device is operated as a function of the detected manipulation. ([Schat, 0015] When a defeat system is detected (anomaly), the urea-based exhaust after-treatment system ([Schat, 0002] it is an exhaust gas treatment device) is activated. The activation command generated by the machine-learning-based anomaly detection system [Schat, 0031], the output disclosed in [Schat, 0019; Fig. 5] that alters the behavior of the ECU, is the correction variable) Regarding claim 29, Niu teaches: A method for training a data-based manipulation detection model as a function of characteristics of operating variables of a technical device, ([Niu, page 1, 1.
Introduction, line 1-8] The method can be used to detect anomalies occurring in the production process of industrial equipment) the operating variables including one or more system variables and/or at least one correction variable for an intervention in the technical device and corresponding to time series of values of the operating variables for consecutive time steps in each case, ([Niu, page 3, line 3-17] The datasets used in the experiment include the health status of machines (servers, routers, and switches), which are system variables. [Niu, page 4, line 1-8] The time series data is divided into sub-sequences by a sliding window in a certain step size, each denoting a sub-sequence. The sliding window in a certain time step size (input variables) provides the temporal dependence of the time series, which is interpreted as the time characteristics. The sliding windows are input to the encoder) the manipulation detection model including a variational autoencoder that has a first recurrent neural network, a prediction model that has a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the method comprising: ([Niu, page 4, line 3-19] The time series data is divided into sub-sequences by a sliding window in a certain step size denoting the sub-sequence. Each time window corresponds to time steps. [Niu, page 4, Figure 2] and [Niu, page 6, Algorithm 1. Anomaly detection algorithm used the LSTM-based VAE-GAN] The x-Encoder-z-Generator-x’ network corresponds to the manipulation detection model including the variational autoencoder with the first recurrent neural network. [Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. [Niu, page 4, Figure 2] and [Niu, page 6, Algorithm 1.
Anomaly detection algorithm used the LSTM-based VAE-GAN] The x-Discriminator network corresponds to the prediction model having a second recurrent neural network. The calculated reconstruction difference Re and calculated discrimination results Dis are combined and input to the purple square. The purple square that includes the average and threshold comparison process corresponds to the evaluation model. The evaluation model is interpreted as a mathematical model (function, equation), as the broadest reasonable interpretation of ‘model’ encompasses a mathematical model) training the manipulation detection model to model current values of output variables that correspond to one or more of the operating variables as a function of current values of the at least one portion of the operating variables, ([Niu, page 4, Figure 2; and line 1-15] and [Niu, page 6, Algorithm 1. Anomaly detection algorithm used the LSTM-based VAE-GAN] The x-Encoder-z-Generator-x’ network corresponds to the manipulation detection model including the variational autoencoder with the first recurrent neural network. [Niu, page 6, Algorithm 1] The encoder and generator are trained in each iteration by generating a random mini-batch X from training data X_train and updating parameters according to the gradient. [Niu, page 5, line 1-8] The variational autoencoder is trained to model (reconstruct) current values of operating variables. The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are from a preceding time step are supplied to the prediction model. ([Niu, page 4, Figure 2; line 3-8] Each input sample to the encoder and the discriminator is divided into sub-sequences by a sliding window in a certain step size.
The x1, x2, x3, and x4 in Figure 2 denote each time step. Both the autoencoder (encoder-generator) and the prediction model (discriminator) receive all of x1, x2, x3, and x4, which include current values of first ones of the input variables and the second ones of the input variables for a preceding time step) However, Niu does not specifically disclose: a function of characteristics of operating variables of a technical device that includes an exhaust gas treatment device in a motor vehicle; wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. Schat teaches: a function of characteristics of operating variables of a technical device that includes an exhaust gas treatment device in a motor vehicle ([Schat, 0015] When a defeat system is detected (anomaly), the urea-based exhaust after-treatment system ([Schat, 0002] it is an exhaust gas treatment device) is activated. The activation command generated by the machine-learning-based anomaly detection system [Schat, 0031], the output disclosed in [Schat, 0019; Fig. 5] that alters the behavior of the ECU, is the correction variable) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu and Schat, to incorporate the exhaust gas treatment device of Schat into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the anomaly detection performance of the exhaust gas treatment device in an automobile, as the method of Niu avoids an optimization process at the anomaly detection stage so that anomalies can be detected more quickly and more accurately [Niu, page 10, 4. Discussion].
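The input-routing limitation that the rejection treats as the remaining gap over Niu and Schat, namely that the current time step's values feed the variational autoencoder while only the preceding step's values feed the prediction model, can be sketched as follows (the function name and array layout below are hypothetical, for illustration only, and are not taken from the claims or the cited references):

```python
import numpy as np

def route_inputs(ops: np.ndarray, t: int):
    """Split the operating variables at current time step t: the current
    values go to the variational autoencoder, and only the preceding
    step's values go to the prediction model."""
    vae_input = ops[t]       # first ones of the input variables (current step)
    pred_input = ops[t - 1]  # second ones of the input variables (preceding step only)
    return vae_input, pred_input
```

This contrasts with the examiner's reading of Niu's Figure 2, where both the encoder-generator and the discriminator receive the entire windowed sequence.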
However, Niu in view of Schat does not specifically disclose: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. Mohajerin teaches: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. ([Mohajerin, page 3373, Fig. 1] and [Mohajerin, page 3373, right col, line 12-22] Each RNN receives different time-sequence inputs u_(k0+1), u_(k0+2), and u_(k0+T). u_(k0+T) are the first ones of the input variables that are only from the current time step; u_(k0+1) and u_(k0+2) are the second ones of the input variables that are only from the preceding time steps. The RNN blocks represent the same network copied over time, which indicates that each RNN block represents a separate RNN network) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat and Mohajerin, to incorporate the method of processing preceding-time-step data and current-time-step data using different RNNs of Mohajerin into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the efficiency of the exhaust gas treatment device in an automobile, as the method of Mohajerin allows the method of Niu to reduce the overall computational burden while maintaining smoothness and accuracy of the resulting response [Mohajerin, page 3370, right col, line 7-14]. Regarding claim 30, Niu teaches: A device for detecting a manipulation of a technical device ([Niu, page 1, 1.
Introduction, line 1-8] The method can be used to detect anomalies occurring in the production process of industrial equipment) supply time characteristics of operating variables having one or more system variables and/or having at least one correction variable for an intervention in the technical device which correspond to time series of values of the operating variables for consecutive time steps; ([Niu, page 3, line 3-17] The datasets used in the experiment include the health status of machines (servers, routers, and switches), which are system variables. [Niu, page 4, line 1-8] The time series data is divided into sub-sequences by a sliding window in a certain step size, each denoting a sub-sequence. The sliding window in a certain time step size (input variables) provides the temporal dependence of the time series, which is interpreted as the time characteristics. The sliding windows are input to the encoder) use a data-based manipulation detection model in each current time step to ascertain one or more output variables that correspond at least to a portion of the operating variables as a function of input variables that include at least a portion of the operating variables, the manipulation detection model including a variational autoencoder having a first recurrent neural network, a prediction model having a second recurrent neural network, and an evaluation model, outputs of the variational autoencoder and of the prediction model being combined with one another and then conveyed to an evaluation model for an ascertainment of the output variables, the manipulation detection model being trained to model current values of the output variables as a function of current values of the at least one portion of the operating variables; ([Niu, page 4, line 3-19] The time series data is divided into sub-sequences by a sliding window in a certain step size denoting the sub-sequence. Each time window corresponds to time steps. [Niu, page 4, Figure 2] and [Niu, page 6, Algorithm 1.
Anomaly detection algorithm used the LSTM-based VAE-GAN] The x-Encoder-z-Generator-x’ network corresponds to the manipulation detection model including the variational autoencoder with the first recurrent neural network. [Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. The x-Discriminator network corresponds to the prediction model having a second recurrent neural network. The calculated reconstruction difference Re and calculated discrimination results Dis are combined and input to the purple square. The purple square that includes the average and threshold comparison process corresponds to the evaluation model. The evaluation model is interpreted as a mathematical model (function, equation), as the broadest reasonable interpretation of ‘model’ encompasses a mathematical model) detect an anomaly as a function of a modeling error for each one of the output variables; ([Niu, page 5, line 1-8] The reconstruction error L_re corresponds to the modeling error for reconstructing (modeling) input variable x. [Niu, page 5, 2.3. Anomaly Score, line 1-10] The anomaly score is calculated using the reconstruction difference and discrimination results) detect a manipulation as a function of the detected anomalies, ([Niu, page 5, 2.3. Anomaly Score, line 1-16] and [Niu, page 6, Algorithm 1] The anomaly score is calculated using the reconstruction difference and discrimination results using an if-else function. The anomaly score is compared against the predefined threshold to detect a manipulation) wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are from a preceding time step are supplied to the prediction model.
([Niu, page 4, Figure 2; line 3-8] Each input sample to the encoder and the discriminator is divided into sub-sequences by a sliding window in a certain step size. The x1, x2, x3, and x4 in Figure 2 denote each time step. Both the autoencoder (encoder-generator) and the prediction model (discriminator) receive all of x1, x2, x3, and x4, which include current values of first ones of the input variables and the second ones of the input variables for a preceding time step) Niu does not specifically disclose: detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the technical device being an exhaust gas treatment device; wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. Schat teaches: detecting a manipulation of a technical device that includes an exhaust gas treatment device in a motor vehicle, the technical device being an exhaust gas treatment device. ([Schat, 0015] When a defeat system is detected (anomaly), the urea-based exhaust after-treatment system ([Schat, 0002] it is an exhaust gas treatment device) is activated. The activation command generated by the machine-learning-based anomaly detection system [Schat, 0031], the output disclosed in [Schat, 0019; Fig. 5] that alters the behavior of the ECU, is the correction variable) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu and Schat, to incorporate the exhaust gas treatment device of Schat into the anomaly detection method of Niu.
The suggestion and/or motivation to do so is to improve the anomaly detection performance of the exhaust gas treatment device in an automobile, as the method of Niu avoids an optimization process at the anomaly detection stage so that anomalies can be detected more quickly and more accurately [Niu, page 10, 4. Discussion]. However, Niu in view of Schat does not specifically disclose: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. Mohajerin teaches: wherein in each current time step, current values of first ones of the input variables are supplied to the variational autoencoder, and values of the second ones of the input variables that are only from a preceding time step are supplied to the prediction model. ([Mohajerin, page 3373, Fig. 1] and [Mohajerin, page 3373, right col, line 12-22] Each RNN receives different time-sequence inputs u_(k0+1), u_(k0+2), and u_(k0+T). u_(k0+T) are the first ones of the input variables that are only from the current time step; u_(k0+1) and u_(k0+2) are the second ones of the input variables that are only from the preceding time steps. The RNN blocks represent the same network copied over time, which indicates that each RNN block represents a separate RNN network) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat and Mohajerin, to incorporate the method of processing preceding-time-step data and current-time-step data using different RNNs of Mohajerin into the anomaly detection method of Niu.
The suggestion and/or motivation to do so is to improve the efficiency of the exhaust gas treatment device in an automobile, as the method of Mohajerin allows the method of Niu to reduce the overall computational burden while maintaining smoothness and accuracy of the resulting response [Mohajerin, page 3370, right col, line 7-14]. Claim 31 is a non-transitory machine-readable memory medium claim having similar limitations to the method claim 16 above. Therefore, claim 31 is rejected under the same rationale as claim 16. Claims 18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Niu in view of Schat, in view of Mohajerin, and further in view of Yao et al. (US 20210134002 A1, hereinafter ‘Yao’). Regarding claim 18, Niu teaches: wherein the autoencoder is a variational autoencoder and has a latent feature space ([Niu, page 4, line 3-11] The encoder of the VAE encodes the input vector to a vector in the latent space). Niu in view of Schat and further in view of Mohajerin does not specifically disclose: a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the variational autoencoder is trained using a regularization term, which induces development of the feature space layers for imaging the mean value vector and a standard deviation vector during the training. Yao teaches: a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the variational autoencoder is trained using a regularization term, which induces development of the feature space layers for imaging the mean value vector and a standard deviation vector during the training. ([Yao, 0073] During training, the detected object is first encoded to a feature vector.
Such a feature vector is then passed through two parallel FC layers (two linear feature space layers) to produce the mean μ_Zi^(Q) and standard deviation σ_Zi^(Q). Then, a differentiable reparameterization technique is applied to sample the latent variable. ‘Trained using a regularization term’ merely requires training a model using a loss term. [Yao, 0076] The loss (modeling error) is calculated based on KL divergence with boundary values. The calculation of the boundary value comprises calculation of latent space variables as disclosed in [Yao, 0073-0074] and equation 8) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat, Mohajerin and Yao, to combine the latent feature space having two feature space layers for a mean value vector and a standard deviation vector of Yao into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the accuracy of the anomaly detection method, as using multiple feature space layers and both a mean value vector and a standard deviation vector allows the machine learning model to generate a more accurate representation of the input data by capturing finer details. Regarding claim 21, Niu teaches: The method as recited in claim 20, wherein the variational autoencoder has a latent feature space ([Niu, page 4, line 3-11] The encoder of the VAE encodes the input vector to a vector in the latent space). Niu in view of Schat and further in view of Mohajerin does not specifically disclose: a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the modeling error furthermore is determined as a function of the modeled current values of the mean value vector and the standard deviation vector.
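The two-parallel-linear-layer latent head with reparameterized sampling and a KL regularization term, as mapped to Yao above, might be sketched as follows (illustrative only; the weights, shapes, and function names are hypothetical and do not reproduce Yao's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_heads(feat: np.ndarray, w_mu: np.ndarray, w_sigma: np.ndarray):
    """Two parallel linear (FC) layers map the encoder feature vector to a
    mean vector and a log-standard-deviation vector."""
    return feat @ w_mu, feat @ w_sigma

def reparameterize(mu: np.ndarray, log_sigma: np.ndarray) -> np.ndarray:
    """Differentiable reparameterization: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_regularizer(mu: np.ndarray, log_sigma: np.ndarray) -> float:
    """KL-divergence regularization term against a standard normal prior,
    the kind of loss term that shapes the mean/std layers during training."""
    log_var = 2.0 * log_sigma
    return float(-0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var)))
```

Note that this sketches the standard-normal-prior KL term of an ordinary VAE; Yao's loss as cited additionally involves boundary values, which are omitted here.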
Yao teaches: a latent feature space which is developed with two linear feature space layers for imaging a mean value vector and a standard deviation vector, and the modeling error furthermore is determined as a function of the modeled current values of the mean value vector and the standard deviation vector. ([Yao, 0073] During training, the detected object is first encoded to a feature vector. Such a feature vector is then passed through two parallel FC layers (two linear feature space layers) to produce the mean μ_Zi^(Q) and standard deviation σ_Zi^(Q). Then, a differentiable reparameterization technique is applied to sample the latent variable. [Yao, 0076] The loss (modeling error) is calculated based on KL divergence with boundary values. The calculation of the boundary value comprises the calculation of latent space variables, as disclosed in [Yao, 0073-0074] and equation 8.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat, Mohajerin and Yao, to combine the latent feature space having two feature space layers for a mean value vector and a standard deviation vector of Yao into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the accuracy of the anomaly detection method, as using multiple feature space layers and both a mean value vector and a standard deviation vector allows the machine learning model to generate a more accurate representation of the input data by capturing finer details.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Niu in view of Schat, in view of Mohajerin, and further in view of Munawar et al. (US 20170076224 A1, hereinafter 'Munawar').

Regarding claim 22, Niu teaches: The method as recited in claim 20, wherein the modeling error is ascertained using a predefined error function ([Niu, page 5, lines 1-14] The reconstruction errors are calculated using a predefined Kullback-Leibler divergence function).

Niu in view of Schat and further in view of Mohajerin does not specifically disclose: a predefined error function, which is based on a mean squared error or a Huber loss function or a root mean squared error between the current values of the operating variables and the corresponding output variables.

Munawar teaches: a predefined error function, which is based on a mean squared error or a Huber loss function or a root mean squared error between the current values of the operating variables and the corresponding output variables. ([Munawar, 0102] Similarities (error function) between the test image and the image reconstructed by the autoencoder using the trained parameters were calculated using mean squared error.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Niu, Schat, Mohajerin and Munawar, to combine Munawar's use of mean squared error to calculate the loss function of an autoencoder into the anomaly detection method of Niu. The suggestion and/or motivation to do so is to improve the accuracy of the machine learning model, as mean squared error is well suited to calculating differences between continuous (time-series) values.

Response to Arguments

Applicant's arguments filed 02/12/2026 have been fully considered but they are not persuasive.

Response to Arguments under 35 U.S.C. 101

Arguments: Applicant asserts that (a) the Office Action failed to apply to these statements the standard set forth in MPEP § 2106.04(d)(1) for judging whether a statement of technological improvement in the specification is "conclusory" [Remarks, pages 10-11]; (b) this standard cannot hold the statements of technological improvement in the present specification to be "conclusory," because doing so would be inconsistent with the recently issued precedential decision Ex parte Desjardins, Appeal No. 2024-000567 [Remarks, pages 11-12]; and (c) the examiner's analysis fails to comply with the MPEP section because it relies on mere conclusory statements without analyzing this element under Step 2B, citing Ex parte Mercer (nonprecedential) [Remarks, page 14].

Examiner's Response: Examiner respectfully disagrees. First, regarding (a), examiner respectfully reiterates the arguments in the previous rejection mailed on 08/13/2025. MPEP 2106.04(d)(1) and 2106.05(a) both state that if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine that the claim improves technology. MPEP 2106.05(a) further states that, during examination, the examiner should analyze the "improvements" consideration by evaluating the specification and the claims to ensure that a technical explanation of the asserted improvement is present in the specification, and that the claim reflects the asserted improvement.
Examiner performed the evaluation and concluded that the specification ("the variational autoencoder is used to obtain a greater generalizability of the input variable characteristics not imaged by training data in the configuration of the manipulation detection model … a better generalization capability of unseen data by the variational autoencoder" and [0063] "prediction model 30 is trained together with the autoencoder and therefore capable of making an output available that compensates/supplements the output of the autoencoder") does not reflect an improvement to a computer or to a technical field. Rather, the specification and the claim merely recite the conclusion that introducing the autoencoder results in an improvement in the generalizability of the input variables, without explaining the detailed architecture of the autoencoder that helps generalize the input variables.

Regarding (b), Desjardins and the instant application are distinguishable. The claims at issue in Desjardins explicitly linked the recited steps to an improvement in training by reciting "optimizing performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." There is no such nexus in the instant claims.

Regarding (c), as noted in the argument itself, Ex parte Mercer is nonprecedential and therefore is not binding on the Office.

Accordingly, the arguments directed to claim 16 are not persuasive. Similarly, the arguments directed to independent claims 29, 30 and 31 are not persuasive, and the arguments directed to dependent claims 18 and 20-28, which depend from claim 16, are not persuasive.

Response to Arguments under 35 U.S.C. 102 & 103

Arguments: Applicant asserts that Niu fails to teach the claimed structural arrangement of temporally separate, non-overlapping inputs, because the Discriminator and the Encoder receive the same inputs x1-x4, in contrast with the input values provided to the claimed prediction model. [Remarks, pages 15-16]

Examiner's Response: Applicant's arguments with respect to claims 16, 29, 30 and 31 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday through Friday, 7:30 AM to 4:30 PM ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

Jan 21, 2022
Application Filed
Mar 12, 2025
Non-Final Rejection — §101, §102, §103
Jul 21, 2025
Response Filed
Aug 11, 2025
Final Rejection — §101, §102, §103
Feb 12, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+46.2%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
