DETAILED ACTION
This nonfinal rejection is responsive to the amendment filed on January 19, 2026. Claims 1-11, 13, and 16 are pending. Claim 1 is independent, claims 14 and 15 are canceled, and claim 16 is added.
Claim rejections under 35 USC §101 are withdrawn in light of applicant’s amendment. See section Response to Arguments below.
Claim rejections under 35 USC §102 and 103 of claims 1-11 and 13 are withdrawn in light of applicant’s amendment. However, a new ground of rejection is made under 35 USC §103 for claims 1-11, 13, and 16. See section Claim Rejections – 35 USC §103 below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 19, 2026 has been entered.
Claim Objections
Claim 9 is objected to because of the following informalities: The claim recites "an output of each of base of the plurality of base networks" but should recite "an output of each base of the plurality of base networks." Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 11, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bauer et al. (DE102006054425), hereinafter Bauer, in view of Junginger et al. (US20200236005), hereinafter Junginger.
Regarding claim 1, Bauer teaches:
A method for determining an impermissible deviation of a system behavior of a technical device from a normal value range using an artificial neural network comprising: (Bauer, paragraph 0020: “A further development of the method and the device provides that deviations of the estimated value of the model parameter from a given initial value are determined using the artificial neural network.” And paragraph 0030: “Furthermore, a method for influencing the driving state of a motor vehicle is provided, in which the driving state is influenced depending on a deviation between an actual value of a first driving state variable and a reference value of the first driving state variable.”)
in a learning phase supplying the artificial neural network with (i) first input data that was also provided to the technical device, and (ii) first output data generated by the technical device based on the first input data; (Bauer, paragraph 0007: “For this purpose, the artificial neural network is adapted using a learning procedure so that the estimated value of the model parameter approaches the actual value of the parameter in repeated calculations.” And paragraphs 0013-0014: “Further development of the method and the device includes the fact that the learning method is a supervised learning method. Supervised learning has the advantage that the training of the artificial neural network is very targeted and realistic estimates for the model parameters are obtained very quickly.” – Paragraph 0051 additionally discusses the input signals of the neural network being driving condition variables either measured or input by the driver of the vehicle; thus the input data is provided to the technical device. The output signals are parameters of the vehicle model of the vehicle dynamics control system, which are generated by the technical device based on the first input data. Because the neural network uses a supervised learning method, during the learning procedure it was fed first input data and the corresponding first output data.)
in a prediction phase following the learning phase (i) feeding only second input data to the artificial neural network, and (ii) calculating output reference data using the artificial neural network based on the second input data; (Bauer, paragraph 0045: “The calculation of the reference value Yref of the controlled variable Y is carried out in the reference value calculation device 104 on the basis of a reference model of the vehicle 101 using quantities E that specify the driving state of the vehicle 101 desired by the driver.” – Since this occurs after the training phase, calculating the reference value Yref corresponds to the prediction phase, in which the second input data is fed into the artificial neural network to calculate the output reference data.)
generating second output data using the technical device based on the second input data; (Bauer, paragraph 0044: “The current actual value Yist of the controlled variable Y is either measured directly using a sensor of the vehicle 101 or derived from the measured values of one or more sensors… The control deviation ΔY represents the input variable of a control device 102, which calculates an output signal depending on the control deviation.” – The current actual value of Y would correspond to the second output data that is generated using the technical device.)
identifying the impermissible deviation when a difference between the second output data and the output reference data is outside the normal value range; and (Bauer, paragraph 0044: “The control deviation ΔY = Yref – Yist is calculated from the difference between the actual value Yist and a reference value Yref of the controlled variable Y…The control device 102 is usually activated when the control deviation ΔY and, if applicable, other variables exceed predefined control entry thresholds.” – The predefined thresholds would indicate the normal value range, thus the variables exceeding this would indicate an impermissible deviation between the second output data and the output reference data.)
taking a countermeasure in connection with the technical device when the impermissible deviation is identified, (Bauer, paragraph 0044: “The control device 102 is usually activated when the control deviation ΔY and, if applicable, other variables exceed predefined control entry thresholds. The output signals of the control device 102 correspond to a position request, according to which at least one actuator 103 is controlled, with which the driving behavior of the vehicle 101 can be influenced.” – Activating the control device and influencing the driving behavior of the vehicle would be countermeasures taken in connection with the technical device.)
Bauer does not explicitly teach:
wherein the countermeasure includes at least one of (i) deactivating partial functions of the technical device, and (ii) using an alternative technical device.
However, Junginger teaches:
wherein the countermeasure includes at least one of (i) deactivating partial functions of the technical device, (Junginger, paragraph 0058: “However, if this is the case (2500), it is decided that there is an anomaly, and error variable F is set to value “1,” which results in countermeasures being initiated (2600), for example, by motor vehicle 100 being switched to a safe mode. The method thus ends.” – Switching the vehicle to a safe mode indicates that some of the functions of the device have been deactivated which is in response to detecting an anomaly, i.e., an impermissible deviation.) and (ii) using an alternative technical device. (Junginger, paragraph 0045: “If error signal F already contains specific information about the cause of the anomaly, control variable A may initiate a specific countermeasure, such as cutting off one of communication nodes 110, 120, 130 from the data communications traffic over the CAN bus.” – Page 7, lines 3-8 of the specification discusses the technical device in the vehicle. The communication nodes are in the vehicle, see e.g. Fig. 1, and therefore cutting off one of the communication nodes would switch communication to another node and is therefore analogous to using an alternative technical device.)
Junginger is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Bauer, which already teaches a method for determining an impermissible deviation of a system behavior and then taking a countermeasure but does not explicitly teach that the countermeasure includes deactivating partial functions of the device or using an alternative device, to include the teachings of Junginger which does teach that the countermeasure includes deactivating partial functions of the device or using an alternative device in order to prevent external attacks. (Junginger, paragraphs 0007-0010)
Regarding claim 11, Bauer and Junginger teach the method of claim 1, as cited above.
Bauer further teaches:
wherein a control unit in a vehicle is configured to carry out the method. (Bauer, paragraph 0046: “These parameters can be determined on a prototype of the vehicle 101 and stored in a non-volatile memory 105 of the vehicle dynamics control system, which is connected to the reference value calculation device 104.”)
Regarding claim 13, Bauer and Junginger teach the method of claim 1, as cited above.
Bauer further teaches:
wherein a non-transitory machine- readable storage medium is configured to store a computer program product that includes program code configured to carry out the method. (Bauer, paragraph 0046: “These parameters can be determined on a prototype of the vehicle 101 and stored in a non-volatile memory 105 of the vehicle dynamics control system, which is connected to the reference value calculation device 104.”)
Regarding claim 16, Bauer and Junginger teach the method of claim 1, as cited above.
Bauer further teaches:
wherein the technical device is an electronic stability program ("ESP") module for a braking system of a vehicle. (Bauer, paragraph 0002: “Vehicle dynamics control systems, such as the well-known ESP system (ESP: Electronic Stability Program), stabilize the vehicle by compensating for a control deviation between an actual value of a vehicle state variable, measured by a vehicle sensor, and a reference value of the driving state variable by influencing the driving behavior by means of an actuator.”)
Claims 2, 3, and 5-9 are rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Junginger, and further in view of Wen et al. (Time Series Anomaly Detection Using Convolutional Neural Networks and Transfer Learning), hereinafter Wen.
Regarding claim 2, Bauer and Junginger teach the method of claim 1, as cited above.
Bauer further teaches:
the artificial neural network is divided into a base network and a head network, (Bauer, paragraph 0050: “The topology of the KNN 500 provides for several layers of neurons 201, in particular an input layer 501 and an output layer 502.” – The output layer is analogous to the head network while the input and hidden layers are analogous to the base network.)
… trained on a first technical device, and (Bauer, paragraphs 0010-0011: “Since all parameters of the further vehicle model must be determined using the artificial neural network in order to carry out the learning process, it is advantageous if the further vehicle model has as little parameter excess as possible compared to the reference vehicle model. Therefore, a further embodiment of the method and the device provides that the reference vehicle model and the other vehicle model are identical.” – Training on a reference vehicle and a further vehicle indicates that the method is capable of training on a first vehicle; therefore, the training is done on a first technical device.)
… trained on a second technical device which is identical to the first technical device. (Bauer, paragraphs 0010-0011: “Since all parameters of the further vehicle model must be determined using the artificial neural network in order to carry out the learning process, it is advantageous if the further vehicle model has as little parameter excess as possible compared to the reference vehicle model. Therefore, a further embodiment of the method and the device provides that the reference vehicle model and the other vehicle model are identical.” – Training on a reference vehicle and a further vehicle indicates that the method is capable of training on a second vehicle; therefore, the training is done on a second technical device, which is stated to be identical to the first.)
Bauer and Junginger do not explicitly teach:
in a first section of the learning phase both the base network and the head network are trained
in a second section of the learning phase only the head network is trained
However, Wen teaches:
in a first section of the learning phase both the base network and the head network are trained (Wen, section 4, paragraph 1: “The transfer learning approach uses the weights from a base model pretrained on an available large-scale data set and fine-tunes the model weights with a small-scale data set related to the task of interest.” And section 4.1: “We found two fine-tuning strategies with good performance in our tests. The first one is to set up different learning rate multipliers in 10 sections (5 encoding sections, 4 decoding sections, and the output section) as 0.01, 0.04, 0.09, … , 0.81, 1.0.” – The pretraining is analogous to the first section of a learning phase, while the division of the neural network into sections is analogous to the base network and head network, with the encoding sections being the base and the other sections being the head.)
in a second section of the learning phase only the head network is trained (Wen, section 4.2, paragraph 3: “During fine-tuning, we freeze the first four encoding sections and tune the weights of the remaining layers.” – The four encoding layers is analogous to the base network, as well as the input layer of Bauer, while the “remaining layers” is analogous to the output layer of Bauer and the head layer.)
Wen is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Bauer and Junginger, which already teach a method of training and using an artificial neural network, divided into a base network and a head network, to detect deviations of system behaviors but do not explicitly teach that the training is done in two sections where the first section trains both the base network and head network while the second section trains only the head network, to include the teachings of Wen, which does teach that the training is done in two sections where the first section trains both the base network and head network while the second section trains only the head network, in order to "resolve the data sparsity issue." (Wen, section 1, paragraph 4)
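For illustration only (not part of the record, and not drawn from Bauer, Junginger, or Wen), the two-section training scheme mapped above — first training both the base and the head, then freezing the base and fine-tuning only the head — can be sketched in minimal Python. All names, data, and values below are hypothetical:

```python
# Hypothetical sketch: a tiny model with a "base" parameter and a "head"
# parameter, trained in two sections. Section 1 updates both; section 2
# freezes the base and fine-tunes only the head, mirroring the layer
# freezing described in the fine-tuning discussion above.

# Toy data following y = 3*x + 1.
data = [(x, 3.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

w_base, w_head, bias = 1.0, 1.0, 0.0  # hypothetical initial weights
lr = 0.05                             # learning rate

def predict(x):
    h = w_base * x                # "base network": input -> feature h
    return w_head * h + bias      # "head network": feature -> output

def train(epochs, freeze_base):
    global w_base, w_head, bias
    for _ in range(epochs):
        for x, y in data:
            h = w_base * x
            err = (w_head * h + bias) - y      # squared-error gradient terms
            if not freeze_base:                # section 2 skips this update
                w_base -= lr * err * w_head * x
            w_head -= lr * err * h
            bias -= lr * err

train(epochs=200, freeze_base=False)  # first section: base + head
train(epochs=50, freeze_base=True)    # second section: head only
```

During the second `train` call, `w_base` is never touched, which is the sense in which "only the head network is trained" in the two-section scheme.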
Regarding claim 3, Bauer and Junginger teach the method of claim 2, as cited above.
Bauer further teaches:
wherein in the prediction phase both the base network and the head network are used to determine the impermissible deviation of the second technical device. (Bauer, paragraphs 0011-0012: “Therefore, a further embodiment of the method and the device provides that the reference vehicle model and the other vehicle model are identical. This avoids the need to determine parameters of the further vehicle model with the artificial neural network that are not included in the reference model, thus minimizing the number of parameters to be determined by the artificial neural network.” – Because the vehicles are identical and the further vehicle model is determined by the same artificial neural network, both the base network and the head network, as taught by Bauer above, are used in the prediction phase to determine the impermissible deviation of the second technical device.)
Regarding claim 5, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer further teaches:
wherein an output of the base network is used as an input for the head network. (Bauer, paragraph 0050: “The neurons 201 of input layer 501 receive the input signals I of the KNN 500 and forward them to output layer 502.” – As noted above, the input layer is analogous to the base network and the output layer is analogous to the head network. Thus the input layer forwarding signals to the output layer is analogous to the output of the base network being used as input for the head network.)
Regarding claim 6, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer further teaches:
wherein measured values of the second technical device are fed to the head network as an input. (Bauer, paragraphs 0011-0012: “Therefore, a further embodiment of the method and the device provides that the reference vehicle model and the other vehicle model are identical. This avoids the need to determine parameters of the further vehicle model with the artificial neural network that are not included in the reference model, thus minimizing the number of parameters to be determined by the artificial neural network.” And paragraph 0051: “The input signals I of the KNN 500, which is used to determine the estimated values of the model parameters, are driving condition variables that are measured using sensors of the vehicle 101 or determined from the measured values of vehicle sensors, and/or one or more of the quantities E that are set by the driver of the vehicle 101.” – Since this could be used with a second vehicle, the second vehicle is analogous to the second technical device. The driving condition variables used in the neural network come from the vehicle; thus this is analogous to the measured values of the second device being fed to the head network as an input.)
Regarding claim 7, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer does not explicitly teach:
wherein information about a type or a class of the second input data is fed to the head network as an input.
However, Junginger further teaches:
wherein information about a type or a class of the second input data is fed to the head network as an input. (Junginger, paragraph 0064: “Thus, by selecting switch position A, B, input variable x is labeled depending on whether input variable x is a normal datum n or an artificial datum f. Therefore, it is self-evidently also possible to feed mixed batches, which contain both normal data n as well as artificial variables f, to the discriminator.” – The input variable being labeled depending on whether it is a normal datum or artificial datum is analogous to information about a type or a class of the second input data.)
Regarding claim 8, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer and Junginger do not explicitly teach:
wherein the base network is included in a plurality of base networks to which different first input data are fed.
However, Wen further teaches:
wherein the base network is included in a plurality of base networks to which different first input data are fed. (Wen, section 4.2, paragraph 2: “However, the C channels of input series are separated by a slicing layer first, and then every channel has its own univariate encoding section, like the first four encoding sections in U-Net.” – The per-channel univariate encoding sections are analogous to a plurality of base networks, and each channel’s own input series is analogous to the different first input data fed to them.)
Regarding claim 9, Bauer, Junginger, and Wen teach the method of claim 8, as cited above.
Bauer and Junginger do not explicitly teach:
wherein an output of each of base of the plurality of base networks is fed to a common head network.
However, Wen further teaches:
wherein an output of each of base of the plurality of base networks is fed to a common head network. (Wen, section 4.2, paragraph 2: “The outputs from the fourth encoding section over all channels will be concatenated before an integrated fifth encoding section, followed by four decoding sections and a final output section, the same as U-Net. We have nicknamed this architecture MU-Net (multivariate U-Net).” – The fifth encoding section is analogous to the head network which receives the output from the previous channels.)
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Junginger and Wen, and further in view of Lee et al. (Training Deep Spiking Neural Networks Using Backpropagation), hereinafter Lee.
Regarding claim 4, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer, Junginger, and Wen do not explicitly teach:
wherein a number of neurons of the head network is smaller than a number of neurons of the base network by at least a factor of five or ten.
However, Lee teaches:
wherein a number of neurons of the head network is smaller than a number of neurons of the base network by at least a factor of five or ten. (Lee, section 3.2: “The convolutional layers produce 20 and 50 feature maps, respectively, with kernels of size 5 x 5. The output of the second pooling layer is connected to a fully connected hidden layer with 200 neurons followed by the output layer with 10 class neurons.” – The convolutional and hidden layers are analogous to the base network, while the output layer is analogous to the head network; 10 neurons is smaller than 200 neurons by a factor of twenty, which exceeds the claimed factor of ten.)
Lee is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Bauer, Junginger, and Wen, which already teach training a neural network that is divided into a base network and a head network but do not explicitly teach that the number of neurons of the head network is smaller than the number of neurons in the base network by at least a factor of five or ten, to include the teachings of Lee, which does teach that the number of neurons of the head network is smaller than the number of neurons in the base network by at least a factor of five or ten, in order to "achieve very competitive levels of accuracy." (Lee, section 3.2)
Regarding claim 10, Bauer, Junginger, and Wen teach the method of claim 2, as cited above.
Bauer, Junginger, and Wen do not explicitly teach:
wherein the second input data is subjected to pre-processing before the calculating takes place in the artificial neural network.
However, Lee teaches:
wherein the second input data is subjected to pre-processing before the calculating takes place in the artificial neural network. (Lee, page 9, Table 3 – The last row represents the CNN presented in this paper and indicates that preprocessing was performed on the input data prior to its use in the neural network.)
Lee is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Bauer, Junginger, and Wen, which already teach training a neural network to identify an impermissible deviation in a technical device using input data but do not explicitly teach that the input data is pre-processed before being fed into the neural network, to include the teachings of Lee, which does teach that the input data is pre-processed before being fed into the neural network, in order to "further improve the accuracy." (Lee, page 9, column 1)
Response to Arguments
Applicant’s arguments with respect to claim rejections under 35 USC §101 of claims 1-11 and 13 have been fully considered and are persuasive. In particular, applicant amended claim 1 to include the limitations of canceled claims 14 and 15 which were previously noted as integrating the judicial exception into a practical application.
Applicant’s arguments with respect to claim rejections under 35 USC §102 of claims 1 and 13 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACQUELINE MEYER whose telephone number is (703)756-5676. The examiner can normally be reached M-F 8:00 am - 4:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.M./Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144