Prosecution Insights
Last updated: April 19, 2026
Application No. 17/751,083

DEPENDENCY CHECKING FOR MACHINE LEARNING MODELS

Final Rejection: §101, §103
Filed: May 23, 2022
Examiner: BASOM, BLAINE T
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 2 (Final)
Grant Probability: 43% (Moderate)
OA Rounds: 3-4
To Grant: 4y 5m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 43% (140 granted / 326 resolved; -12.1% vs TC avg)
Interview Lift: strong, +22.7% among resolved cases with interview
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 38
Total Applications: 364 (career history, across all art units)
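The headline figures above follow directly from the raw counts shown on the cards. A quick check in Python; note that the exact no-interview baseline is not displayed, so computing the lift against the 43% career average is an assumption made here for illustration:

```python
# Reproduce the dashboard's headline numbers from the raw counts above.
# The 0.66 with-interview grant probability is read off the card; using the
# career average as the no-interview baseline is an assumption.
granted, resolved = 140, 326
career_allow_rate = granted / resolved        # about 0.429, shown as 43%
with_interview = 0.66                         # grant probability with interview
interview_lift = with_interview - career_allow_rate   # roughly the "+23%" lift
```

The displayed "+22.7%" lift is likely computed against the true no-interview subset rather than the blended career average, which is why this rough check lands near, but not exactly on, the dashboard figure.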

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 59.5% (+19.5% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 326 resolved cases

Office Action

§101, §103
DETAILED ACTION

This Office Action is responsive to the Applicant’s submission, filed on September 17, 2025, amending claims 1-10 and 12-20. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 15-20 are objected to because of the following informalities. Appropriate correction is required.

Regarding claim 15, there is no antecedent basis for “the estimate signal” occurring within the phrase, “check for the repeating probe signal in the estimate signal.” Before this phrase, claim 15 recites a plurality of “estimate signals” and so it is unclear as to which of the plurality of estimate signals “the estimate signal” refers.

Claims 16-20 depend from claim 15 and thereby include all of the limitations of claim 15. Accordingly, claims 16-20 are objected to under the same rationale as described with respect to claim 15.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (e.g. a mental process) without significantly more. As described in MPEP § 2106, the analysis as to whether a claim qualifies as eligible subject matter under 35 U.S.C. § 101 includes the following determinations:

(1) Whether the claim is to a statutory category, i.e. to a process, machine, manufacture or composition of matter (“Step 1”) – see MPEP §§ 2106, subsection III, and 2106.03

(2) If the claim is to a statutory category, whether the claim recites any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity, or mental processes) (“Step 2A, Prong One”) – see MPEP §§ 2106, subsection III, and 2106.04

(3) If the claim recites a judicial exception, whether the claim recites additional elements that integrate the judicial exception into a practical application (“Step 2A, Prong Two”) – see MPEP §§ 2106, subsection III, and 2106.04

(4) If the claim does not recite additional elements that integrate the judicial exception into a practical application, whether the claim recites additional elements that amount to significantly more than the judicial exception (“Step 2B”) – see MPEP §§ 2106, subsection III, and 2106.05

Claim 1

Regarding “Step 1,” independent claim 1 is to a statutory category as claim 1 is directed to a method, i.e. a process. Accordingly, the analysis proceeds to “Step 2A, Prong One” to determine if the claim recites a judicial exception. In this case, claim 1 recites a mental process and thus recites a judicial exception. “‘[T]he mental processes’ abstract idea grouping in particular is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgements, and opinions.” MPEP § 2106.04(a)(2), subsection III. “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” MPEP § 2106.04(a)(2), subsection III,B (citations omitted).
Here, claim 1 recites: “monitoring a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal; [and] checking for the oscillating perturbation in the estimate signal.” When given their broadest reasonable interpretation, these steps can practically be performed in the human mind.

Because claim 1 recites a judicial exception (i.e. a mental process), the analysis proceeds to “Step 2A, Prong Two.” But here the claim does not recite additional elements that integrate the judicial exception into a practical application. In particular, in addition to the above-noted mental process, claim 1 recites “applying an oscillating perturbation to one test signal of a plurality of test signals input into a multivariate machine learning model.” However, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application. MPEP § 2106.05(h).

Claim 1 also recites, “based on the results of the checking for the oscillating perturbation, automatically implementing a mitigation technique that reduces dependency in the machine learning model.” However, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Accordingly, as claim 1 does not recite additional elements that integrate the judicial exception into a practical application, the analysis proceeds to “Step 2B” to determine whether the claim recites additional elements that amount to significantly more than the judicial exception.
However, in this case, the claim does not. As noted above, in addition to the above-noted mental process, claim 1 recites “applying an oscillating perturbation to one test signal of a plurality of test signals input into a multivariate machine learning model.” As further noted above, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea, and therefore does not amount to significantly more than the judicial exception. Claim 1 also recites, “based on the results of the checking for the oscillating perturbation, automatically implementing a mitigation technique that reduces dependency in the machine learning model.” However, as noted above, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not amount to significantly more than the judicial exception. See MPEP § 2106.05(f).

Consequently, claim 1 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, and for the reasons described above, claim 1 is rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 2

In claim 2, the limitation of “performing a cross power spectral density transform on the one test signal and one estimate signal of the plurality of estimate signals” recites an abstract idea, i.e. a mathematical concept. The rest of the claim recites an additional abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “examining the cross power spectral density at a frequency of the oscillating perturbation to determine whether a peak is present or absent at the frequency; and in response to (i) determining that the peak is present, indicating that the multivariate machine learning model inaccurately predicts the estimate signal in an evaluation of dependency; and (ii) determining that the peak is absent, indicating that the multivariate machine learning model accurately predicts the estimate signal in the evaluation of dependency.” Consequently, claim 2 recites abstract ideas but does not include additional elements that integrate the abstract ideas into a practical application or that amount to significantly more than the abstract ideas. As a result, claim 2 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 3

In claim 3, the following limitation is considered a recitation of a mathematical concept: “inferring a coupling coefficient between the one test signal and the estimate signal based on the oscillating perturbation, wherein the coupling coefficient is a ratio of output amplitude of the one estimate signal at a frequency of the oscillating perturbation to the input amplitude of the one test signal at the frequency of the oscillating perturbation.” Also in claim 3, the recitation of “presenting the coupling coefficient in an evaluation of dependency” can be considered a mental process. Accordingly, claim 3 recites abstract ideas but does not include additional elements that integrate the abstract ideas into a practical application or that amount to significantly more than the abstract ideas. As a result, claim 3 is also rejected as being patent ineligible under 35 U.S.C. § 101.
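The probe-and-check technique recited across claims 2-5 (a sinusoidal perturbation added sample-by-sample to one test signal, with amplitude within one standard deviation of that signal, followed by a cross power spectral density peak check and an amplitude-ratio coupling coefficient) can be sketched as follows. This is an illustrative reconstruction, not the applicant's implementation; the sampling rate, probe frequency, peak threshold, and the stand-in "estimate" signal (which a real multivariate model would produce) are all assumptions.

```python
import numpy as np
from scipy import signal

fs = 100.0                               # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)             # 20 s of samples
f_probe = 5.0                            # perturbation frequency (assumed)

test_signal = np.random.default_rng(0).normal(0.0, 1.0, t.size)

# Claim 5: pick a probe amplitude within one standard deviation of the signal.
amp = 0.8 * test_signal.std()
perturbation = amp * np.sin(2 * np.pi * f_probe * t)

# Claim 4: add the perturbation value-by-value at corresponding indexes.
perturbed = test_signal + perturbation

# Stand-in for one estimate signal output by the model; a dependent model
# would partially reproduce the perturbed input, as simulated here.
estimate = 0.5 * perturbed + np.random.default_rng(1).normal(0.0, 0.1, t.size)

# Claim 2: cross power spectral density between input and estimate, then
# check for a peak at the probe frequency (threshold is an assumption).
f, Pxy = signal.csd(perturbed, estimate, fs=fs, nperseg=512)
peak_bin = np.argmin(np.abs(f - f_probe))
has_peak = np.abs(Pxy[peak_bin]) > 10 * np.median(np.abs(Pxy))

# Claim 3: coupling coefficient as the ratio of output amplitude to input
# amplitude at the probe frequency (roughly 0.5 for this toy estimate).
X = np.fft.rfft(perturbed)
Y = np.fft.rfft(estimate)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_probe))
coupling = np.abs(Y[k]) / np.abs(X[k])
```

A peak at the probe frequency in an estimate signal for a *different* variable is what the claims treat as evidence of dependency; an independent estimate would show no energy there and a coupling coefficient near zero.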
Claim 4

In claim 4, the recitation of “wherein applying the oscillating perturbation is applied to the one test signal by adding values of the one test signal and the oscillating perturbation at the corresponding indexes” is indicative of a mathematical concept. In addition to this abstract idea, claim 4 recites “generating the oscillating perturbation as a time series signal having corresponding indexes with the one test signal.” However, this is merely a characteristic of the field of use recited in claim 1. Accordingly, claim 4 does not include any additional elements (i.e. elements other than the abstract idea) that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, and is therefore also patent ineligible under 35 U.S.C. § 101.

Claim 5

Claim 5 recites “automatically selecting an amplitude of the oscillating perturbation that is within one standard deviation of the one test signal; and generating the oscillating perturbation to have the amplitude.” However, this is merely a characteristic of the field of use recited in claim 1. Accordingly, claim 5 does not include any additional elements (i.e. elements other than the abstract idea) that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, and is therefore also patent ineligible under 35 U.S.C. § 101.

Claim 6

The limitations of claim 6 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determining that the one estimate signal erroneously predicts values for the second test signal that at least partially mimic the behavior of the one test signal; and indicating that the machine learning model is subject to spillover.” Accordingly, claim 6 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea.
As a result, claim 6 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 7

The limitations of claim 7 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determining that the one estimate signal erroneously predicts values that at least partially mimic the behavior of the one test signal; and indicating that the multivariate machine learning model is subject to following.” Accordingly, claim 7 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 7 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 8

Regarding “Step 1,” independent claim 8 is to a statutory category as claim 8 is directed to a non-transitory computer-readable medium, which can be considered a manufacture or composition of matter. Accordingly, the analysis proceeds to “Step 2A, Prong One” to determine if the claim recites a judicial exception. In this case, claim 8 recites a mental process and thus recites a judicial exception. Claim 8 particularly recites: “monitor a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal; [and] check for the oscillating perturbation in the estimate signal.” When given their broadest reasonable interpretation, these tasks can practically be performed in the human mind.

Because claim 8 recites a judicial exception (i.e. a mental process), the analysis proceeds to “Step 2A, Prong Two.” But here the claim does not recite additional elements that integrate the judicial exception into a practical application.
In particular, in addition to the above-noted mental process, claim 8 recites “apply an oscillating perturbation to one test signal of a plurality of test signals input into a multivariate machine learning model.” However, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application. See MPEP § 2106.05(h).

Claim 8 also recites, “based on the results of the checking for the oscillating perturbation, automatically implement a mitigation technique that reduces dependency in the machine learning model.” However, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Claim 8 also recites that the non-transitory computer-readable medium has “stored thereon computer-executable instructions that when executed by at least a processor of a computer, cause the computer to” perform the tasks recited therein. However, this also represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Accordingly, as claim 8 does not recite additional elements that integrate the judicial exception into a practical application, the analysis proceeds to “Step 2B” to determine whether the claim recites additional elements that amount to significantly more than the judicial exception. However, in this case, the claim does not.
As noted above, in addition to the above-noted mental process, claim 8 recites “apply an oscillating perturbation to one test signal of a plurality of test signals input into a multivariate machine learning model.” As further noted above, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea, and therefore does not amount to significantly more than the judicial exception. Claim 8 also recites, “based on the results of the checking for the oscillating perturbation, automatically implement a mitigation technique that reduces dependency in the machine learning model,” and that the non-transitory computer-readable medium has “stored thereon computer-executable instructions that when executed by at least a processor of a computer, cause the computer to” perform the tasks recited therein. As noted above, however, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not amount to significantly more than the judicial exception.

Consequently, claim 8 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, and for the reasons described above, claim 8 is rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 9

In claim 9, the limitation reciting, “perform a cross power spectral density transform on the one test signal and one estimate signal of the plurality of estimate signals” recites an abstract idea, i.e. a mathematical concept. The rest of the claim recites an additional abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “examine the cross power spectral density at a frequency of the oscillating perturbation to determine whether a peak is present or absent at the frequency; and in response to (i) determining that the peak is present, indicate that the multivariate machine learning model inaccurately predicts the estimate signal in an evaluation of dependency; and (ii) determining that the peak is absent, indicate that the multivariate machine learning model accurately predicts the estimate signal in the evaluation of dependency.” Consequently, claim 9 recites abstract ideas but does not include additional elements that integrate the abstract ideas into a practical application or that amount to significantly more than the abstract ideas. As a result, claim 9 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 10

The limitations of claim 10 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “infer a coupling coefficient between the one test signal and the estimate signal based on the oscillating perturbation; and present the coupling coefficient in an evaluation of dependency.” Claim 10 recites no other limitations. Accordingly, claim 10 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 10 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 11

Claim 11 recites that “the oscillating perturbation is sinusoidal.” However, this is merely a characteristic of the field of use recited in claim 8. Accordingly, claim 11 does not include any additional elements (i.e. elements other than the abstract idea) that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, and is therefore also patent ineligible under 35 U.S.C. § 101.

Claim 12

Claim 12 recites “automatically select an amplitude of the oscillating perturbation that is within one standard deviation of the one test signal; and generate the oscillating perturbation to have the amplitude.” However, this is merely a characteristic of the field of use recited in claim 8. Accordingly, claim 12 does not include any additional elements (i.e. elements other than the abstract idea) that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, and is therefore also patent ineligible under 35 U.S.C. § 101.

Claim 13

The limitations of claim 13 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determine that the one estimate signal erroneously predicts values for the second test signal that at least partially mimic the behavior of the one test signal; and indicate that the multivariate machine learning model is subject to spillover.” Accordingly, claim 13 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 13 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 14

The limitations of claim 14 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determine that the one estimate signal erroneously predicts values that at least partially mimic the behavior of the one test signal; and indicate that the multivariate machine learning model is subject to following.” Accordingly, claim 14 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 14 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 15

Regarding “Step 1,” independent claim 15 is to a statutory category as claim 15 is directed to a computing system, which can be considered a machine or manufacture. Accordingly, the analysis proceeds to “Step 2A, Prong One” to determine if the claim recites a judicial exception. In this case, claim 15 recites a mental process and thus recites a judicial exception. In particular, claim 15 recites: “monitor a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal; [and] check for the repeating probe signal in the estimate signal.” When given their broadest reasonable interpretation, these steps can practically be performed in the human mind.

Because claim 15 recites a judicial exception (i.e. a mental process), the analysis proceeds to “Step 2A, Prong Two.” But here the claim does not recite additional elements that integrate the judicial exception into a practical application. In particular, in addition to the above-noted mental process, claim 15 recites “apply a repeating probe signal to one input signal of a plurality of input signals that are input into a multivariate machine learning model.” However, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea.
Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application. MPEP § 2106.05(h).

Claim 15 also recites, “based on the results of the checking for the repeating probe signal, automatically implement a mitigation technique that reduces dependency in the machine learning model.” However, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Also in addition to the above-noted mental process, claim 15 recites that the computing system comprises: “at least one processor connected to at least one memory; at least one network interface for communicating to one or more networks, [and] a non-transitory computer readable medium including instructions stored thereon that when executed by at least the processor cause the computing system to” perform the above-noted tasks. However, this also represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Accordingly, as claim 15 does not recite additional elements that integrate the judicial exception into a practical application, the analysis proceeds to “Step 2B” to determine whether the claim recites additional elements that amount to significantly more than the judicial exception. However, in this case, the claim does not.
As noted above, in addition to the above-noted mental process, claim 15 recites “apply a repeating probe signal to one input signal of a plurality of input signals that are input into a multivariate machine learning model.” As further noted above, such a limitation is indicative of a field of use (machine learning models) of the above-noted abstract idea, and therefore does not amount to significantly more than the judicial exception. Claim 15 also recites, “based on the results of the checking for the repeating probe signal, automatically implement a mitigation technique that reduces dependency in the machine learning model,” and that the computing system comprises: “at least one processor connected to at least one memory; at least one network interface for communicating to one or more networks, [and] a non-transitory computer readable medium including instructions stored thereon that when executed by at least the processor cause the computing system to” perform the above-noted tasks. As noted above, this represents no more than mere instructions to apply the judicial exception on a computer, and thus also does not amount to significantly more than the judicial exception.

Consequently, claim 15 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, and for the reasons described above, claim 15 is rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 16

In claim 16, the limitation of “performing a cross power spectral density transform on the input signal and one estimate signal of the plurality of estimate signals” recites an abstract idea, i.e. a mathematical concept. The rest of the claim recites an additional abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “examining the cross power spectral density at a frequency of the repeating probe signal to determine whether a peak is present or absent at the frequency; and in response to (i) determining that the peak is present, indicating that the machine learning model inaccurately predicts the estimate signal in an evaluation of dependency; and (ii) determining that the peak is absent, indicating that the machine learning model accurately predicts the estimate signal in the evaluation of dependency.” Consequently, claim 16 recites abstract ideas but does not include additional elements that integrate the abstract ideas into a practical application or that amount to significantly more than the abstract ideas. As a result, claim 16 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 17

The limitations of claim 17 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determine a severity metric between the one input signal one estimate signal of the plurality of estimate signals, wherein the severity metric quantifies an extent to which dependency adversely affects accuracy of the one estimate signal; [and] evaluate the severity metric to determine that the mitigation technique should be applied to the multivariate machine learning model.” In addition to this abstract idea, claim 17 also recites “automatically implementing the mitigation technique, wherein the mitigation technique is one or more of (a) increasing a number of training vectors used to train the multivariate machine learning model, (b) filtering input signals to reduce noise, and (c) changing a number of input signals.” However, this represents no more than mere instructions to apply the judicial exception on a computer, and thus does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. Accordingly, claim 17 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 17 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 18

Claim 18 recites “automatically select an amplitude for the repeating probe signal that is within a standard deviation of the one input signal; and generate the repeating probe signal to have the amplitude.” However, this is merely a characteristic of the field of use recited in claim 15. Accordingly, claim 18 does not include any additional elements (i.e. elements other than the abstract idea) that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea, and is therefore also patent ineligible under 35 U.S.C. § 101.

Claim 19

The limitations of claim 19 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determining that the one estimate signal erroneously predicts values for the second input signal that at least partially mimic the behavior of the one input signal; and indicating that the multivariate machine learning model is subject to spillover.” Accordingly, claim 19 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 19 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim 20

The limitations of claim 20 recite an abstract idea, i.e. a mental process, as the following can practically be performed in the human mind: “determining that the one estimate signal erroneously predicts values that at least partially mimic the behavior of the one input signal; and indicating that the multivariate machine learning model is subject to following.” Accordingly, claim 20 recites an abstract idea but does not include additional elements that integrate the abstract idea into a practical application or that amount to significantly more than the abstract idea. As a result, claim 20 is also rejected as being patent ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-8, 10, 11, 13-15, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2007/0005311 to Wegerich et al. (“Wegerich”), and also over U.S. Patent No. 8,903,747 to Eilebrecht et al. (“Eilebrecht”).

Regarding claims 1 and 8, Wegerich describes a method and system for the automated measurement of model performance in a data-driven equipment health monitoring system, the data-driven equipment health monitoring system being for use in the early detection of equipment problems and process problems in a mechanical or industrial engine or process (see e.g. paragraph 0006). Like claimed, Wegerich particularly teaches:

applying a perturbation to one test signal of a plurality of test signals input into a multivariate machine learning model (see e.g. paragraphs 0013-0014: Wegerich discloses that the equipment health monitoring system comprises an estimation engine that uses one or more models to generate estimates in response to receiving input observations. Wegerich discloses that the models can be developed using one or more machine learning techniques – see e.g. paragraphs 0014-0015. Moreover, Wegerich teaches that the models can particularly be “autoassociative” multivariate models that receive a plurality of input signals, e.g. from a plurality of sensors, and that generate a plurality of respective estimated signals corresponding to the plurality of input signals – see e.g. paragraphs 0003, 0013 and 0015-0016. In such implementations, the one or more machine learning models are considered multivariate machine learning models like claimed. Wegerich further discloses that the equipment health monitoring system also comprises a model performance module for generating performance metrics for the machine learning models used by or for potential use by the equipment health monitoring system – see e.g. paragraphs 0014 and 0019-0020.
To generate such performance metrics for a machine learning model, Wegerich teaches applying a perturbation to individual signals, i.e. variables, input to the model – see e.g. paragraphs 0023-0025 and 0028. Accordingly, Wegerich teaches applying a perturbation to an individual signal, i.e. one test signal, of the plurality of signals input to a multivariate machine learning model.); monitoring a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal (see e.g. paragraphs 0014 and 0019-0020: like noted above, Wegerich discloses that the equipment health monitoring system comprises a model performance module for generating performance metrics for the machine learning models used by or for potential use by the equipment health monitoring system – see e.g. paragraphs 0014 and 0019-0020. Wegerich particularly discloses that one of these performance metrics, spillover, measures the relative amount that variables in a model deviate from normality when another variable is perturbed – see paragraph 0028. To determine the spillover, Wegerich teaches perturbing a variable, i.e. one test signal like claimed, input to a machine learning model and determining the amount that the estimated signals output from the machine learning model, with the exception of the estimated signal corresponding to the perturbed test signal, deviate from normality – see e.g. paragraph 0028. Determining the amount that the estimated signals output from the machine learning model deviate from normal would entail monitoring the plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one perturbed test signal.); checking for the perturbation in the estimate signals (see e.g. 
paragraph 0028: like noted above, Wegerich teaches identifying the amount the estimated signals output from the machine learning model deviate from normal when a signal with perturbed data is input to the machine learning model; the deviation is indicative of the perturbation in the estimated signals.); and based on the results of the checking for the perturbation, automatically implementing a mitigation technique that reduces dependency in the machine learning model (see e.g. paragraphs 0006, 0014, 0019 and 0028: like noted above, Wegerich teaches that the performance metrics calculated for a machine learning model include spillover, which is based on the amount the estimated signals output from the machine learning model deviate from normal when a signal with perturbed data is input to the machine learning model. Wegerich further teaches automatically selecting and deploying a model based on the performance metrics, or redesigning the model as necessary – see e.g. paragraphs 0006, 0019 and 0032. That is, Wegerich teaches automatically redesigning the model based at least in part on the spillover metric, i.e. based on the results of the checking for the perturbation. As the purpose of redesigning the model would be to improve the one or more performance metrics, e.g. to reduce spillover, redesigning the model can be considered a mitigation technique that reduces dependency in the machine learning model.). Accordingly, Wegerich teaches a computer-implemented method similar to that of claim 1. Wegerich suggests that such teachings can be implemented via computer-executable instructions (i.e. software) for execution by at least a processor of a computer (see e.g. paragraphs 0014 and 0032). The non-transitory computer-readable medium necessary for storing such software is considered a non-transitory computer-readable medium similar to that of claim 8. 
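As context for the examiner's mapping, the spillover-style measurement attributed to Wegerich (perturb one test signal input to a multivariate model and observe whether the other estimate signals deviate from their unperturbed values) can be sketched in a few lines. This is an illustrative reconstruction, not code from Wegerich; the model interface, perturbation magnitude, and deviation threshold are all assumed.

```python
import numpy as np

def spillover_check(model, X, perturb_idx, magnitude=3.0, threshold=0.1):
    """Perturb one test signal and measure how far the OTHER estimate
    signals move from their unperturbed values (a spillover-style check).

    model       -- callable mapping an (n_samples, n_signals) array to an
                   equally shaped array of estimate signals (an
                   "autoassociative" multivariate model)
    X           -- baseline test data, shape (n_samples, n_signals)
    perturb_idx -- index of the one test signal to perturb
    """
    baseline = model(X)                  # estimates with no perturbation

    X_pert = X.copy()
    X_pert[:, perturb_idx] += magnitude  # apply the perturbation to one signal

    perturbed = model(X_pert)

    # Mean absolute deviation of each estimate from its baseline, ignoring
    # the estimate that corresponds to the perturbed signal itself.
    deviation = np.abs(perturbed - baseline).mean(axis=0)
    deviation[perturb_idx] = 0.0

    affected = np.nonzero(deviation > threshold)[0]
    return deviation, affected
```

If `affected` is nonempty, estimates for unperturbed signals are responding to the perturbed input, which is the condition the Action equates with spillover and treats as a trigger for a mitigation step such as model redesign.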
However, Wegerich does not disclose or suggest that the perturbations applied to the test input signal are “oscillating” like required by claims 1 and 8. Eilebrecht nevertheless describes a software optimization system that isolates an effect of a change in a control variable from the effects of changes in other variables: the system particularly applies an oscillating perturbation to a control variable input to a software system (i.e. by varying the control variable at a specific frequency), and then checks for the oscillating perturbation in an output variable (i.e. by using digital signal processing techniques) to identify the effect of the control variable on the output variable (see e.g. column 2, lines 27-48; column 3, line 52 – column 4, line 4; and column 4, line 28 – column 5, line 13). It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich and Eilebrecht before the effective filing date of the claimed invention, to modify the method and non-transitory computer-readable medium taught by Wegerich such that the perturbations applied to the test signal input to the machine learning model are oscillating like taught by Eilebrecht, and whereby the output responses (i.e. estimates) dependent on such inputs are identified by checking for the oscillating perturbation in the output response (i.e. estimate) signal. It would have been advantageous to one of ordinary skill to utilize such a combination because it can isolate the effects of the change of the input variable from the effects of other changes, as is taught by Eilebrecht (see e.g. column 3, line 52 – column 4, line 4). Accordingly, Wegerich and Eilebrecht are considered to teach, to one of ordinary skill in the art, a computer-implemented method like that of claim 1 and a non-transitory computer-readable medium like that of claim 8. 
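The Eilebrecht-style check described above (vary one input at a specific frequency, then use digital signal processing to look for that frequency in an output) might be sketched as follows. The probe frequency, amplitudes, noise levels, and the simple peak-over-median test are illustrative assumptions, not details taken from Eilebrecht.

```python
import numpy as np

def contains_probe(output, fs, f_probe, snr_ratio=5.0):
    """Return True if `output` has a spectral peak at f_probe (Hz): the
    magnitude at the probe's FFT bin must exceed snr_ratio times the
    median magnitude across all bins (a simple peak test)."""
    spectrum = np.abs(np.fft.rfft(output - output.mean()))
    freqs = np.fft.rfftfreq(len(output), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_probe))   # FFT bin nearest the probe
    return bool(spectrum[k] > snr_ratio * np.median(spectrum))

# Illustrative use: inject a small sinusoid into one input and test outputs.
fs, f_probe, n = 100.0, 5.0, 1000
t = np.arange(n) / fs
probe = 0.5 * np.sin(2 * np.pi * f_probe * t)        # oscillating perturbation
rng = np.random.default_rng(0)
coupled = 0.3 * probe + rng.normal(0.0, 0.05, n)     # output that depends on it
independent = rng.normal(0.0, 0.05, n)               # output that does not
```

Here `contains_probe(coupled, fs, f_probe)` detects the probe frequency in the dependent output while `contains_probe(independent, fs, f_probe)` does not, isolating the effect of the perturbed input from unrelated variation.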
As per claims 6 and 13, Wegerich further teaches that one estimate signal of the plurality of estimate signals can predict values for a second test signal (i.e. variable) input into the machine learning model, wherein the method generally comprises determining that the one estimate signal erroneously predicts values for the second test signal that at least partially mimic the behavior of the one test signal, and indicating that the multivariate machine learning model is subject to spillover (see e.g. paragraphs 0015 and 0028). Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a computer-implemented method like that of claim 6 and a non-transitory computer-readable medium like that of claim 13. As per claims 7 and 14, Wegerich further teaches that the one estimate signal of the plurality of estimate signals predicts values for the one test signal (i.e. variable), wherein the method comprises determining that the one estimate signal erroneously predicts values that at least partially mimic the behavior of the one test signal, and indicating that the multivariate machine learning model is subject to following in the evaluation of the dependency of the machine learning model (see e.g. paragraphs 0015 and 0023-0025). Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a computer-implemented method like that of claim 7 and a non-transitory computer-readable medium like that of claim 14. As per claim 10, Wegerich teaches determining a spillover metric based on the applied perturbation, wherein the spillover metric quantifies an extent to which the one test signal (i.e. a variable) influences an estimate signal output by the machine learning model (see e.g. paragraph 0028). Such a metric is considered a coupling coefficient like claimed between the test signal and the estimate signal. Wegerich further suggests that the spillover metric is presented (e.g. 
to an engineer) in an evaluation of the dependency (see e.g. paragraph 0014). As described above, it would have been obvious to modify the non-transitory computer-readable medium taught by Wegerich such that the perturbations applied to the test signal are oscillating like taught by Eilebrecht, and whereby the responses (i.e. estimates) output by the machine learning model are identified by checking for the oscillating perturbation in the output (i.e. estimate) signal. It thus follows that the spillover metric, which as noted above is determined based on the perturbation, would particularly be inferred based on the oscillating perturbation like claimed. Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a non-transitory computer-readable medium like that of claim 10. As per claim 11, it would have been obvious, as is described above, to modify the non-transitory computer-readable medium taught by Wegerich such that the perturbations applied to the one test signal are oscillating like taught by Eilebrecht, and whereby the responses (i.e. estimates) output by the machine learning model are identified by checking for the oscillating perturbation in the output (i.e. estimate) signal. Eilebrecht particularly teaches that the oscillating perturbation can be generated as a sinusoidal waveform (see e.g. column 3, lines 52-63; and column 4, lines 28-40). Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a non-transitory computer-readable medium like that of claim 11.

Regarding claim 15, and like noted above, Wegerich describes a method and system for the automated measurement of model performance in a data-driven equipment health monitoring system, the data-driven equipment health monitoring system being for use in the early detection of equipment problems and process problems in a mechanical or industrial engine or process (see e.g. paragraph 0006). 
Like claimed, Wegerich particularly teaches: applying a probe signal to one input signal of a plurality of input signals that are input into a multivariate machine learning model (see e.g. paragraphs 0013-0014: Wegerich discloses that the equipment health monitoring system comprises an estimation engine that uses one or more models to generate estimates in response to receiving input observations. Wegerich discloses that the models can be developed using one or more machine learning techniques – see e.g. paragraphs 0014-0015. Moreover, Wegerich teaches that the models can particularly be “autoassociative” multivariate models that receive a plurality of input signals, e.g. from a plurality of sensors, and that generate a plurality of respective estimated signals corresponding to the plurality of input signals – see e.g. paragraphs 0003, 0013 and 0015-0016. In such implementations, the one or more machine learning models are considered multivariate machine learning models like claimed. Wegerich further discloses that the equipment health monitoring system also comprises a model performance module for generating performance metrics for the machine learning models used by or for potential use by the equipment health monitoring system – see e.g. paragraphs 0014 and 0019-0020. To generate such performance metrics for a machine learning model, Wegerich teaches applying a perturbation to individual signals, i.e. variables, input to the model – see e.g. paragraphs 0023-0025 and 0028. The perturbation is considered a “probe signal” like claimed, which is applied to one input signal of a plurality of input signals that are input into a multivariate machine learning model.); monitoring a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal (see e.g. 
paragraphs 0014 and 0019-0020: like noted above, Wegerich discloses that the equipment health monitoring system comprises a model performance module for generating performance metrics for the machine learning models used by or for potential use by the equipment health monitoring system – see e.g. paragraphs 0014 and 0019-0020. Wegerich particularly discloses that one of these performance metrics, spillover, measures the relative amount that variables in a model deviate from normality when another variable is perturbed – see paragraph 0028. To determine the spillover, Wegerich teaches perturbing a variable, i.e. one test signal like claimed, input to a machine learning model and determining the amount that the estimated signals output from the machine learning model, with the exception of the estimated signal corresponding to the perturbed test signal, deviate from normality – see e.g. paragraph 0028. Determining the amount that the estimated signals output from the machine learning model deviate from normal would entail monitoring the plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one perturbed test signal.); checking for the probe signal in the estimate signal (see e.g. paragraphs 0025 and 0028: like noted above, Wegerich teaches identifying the amount the estimated signals deviate from normal when a signal with perturbed data is input to the machine learning model; the deviation is indicative of the perturbation in the estimated signals.); and based on the results of the checking for the probe signal, automatically implementing a mitigation technique that reduces dependency in the machine learning model (see e.g. 
paragraphs 0006, 0014, 0019 and 0028: like noted above, Wegerich teaches that the performance metrics calculated for a machine learning model include spillover, which is based on the amount the estimated signals output from the machine learning model deviate from normal when a signal with perturbed data is input to the machine learning model. Wegerich further teaches automatically selecting and deploying a model based on the performance metrics, or redesigning the model as necessary – see e.g. paragraphs 0006, 0019 and 0032. That is, Wegerich teaches automatically redesigning the model based at least in part on the spillover metric, i.e. based on the results of the checking for the probe signal. As the purpose of redesigning the model would be to improve the one or more performance metrics, e.g. to reduce spillover, redesigning the model can be considered a mitigation technique that reduces dependency in the machine learning model.). Wegerich suggests that such teachings can be implemented via computer-executable instructions (i.e. software) for execution by at least a processor of a computer system connected to a network (see e.g. paragraphs 0014 and 0032). Such a computer system, which would necessarily comprise a processor connected to at least one memory, at least one network interface for communicating with one or more networks, and a non-transitory computer readable medium storing the software for execution by the processor, is considered a computing system similar to that of claim 15. However, Wegerich does not disclose or suggest that the probe signal applied to the input signal is “repeating” like required by claim 15. Eilebrecht nevertheless describes a software optimization system that isolates an effect of a change in a control variable from the effects of changes in other variables: the system particularly applies a repeating probe signal to a control variable input to a software system (i.e. 
by varying the control variable at a specific frequency), and then checks for the repeating probe signal in an output variable (i.e. by using digital signal processing techniques) to identify the effect of the control variable on the output variable (see e.g. column 2, lines 27-48; column 3, line 52 – column 4, line 4; and column 4, line 28 – column 5, line 13). It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich and Eilebrecht before the effective filing date of the claimed invention, to modify the computing system taught by Wegerich such that the probe signal applied to the input signal input to the machine learning model is repeating like taught by Eilebrecht, and whereby the output responses (i.e. estimates) dependent on such inputs are identified by checking for the repeating probe signal in the output response (i.e. estimate) signal. It would have been advantageous to one of ordinary skill to utilize such a combination because it can isolate the effects of the change of the input variable from the effects of other changes, as is taught by Eilebrecht (see e.g. column 3, line 52 – column 4, line 4). Accordingly, Wegerich and Eilebrecht are considered to teach, to one of ordinary skill in the art, a computing system like that of claim 15. As per claim 17, Wegerich further suggests that if the probe signal (i.e. perturbation) appears in the estimate signal, the computer system (i) determines a severity metric (e.g. 
a spillover) between the one input signal and the one estimate signal of the plurality of estimate signals, wherein the severity metric quantifies an extent to which dependency adversely affects accuracy of the one estimate signal, (ii) evaluates the severity metric to determine that the mitigation technique should be applied to the multivariate machine learning model, and (iii) automatically implements the mitigation technique, wherein the mitigation technique is one or more of (a) increasing a number of training vectors used to train the multivariate machine learning model, (b) filtering input signals to reduce noise, and (c) changing a number of input signals (see e.g. paragraphs 0006-0007: like noted above, Wegerich teaches automatically redesigning the machine learning model if necessary based on performance metrics of the model, including spillover. Wegerich is thus considered to teach determining severity metrics, including a spillover between one input signal and the one estimate signal output of the machine learning model, and redesigning the model if necessary based on an evaluation of the severity metrics. As noted above, redesigning the machine learning model is considered a mitigation technique to reduce dependency in the model. Wegerich teaches that models can differ according to which variables are selected to be grouped into the model, or the data used to train the model – see e.g. paragraph 0032. Accordingly, it is apparent that redesigning the machine learning model, i.e. the mitigation technique, can comprise changing the data used to train the model and/or the input variables of the model. Wegerich thus teaches that the mitigation technique can comprise increasing a number of training vectors used to train the multivariate machine learning model or changing the number of input signals.). 
As described above, it would have been obvious to modify the computing system taught by Wegerich such that the probe signal is repeating like taught by Eilebrecht. Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a computing system like that of claim 17. As per claim 19, Wegerich further teaches that the one estimate signal of the plurality of estimate signals can predict values for a second input signal (i.e. variable) input into the multivariate machine learning model, wherein the computing system further determines that the one estimate signal erroneously predicts values for the second input signal that at least partially mimic the behavior of the one input signal, and indicates that the multivariate machine learning model is subject to spillover (see e.g. paragraphs 0015 and 0028). Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a computing system like that of claim 19. As per claim 20, Wegerich further teaches that one estimate signal of the plurality of estimate signals predicts values for the one input signal (i.e. variable), wherein the computing system further determines that the one estimate signal erroneously predicts values that at least partially mimic the behavior of the one input signal, and indicates that the multivariate machine learning model is subject to following (see e.g. paragraphs 0015 and 0023-0025). Accordingly, the above-described combination of Wegerich and Eilebrecht is further considered to teach a computing system like that of claim 20. Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wegerich and Eilebrecht, which is described above, and also over the article entitled, “Multi-Frequency Sinusoidal Perturbation Method for Dynamic Characterization of Multi-Processor Computer Servers” by Schuster et al. (“Schuster”). 
Regarding claims 2 and 9, Wegerich and Eilebrecht teach a computer-implemented method like that of claim 1 and a non-transitory computer-readable medium like that of claim 8, as are described above, and which entail applying an oscillating perturbation to one test signal input to a machine learning model and checking for the oscillating perturbation in a plurality of estimate signals output from the machine learning model. Wegerich suggests that the machine learning model is indicated as inaccurately predicting the estimate signal if the output is affected by the perturbed input (see e.g. paragraph 0025). Moreover, Eilebrecht generally teaches that digital signal processing techniques can be used to determine the effect of the input signal on the output signal (see e.g. column 4, line 28 – column 5, line 13). Wegerich and Eilebrecht, however, do not specifically teach: (a) performing a cross power spectral density transform on the one test signal and one estimate signal of the plurality of estimate signals; (b) examining the cross power spectral density at a frequency of the oscillating perturbation to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, indicating that the multivariate machine learning model inaccurately predicts the estimate signal in an evaluation of dependency, and (ii) determining that the peak is absent, indicating that the multivariate machine learning model accurately predicts the estimate signal in the evaluation of dependency, as is required by claims 2 and 9. Similar to Wegerich and Eilebrecht, Schuster generally teaches identifying a relationship between an input variable of a system and an output variable by applying an oscillating perturbation (i.e. 
a sinusoidal excitation) to the input variable and checking for the oscillating perturbation in the output variable: Dynamic characterization of complex systems such as enterprise compute servers and web servers can be achieved by introducing perturbations in one or more “input” variables, and measuring the time-dependent responses in one or more “response” variables. We quantify this relationship between input variables and response variables with a “dynamic coupling coefficient,” which may be a function of load, or, more generally, may be a multivariate function of very many input variables. In a dynamically executing system such as a web server, distributed synthetic transaction generators can be employed for real-time continuous monitoring of system transaction latencies. These “canary tests” provide QoS performance metrics on a 24*7 basis as a dynamical function of system load. Specifically, in order to measure the impact of some performance parameter X on another performance parameter Y, the synthetic transactions introduce an (ideally small) perturbation in X, from which the resulting effect on parameter Y, if any, can be measured. As an example, one might compress a 10 Mbyte file and attempt to discern the temperature effect on one or more ASIC modules on a system board. Using time domain techniques, such a measurement would very likely be impossible on a large, multi-user, multi-cpu server, because of the extremely small effect one is seeking to discern and the poor signal-to-noise ratio. The well-known sinusoidal excitation technique for estimation of transfer functions [1] allows us to translate this input-output effect to the frequency domain. The advantage of working with this technique is that we concentrate our effort on a few number of frequency points (the frequencies of the sinusoidal excitations) where the correlation or coupling between variables is clearly seen. 
The technique has been already adapted by one of the authors for the dynamic system characterization of chaotic, nonlinearly interacting physical variables in nuclear power plants [2, 3]. This method, used now for dynamical system characterization of large, multi-processor servers, is an elegant and powerful exploratory analysis tool to characterize complex system behavior, particularly the relationships among various dynamic system parameters. (Pages 1-2; emphasis added). Regarding the claimed invention, Schuster particularly teaches that checking for the oscillating perturbation can comprise: (a) performing a cross power spectral density transform (i.e. cross spectrum density, “CSD,” in the following excerpt) on the input variable and the output variable; (b) examining the cross power spectral density at a frequency of the oscillating perturbation to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, concluding that the output variable is dependent on the input variable, and (ii) determining that the peak is absent, concluding that the output variable is independent of the input variable: As an example of this technique, a sinusoidal perturbation with period of 15 minutes is generated in one of the input variables. This sinusoid has a very small amplitude in comparison with normal variations in user load patterns (typically < 1% of nominal variations). One or several synthetic client transactions (called the “canary” variables) are launched to execute typical user transactions (example: file compression, table lookup, inversion of a small matrix, sort a linear list). The response times for these canary tests are recorded to produce continuous time series that reflect QoS from the end-user perspective. 
In parallel with the canary tests, a large suite of system performance, throughput, and transaction latency variables as well as physical variables are recorded on a 24*7 basis using a continuous system telemetry harness, which has been separately developed by Sun Microsystems for high end UNIX ® servers. Figure 5 plots the PSD of the response time of the synthetic client (canary variable) as a function of frequency. The stochastic noise (chaotic user load) associated with the canary signal is so high that the test period of 15 minutes is not discernable in the univariate PSD. This illustrates that characterization of typical web-server performance metrics is not amenable to univariate spectral decomposition calculations via conventional Fourier analysis because the signal-to-noise ratio is too small to discern the sinusoidal perturbation in PSD of the response variables. To overcome this limitation of conventional Fourier analysis methods, we employ the CSD, a bi-variate diagnostic technique that is highly sensitive, even to weakly coupled parameters with very poor signal-to-noise ratios, dramatically and selectively amplifying the input sinusoid harmonics in response variables such that the period of the sinusoidal perturbation in the control variable is readily apparent with excellent peak resolution and low noise and “side lobe” contamination [6]. Typical results are illustrated in Figure 5. The presence of a peak in the CSD is evidence of a common periodicity and, hence, a cause-and-effect relationship between the sinusoidal perturbations in the load and the system variables as well as the canary variables. In the CSD subplot in Figure 5, a well-defined peak corresponding to the period of the sinusoid is readily observable, implying a common periodicity and, hence, a cause-and-effect relationship between the sinusoidal perturbation in the load and the synthetic client’s response time. (Pages 13-14; emphasis added). 
[Greyscale image: media_image1.png]

It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich, Eilebrecht and Schuster before the effective filing date of the claimed invention, to modify the method and non-transitory computer-readable medium taught by Wegerich and Eilebrecht such that the check for the oscillating perturbation comprises: (a) performing a cross power spectral density transform on the input variable (i.e. the one test signal) and the output variable (i.e. each estimate signal of the plurality of estimate signals); (b) examining the cross power spectral density at a frequency of the oscillating perturbation to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, concluding that the output variable is dependent on the input variable (and thus indicating that the multivariate machine learning model inaccurately predicts the estimate signal in the evaluation of dependency), and (ii) determining that the peak is absent, concluding that the output variable is independent of the input variable (and thus indicating that the multivariate machine learning model accurately predicts the estimate signal in the evaluation of dependency), as is taught by Schuster. It would have been advantageous to one of ordinary skill to utilize such a cross power spectral density transform because it “is highly sensitive, even to weakly coupled parameters with very poor signal-to-noise ratios, dramatically and selectively amplifying the input sinusoid harmonics in response variables such that the period of the sinusoidal perturbation in the control variable is readily apparent with excellent peak resolution and low noise and ‘side lobe’ contamination,” as is taught by Schuster (see page 14; internal citation omitted). 
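Schuster's CSD technique (compute the cross power spectral density of the perturbed input and a response variable, then look for a peak at the perturbation frequency) can be approximated with `scipy.signal.csd`. The amplitudes, noise levels, segment length, and peak-over-median criterion below are illustrative assumptions, not Schuster's actual parameters.

```python
import numpy as np
from scipy.signal import csd

def csd_peak_present(x, y, fs, f_probe, ratio=5.0, nperseg=200):
    """Compute the cross power spectral density (CSD) of x and y and report
    whether a peak stands out at the probe frequency, taken here as
    evidence of a cause-and-effect coupling between the perturbed input x
    and the response y."""
    f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)
    mag = np.abs(Pxy)
    k = np.argmin(np.abs(f - f_probe))      # frequency bin nearest the probe
    return bool(mag[k] > ratio * np.median(mag))

# Illustrative setup: a sinusoidal perturbation buried in the input noise
# (small relative to nominal variations, as Schuster describes), with one
# response variable coupled to it and one independent of it.
fs, f_probe, n = 100.0, 5.0, 16384
t = np.arange(n) / fs
rng = np.random.default_rng(42)
x = 0.1 * np.sin(2 * np.pi * f_probe * t) + rng.normal(0.0, 1.0, n)
y_coupled = 0.5 * np.sin(2 * np.pi * f_probe * t) + rng.normal(0.0, 0.3, n)
y_free = rng.normal(0.0, 0.3, n)
```

A CSD peak for `y_coupled` but not for `y_free` reproduces, in miniature, the cause-and-effect inference Schuster draws from the peak in Figure 5; the probe is too weak to see in a univariate PSD of either signal alone, which is the scenario where the bivariate CSD is claimed to help.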
Accordingly, Wegerich, Eilebrecht and Schuster are considered to teach, to one of ordinary skill in the art, a computer-implemented method like that of claim 2 and a non-transitory computer-readable medium like that of claim 9. Regarding claim 16, Wegerich and Eilebrecht teach a computing system like that of claim 15, as is described above, which applies a repeating probe signal to one input signal that is input to a multivariate machine learning model and checks for the repeating probe signal in one or more estimate signals output from the machine learning model. Wegerich suggests that the machine learning model is indicated as inaccurately predicting the estimate signal if the output is affected by the input (see e.g. paragraph 0025). Moreover, Eilebrecht generally teaches that digital signal processing techniques can be used to determine the effect of the input signal on the output signal (see e.g. column 4, line 28 – column 5, line 13). Wegerich and Eilebrecht, however, do not specifically teach: (a) performing a cross power spectral density transform on the one input signal and one estimate signal of the plurality of estimate signals; (b) examining the cross power spectral density at a frequency of the repeating probe signal to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, indicating that the machine learning model inaccurately predicts the estimate signal in an evaluation of dependency, and (ii) determining that the peak is absent, indicating that the machine learning model accurately predicts the estimate signal in the evaluation of dependency, as is required by claim 16. Like noted above, Schuster similarly teaches identifying a relationship between an input variable of a system and an output variable by applying a repeating probe signal (i.e. an oscillating perturbation) to the input variable and checking for the repeating probe signal in the output variable (see e.g. 
the portions of pages 1-2 of Schuster excerpted above). Regarding the claimed invention, Schuster particularly teaches that checking for the repeating probe signal can comprise: (a) performing a cross power spectral density transform (i.e. cross spectrum density, “CSD”) on the input variable and the output variable; (b) examining the cross power spectral density at a frequency of the repeating probe signal to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, concluding that the output variable is dependent on the input variable, and (ii) determining that the peak is absent, concluding that the output variable is independent of the input variable (see e.g. the portions of pages 13-14 and FIG. 5 of Schuster excerpted above). It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich, Eilebrecht and Schuster before the effective filing date of the claimed invention, to modify the computing system taught by Wegerich and Eilebrecht such that the check for the repeating probe signal comprises: (a) performing a cross power spectral density transform on the input variable (i.e. the input signal) and the output variable (i.e. 
each estimate signal of the plurality of estimate signals); (b) examining the cross power spectral density at a frequency of the repeating probe signal to determine whether a peak is present or absent at the frequency; and (c) in response to (i) determining that the peak is present, concluding that the output variable is dependent on the input variable (and thus indicating that the machine learning model inaccurately predicts the estimate signal in an evaluation of dependency), and (ii) determining that the peak is absent, concluding that the output variable is independent of the input variable (and thus indicating that the machine learning model accurately predicts the estimate signal in the evaluation of dependency), as is taught by Schuster.

It would have been advantageous to one of ordinary skill to utilize such a cross power spectral density transform because it “is highly sensitive, even to weakly coupled parameters with very poor signal-to-noise ratios, dramatically and selectively amplifying the input sinusoid harmonics in response variables such that the period of the sinusoidal perturbation in the control variable is readily apparent with excellent peak resolution and low noise and ‘side lobe’ contamination,” as is taught by Schuster (see page 14; internal citation omitted).

Accordingly, Wegerich, Eilebrecht and Schuster are considered to teach, to one of ordinary skill in the art, a computing system like that of claim 16.

Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wegerich and Eilebrecht, which is described above, and also over U.S. Patent Application Publication No. 2016/0048125 to Cheta et al. (“Cheta”).
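For illustration only, the cross power spectral density peak check discussed in the claim 16 analysis above can be sketched in Python. This is a hypothetical sketch, not code from the application or the cited references: the signal definitions, the 5 Hz probe frequency, and the 0.3 decision threshold are all assumptions, and SciPy's magnitude-squared coherence (a normalized form of the CSD) is used here so the peak test is insensitive to signal scale.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                    # sampling rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)  # 60 seconds of samples
f_probe = 5.0                 # frequency of the repeating probe signal

# Hypothetical signals: the probe rides on one model input; one "estimate"
# output leaks the probe (dependent), the other does not (independent).
probe = 0.5 * np.sin(2 * np.pi * f_probe * t)
input_signal = rng.normal(size=t.size) + probe
dependent_est = 0.3 * probe + rng.normal(size=t.size)
independent_est = rng.normal(size=t.size)

def probe_peak_present(x, y, f_probe, fs, threshold=0.3):
    """Check for a peak at f_probe in the (normalized) cross power spectrum."""
    f, cxy = signal.coherence(x, y, fs=fs, nperseg=512)
    return bool(cxy[np.argmin(np.abs(f - f_probe))] > threshold)

print(probe_peak_present(input_signal, dependent_est, f_probe, fs))    # True: peak present
print(probe_peak_present(input_signal, independent_est, f_probe, fs))  # False: peak absent
```

Under the mapping in the rejection, a peak at the probe frequency would indicate that the model inaccurately predicts that estimate signal (dependency present), and an absent peak would indicate accurate prediction.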
Regarding claim 3, Wegerich and Eilebrecht teach a computer-implemented method like that of claim 1, as is described above, and which entails applying an oscillating perturbation to one test signal input to a multivariate machine learning model and checking for the oscillating perturbation in a plurality of estimate signals output from the machine learning model. Wegerich further teaches presenting (e.g. to an engineer) an evaluation of dependency (e.g. spillover) between the input and output signals (see e.g. paragraph 0014). Moreover, Eilebrecht generally teaches that digital signal processing techniques can be used to determine the effect of the input signal on the output signal (see e.g. column 4, line 28 – column 5, line 13).

Wegerich and Eilebrecht, however, do not explicitly teach inferring a coupling coefficient between the one test signal and one estimate signal of the plurality of estimate signals based on the oscillating perturbation, wherein the coupling coefficient is a ratio of output amplitude of the one estimate signal at a frequency of the oscillating perturbation to the input amplitude of the one test signal at the frequency of the oscillating perturbation, and presenting the coupling coefficient in the evaluation of the dependency, as is required by claim 3.

Similar to Wegerich and Eilebrecht, Cheta generally teaches applying an oscillating perturbation to a signal input to a system, monitoring a signal output from the system, and checking for the oscillating perturbation in the output signal (see e.g. paragraphs 0004, 0015, 0026-0027 and 0029-0031). Regarding the claimed invention, Cheta particularly teaches inferring a coupling coefficient (i.e.
frequency response) between the input signal and the output signal based on the oscillating perturbation, wherein the coupling coefficient is a ratio of output amplitude of the output signal at a frequency of the oscillating perturbation to the input amplitude of the input signal at the frequency of the oscillating perturbation (see e.g. paragraphs 0034-0036).

It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich, Eilebrecht and Cheta before the effective filing date of the claimed invention, to modify the computer-implemented method taught by Wegerich and Eilebrecht so as to infer a coupling coefficient, as taught by Cheta, between the input signal (i.e. the one test signal) and the output signal (i.e. one estimate signal of the plurality of estimate signals) based on the oscillating perturbation, wherein the coupling coefficient is a ratio of output amplitude of the output signal at a frequency of the oscillating perturbation to the input amplitude of the input signal at the frequency of the oscillating perturbation.

It would have been advantageous to one of ordinary skill to utilize such a combination, because the coupling coefficient is useful in characterizing the relationship between the input and output signals, as is evident from Cheta (see e.g. paragraphs 0054-0055).

Accordingly, Wegerich, Eilebrecht and Cheta are considered to teach, to one of ordinary skill in the art, a computer-implemented method like that of claim 3.

Regarding claim 4, Wegerich and Eilebrecht teach a computer-implemented method like that of claim 1, as is described above, and which entails applying an oscillating perturbation to one test signal input to a multivariate machine learning model and checking for the oscillating perturbation in a plurality of estimate signals output from the machine learning model.
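A minimal sketch of the amplitude-ratio coupling coefficient described in the claim 3 analysis above, assuming Python with NumPy; the signals, the 4 Hz perturbation frequency, and the 0.25 true coupling value are illustrative assumptions rather than values taken from the references:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                    # sampling rate in Hz (illustrative)
t = np.arange(0, 20, 1 / fs)
f_pert = 4.0                  # oscillating-perturbation frequency

# Hypothetical test signal and estimate signal; the model attenuates the
# perturbation by a true coupling of 0.25.
test_signal = np.sin(2 * np.pi * f_pert * t) + 0.05 * rng.normal(size=t.size)
estimate_signal = 0.25 * np.sin(2 * np.pi * f_pert * t) + 0.05 * rng.normal(size=t.size)

def coupling_coefficient(x, y, f_pert, fs):
    """Ratio of output amplitude to input amplitude at the perturbation frequency."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    k = np.argmin(np.abs(freqs - f_pert))  # FFT bin nearest the perturbation
    return np.abs(np.fft.rfft(y))[k] / np.abs(np.fft.rfft(x))[k]

print(coupling_coefficient(test_signal, estimate_signal, f_pert, fs))  # close to 0.25
```

A coefficient near zero at the perturbation frequency would suggest independence; larger values quantify how strongly the estimate signal tracks the perturbed input.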
Wegerich and Eilebrecht, however, do not explicitly teach generating the oscillating perturbation as a time series having corresponding indexes with the one test signal, wherein the oscillating perturbation is applied to the one test signal by adding values of the one test signal and the oscillating perturbation at the corresponding indexes, as is required by claim 4.

As noted above, Cheta generally teaches applying an oscillating perturbation to a signal input to a system, monitoring a signal output from the system, and checking for the oscillating perturbation in the output signal (see e.g. paragraphs 0004, 0015, 0026-0027 and 0029-0031). Regarding the claimed invention, Cheta particularly teaches generating the oscillating perturbation as a time series (e.g. a waveform) having corresponding indexes (e.g. temporal periods, which are inherently related to the frequency of the waveform) with the input signal, wherein the oscillating perturbation is applied to the input signal by adding values of the input signal and the oscillating perturbation at the corresponding indexes (see e.g. paragraphs 0026-0027 and 0029).

It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich, Eilebrecht and Cheta before the effective filing date of the claimed invention, to modify the computer-implemented method taught by Wegerich and Eilebrecht so as to generate the oscillating perturbation as a time series having corresponding indexes with the input signal (i.e. the one test signal), wherein the oscillating perturbation is applied to the input signal by adding values of the input signal and the oscillating perturbation at the corresponding indexes, as is taught by Cheta. It would have been advantageous to one of ordinary skill to utilize such a combination, because it would enable the relationship between the input and output signals of the system to be identified during actual operation of the system, as is evident from Cheta (see e.g.
paragraphs 0054-0055).

Accordingly, Wegerich, Eilebrecht and Cheta are considered to teach, to one of ordinary skill in the art, a computer-implemented method like that of claim 4.

Claims 5, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wegerich and Eilebrecht, which is described above, and also over the article entitled, “Determination of Sensor Quality of Calibration Using Advanced Data Analytics and Machine Learning Methods” by Cetiner et al. (“Cetiner”).

Regarding claims 5, 12 and 18, Wegerich and Eilebrecht teach a computer-implemented method like that of claim 1, a non-transitory computer-readable medium like that of claim 8, and a computing system like that of claim 15, as are described above, and which entail applying an oscillating perturbation (or repeating probe signal like in claim 15) to one test signal input to a machine learning model and checking for the oscillating perturbation in a plurality of estimate signals output from the machine learning model. Wegerich further suggests automatically selecting an amplitude of the perturbation/probe signal that is within a data range of the one test signal, and generating the perturbation to have the amplitude (see e.g. paragraph 0024).

Wegerich and Eilebrecht, however, do not explicitly teach that the selected amplitude is within one standard deviation of the one input signal, as is required by claims 5, 12 and 18.

Similar to Wegerich and Eilebrecht, Cetiner teaches measuring sensitivity (e.g. auto-sensitivity, cross-sensitivity) in a multivariate machine learning model by applying a perturbation (i.e. artificial drift) to one test signal input to the model, and identifying the resulting effect in estimate signals output from the model (see e.g. section 3.2.2 “Performance Evaluation Metrics”). Cetiner suggests that the selected amplitude of the perturbation is within one standard deviation of the test signal (see e.g. section 3.2.2 “Performance Evaluation Metrics”).
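The perturbation-generation steps discussed above for claims 4, 5, 12 and 18 (a time series added to the test signal index by index, with an amplitude selected within one standard deviation of that signal) might be sketched as follows. This is a hypothetical illustration; the test signal, the 3 Hz frequency, and the choice of half a standard deviation as the amplitude are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                    # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)

# Hypothetical test signal sampled at the same indexes as the perturbation.
test_signal = 5.0 + rng.normal(scale=2.0, size=t.size)

# Automatically select an amplitude within one standard deviation of the
# test signal, then generate the oscillating perturbation as a time series
# with indexes corresponding to the test signal.
amplitude = 0.5 * np.std(test_signal)  # 0.5 sigma, i.e. within one standard deviation
f_pert = 3.0                           # perturbation frequency (illustrative)
perturbation = amplitude * np.sin(2 * np.pi * f_pert * t)

# Apply the perturbation by adding values at the corresponding indexes.
perturbed_signal = test_signal + perturbation
```

Keeping the amplitude inside one standard deviation keeps the perturbed input within the signal's normal operating range, which is the point of the claimed selection step.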
It would have been obvious to one of ordinary skill in the art, having the teachings of Wegerich, Eilebrecht and Cetiner before the effective filing date of the claimed invention, to modify the method, non-transitory computer-readable medium and computing system taught by Wegerich and Eilebrecht such that the selected amplitude of the perturbation/probe signal is within one standard deviation of the one input signal (i.e. one test signal), as is taught by Cetiner. It would have been advantageous to one of ordinary skill to utilize such a combination, because it would enable the dependency (i.e. cross-sensitivity) between the input test signal and the output estimate signals to be identified, as is evident from Cetiner (see e.g. section 3.2.2 “Performance Evaluation Metrics”).

Accordingly, Wegerich, Eilebrecht and Cetiner are considered to teach, to one of ordinary skill in the art, a computer-implemented method like that of claim 5, a non-transitory computer-readable medium like that of claim 12, and a computing system like that of claim 18.

Response to Arguments

The Examiner acknowledges the Applicant’s amendments to claims 1-10 and 12-20.

Regarding the 35 U.S.C. § 101 rejections, the Applicant argues that the recitation in claim 1 of “based on the results of the checking for the oscillating perturbation, automatically implementing a mitigation technique that reduces dependency in the machine learning model” integrates the judicial exception into a practical application. The Examiner however respectfully disagrees. Claim 1 does not recite any details of the automatically implemented mitigation technique; instead it recites only the idea of a solution or outcome (i.e. that the mitigation technique “reduces dependency in the machine learning model”). Moreover, claim 1 invokes a computer merely as a tool to perform an existing process (i.e. to automatically implement the mitigation technique).
Accordingly, as noted above, the Examiner respectfully maintains that the recitation in claim 1 of “based on the results of the checking for the oscillating perturbation, automatically implementing a mitigation technique that reduces dependency in the machine learning model” represents no more than mere instructions to apply the judicial exception on a computer, and thus does not integrate the judicial exception into a practical application. See MPEP § 2106.05(f).

Further regarding the 35 U.S.C. § 101 rejections, the Applicant argues that the recitations in claim 1 of “applying an oscillating perturbation to a test signal input into a machine learning model” and “monitoring an estimate signal output from the machine learning model” cannot practically be performed in the human mind.

In response, the Examiner respectfully maintains that the recitation of “monitoring an estimate signal output from the machine learning model” is indicative of a mental process. “‘[T]he mental processes’ abstract idea grouping in particular is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgements, and opinions.” MPEP § 2106.04(a)(2), subsection III. Here, even though the signal is produced by e.g. a computer, monitoring such signal is an observation or evaluation that can practically be performed in the human mind.

The Examiner further respectfully submits that the recitation of “applying an oscillating perturbation to a test signal input into a machine learning model” is indicative of a field of use (machine learning models) of the judicial exception. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application. MPEP § 2106.05(h).
In this case, the recitation of “applying an oscillating perturbation to a test signal input into a machine learning model” indicates a field of use (i.e. machine learning models) for the above-noted mental process steps of “monitoring a plurality of estimate signals output from the multivariate machine learning model, wherein the estimate signals predict expected values of test signals other than the one test signal” and “checking for the oscillating perturbation in the estimate signals.”

Further regarding the 35 U.S.C. § 101 rejections, the Applicant argues that the claims recite an improvement to the technology of machine learning models, particularly, increased accuracy at lower compute cost when performing dependency checking. The Examiner, however, respectfully disagrees and submits that the purported improvements are not necessarily reflected in the claims. For example, increased accuracy does not necessarily follow from the general recitation of “checking for the oscillating perturbation in the estimate signals,” when given its broadest construction.

Regarding the 35 U.S.C. § 103 rejections, the Applicant argues that neither Wegerich nor Eilebrecht teaches a multivariate machine learning model that inputs a plurality of test signals and produces a plurality of estimate signals, wherein the estimate signals predict expected values of test signals, as is now required by independent claims 1, 8 and 15. The Examiner however respectfully disagrees. Wegerich describes “autoassociative” models for monitoring equipment health, wherein the models receive a plurality of input signals, e.g. from a plurality of sensors on the equipment, and generate a plurality of respective estimated signals corresponding to the plurality of input signals (see e.g. paragraphs 0002-0003, 0013 and 0015-0016). Such an autoassociative model can be considered a multivariate machine learning model like claimed.

Further regarding the 35 U.S.C.
§ 103 rejections, the Applicant argues that Wegerich and Eilebrecht fail to teach automatically implementing a mitigation technique that reduces dependency in the machine learning model, as is now required by claims 1, 8 and 15. The Examiner, however, respectfully disagrees. As noted above, Wegerich teaches calculating various performance metrics for a machine learning model (see e.g. paragraphs 0006-0007, 0014 and 0019). These performance metrics particularly include a spillover measurement that is based on the amount the estimated signals output from the machine learning model deviate from normal when a signal with perturbed data is input to the machine learning model (see e.g. paragraphs 0014 and 0028). Such a spillover metric provides an indication of dependency between estimated signals output from the model and the input signal that is perturbed.

Moreover, Wegerich teaches automatically selecting and deploying a model based on the performance metrics, or redesigning the model as necessary (see e.g. paragraphs 0006, 0019 and 0032). That is, Wegerich teaches automatically redesigning the model based at least in part on the spillover metric. As the purpose of redesigning the model would be to improve the one or more performance metrics, e.g. to reduce spillover, redesigning the model can be considered a mitigation technique that reduces spillover (i.e. dependency) in the machine learning model.

Further regarding the 35 U.S.C. § 103 rejection of claim 3, the Applicant argues that Wegerich and Eilebrecht fail to teach a coupling coefficient that “is a ratio of output amplitude of the one estimate signal at a frequency of the oscillating perturbation to the input amplitude of the one test signal at the frequency of the oscillating perturbation” as is now claimed.
In response, the Examiner respectfully submits that this argument has been considered, but is moot in view of the new grounds of rejection presented above, which were necessitated by the Applicant’s amendments to claim 3.

Particularly regarding the 35 U.S.C. § 103 rejection of claim 4, the Applicant argues that Wegerich and Eilebrecht fail to teach: “generating the oscillating perturbation as a time series signal having corresponding indexes with the one test signal; and wherein the applying the oscillating perturbation is applied to the one test signal by adding values of the one test signal and the oscillating perturbation at the corresponding indexes,” as is now claimed. In response, the Examiner respectfully submits that this argument has been considered, but is moot in view of the new grounds of rejection presented above, which were necessitated by the Applicant’s amendments to claim 4.

Particularly regarding the 35 U.S.C. § 103 rejections of claims 5, 12 and 18, the Applicant argues that Wegerich and Eilebrecht fail to teach “automatically selecting an amplitude of the oscillating perturbation that is within one standard deviation of the one test signal,” as is now claimed. In response, the Examiner respectfully submits that this argument has been considered, but is moot in view of the new grounds of rejection presented above, which were necessitated by the Applicant’s amendments to claims 5, 12 and 18.

Particularly regarding the 35 U.S.C. § 103 rejection of claim 17, the Applicant argues that Wegerich and Eilebrecht fail to teach a mitigation technique that is “one or more of (a) increasing a number of training vectors used to train the multivariate machine learning model, (b) filtering input signals to reduce noise, and (c) changing a number of input signals” as is now claimed. The Examiner, however, respectfully disagrees.
As noted above, Wegerich teaches automatically redesigning the machine learning model if necessary based on performance metrics of the model, including spillover (see e.g. paragraphs 0006-0007). As noted above, redesigning the machine learning model is considered a mitigation technique to reduce dependency in the model. Wegerich teaches that models can differ according to which variables are selected to be grouped into the model, or the data used to train the model (see e.g. paragraph 0032). Accordingly, it is apparent that redesigning the machine learning model, i.e. the mitigation technique, can comprise changing the data used to train the model and/or the input variables of the model. The mitigation technique can thus comprise increasing a number of training vectors used to train the multivariate machine learning model (i.e. changing the data used to train the model) or changing the number of input signals (i.e. the input variables of the model).

Further regarding the 35 U.S.C. § 103 rejections, the Applicant argues that the rejections change the respective functions of the prior art, i.e. the Applicant argues that the proposed modification would replace Wegerich’s constant-offset perturbation with an entirely different type of input signal, and would thereby alter the function of Wegerich’s robustness framework enough to render it useless. The Applicant also argues that the combination changes the principle of operation of the references.

In response, the Examiner respectfully submits that replacing the constant-offset perturbation taught by Wegerich with the oscillating perturbation of Eilebrecht would not render Wegerich’s robustness framework useless or change Wegerich’s principle of operation. The purpose of Wegerich’s robustness and spillover measurements is to identify how much the model’s output is affected by a change in the input (see e.g. paragraphs 0023 and 0025).
This purpose would still be satisfied if the constant perturbation taught by Wegerich were replaced by an oscillating perturbation as taught by Eilebrecht. Eilebrecht’s oscillating perturbation still enables the effect of changing an input variable on an output variable to be measured (see e.g. column 3, line 52 – column 4, line 4; and column 4, line 28 – column 5, line 13). Accordingly, the Examiner respectfully maintains that the combination of Wegerich and Eilebrecht does not change the respective functions of the prior art or their principles of operation.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BLAINE T BASOM whose telephone number is (571)272-4044. The examiner can normally be reached Monday-Friday, 9:00 am - 5:30 pm, EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached at (571)270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BTB/
2/7/2026

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

May 23, 2022
Application Filed
Jun 13, 2025
Non-Final Rejection — §101, §103
Sep 17, 2025
Response Filed
Feb 21, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566981
METHOD AND SYSTEM FOR EVENT PREDICTION BASED ON TIME-DOMAIN BOOTSTRAPPED MODELS
2y 5m to grant Granted Mar 03, 2026
Patent 12487727
Sensory Adjustment Mechanism
2y 5m to grant Granted Dec 02, 2025
Patent 12443420
Automatic Image Conversion
2y 5m to grant Granted Oct 14, 2025
Patent 12373898
DISPLAY TOOL
2y 5m to grant Granted Jul 29, 2025
Patent 12271982
GENERATING MODIFIED USER CONTENT THAT INCLUDES ADDITIONAL TEXT CONTENT
2y 5m to grant Granted Apr 08, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
43%
Grant Probability
66%
With Interview (+22.7%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 326 resolved cases by this examiner. Grant probability derived from career allow rate.
