Prosecution Insights
Last updated: April 19, 2026
Application No. 17/395,972

APPARATUS AND METHOD FOR ELECTRONIC DETERMINATION OF SYSTEM DATA INTEGRITY

Non-Final OA: §101, §103
Filed: Aug 06, 2021
Examiner: ALSHAHARI, SADIK AHMED
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Predictiveiq LLC
OA Round: 3 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 3-4
To Grant: 4y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 35% (12 granted / 34 resolved; -19.7% vs TC avg)
Interview Lift: +47.1% (resolved cases with interview)
Avg Prosecution: 4y 5m
Total Applications: 58 across all art units (24 currently pending)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 34 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Status of Claims

Claim(s) 1-20 are pending and are examined herein. Claim(s) 1-20 remain rejected under 35 U.S.C. § 101 and 35 U.S.C. § 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to the rejection under 35 U.S.C. § 101, filed on 12/23/2025, have been fully considered but are not persuasive. Applicant's argument (p. 8 of the Remarks): Applicant argues that the claims are not directed to a mental process because the recited operations cannot practically be performed in the human mind, citing the August 2025 Memo. Examiner's response: The examiner respectfully disagrees with Applicant's arguments. Applicant asserts that the claims are not directed to a mental process because the recited steps cannot practically be performed in the human mind. However, Applicant's arguments are conclusory and do not identify specific claim limitations that are allegedly non-mental or that do not recite an abstract idea. Accordingly, the arguments are not persuasive. As stated in the Office Action, the claimed steps of generating a first value using sensor data, generating a second value using the first value and sensor data, and generating an estimated expected result based on the first and second values to assess sensor validity fall within the judicial exceptions of mental processes and/or mathematical concepts.
Specifically, the "generating" steps involve calculating a first value and a second value based on sensor measurements and mathematical relationships, and calculating an expected output by using both the first value and the second value (e.g., via a weighted combination). These operations encompass a process that can practically be performed in the human mind with the aid of pen and paper. It is noted that the use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation. Furthermore, the recitation of determining/evaluating whether the sensor data is valid or invalid based on the results represents an act of evaluating information that can practically be performed in the human mind. See MPEP § 2106.04(a)(2)(III). The recitation of executing a first and a second model on a computing device does not integrate the judicial exception into a practical application and merely represents computer instructions to apply the abstract idea on a computer. See MPEP § 2106.05(f). In view of the above, the claim is primarily directed to the abstract idea of analyzing sensor data through mathematical calculations and mental steps: calculating first and second values, generating a predicted result using both values, and comparing the result to the actual sensor data to determine validity. The recitation of generic computer components executing computer instructions to perform these steps does not provide a meaningful technological improvement or transform the abstract idea into patent-eligible subject matter. Accordingly, Applicant's arguments are not persuasive, and the rejection under 35 U.S.C. § 101 is maintained. Applicant's arguments with respect to the rejection under 35 U.S.C. § 103, filed on 12/23/2025 (see Remarks, p. 9), have been fully considered but are not persuasive.
Applicant asserts that the cited references fail to disclose or suggest the claimed limitation "a second value based on execution of a second model that operates on the first value and the sensor data, wherein the second model comprises coefficients trained to minimize a difference between generated second values and inputted sensor values characterized by corresponding inputted sensor data." The examiner respectfully disagrees with Applicant's assertions. Applicant's arguments amount to a general allegation that the cited references do not disclose the claimed limitations, without specifically pointing out how the language of the claims patentably distinguishes them from the references. Accordingly, Applicant's arguments fail to comply with 37 CFR 1.111(b) or (c). Claims 1-5, 8-16, 18, and 19 remain rejected as being unpatentable over Arbogast in view of Madasu. Specifically, Figure 5B and the corresponding paragraphs [0032]-[0039] of Arbogast disclose that raw sensor data is passed to a physics-based model (FPM tool 222), which determines predicted or validated sensor values. For example, paragraph [0032] states that the integrated analysis tool passes raw sensor data to both the FPM tool and a neural network to determine predicted sensor values. Arbogast further discloses that the validated values generated by the FPM tool are interleaved with raw sensor data and provided to the neural network tool (226). Accordingly, Arbogast teaches a second model (neural network tool 226) that operates on both the output of a first model (the validated values generated by the FPM tool) and sensor data, as required by the claim. With respect to the limitation that the second model comprises coefficients trained to minimize a difference between generated values and sensor values, while Arbogast may not explicitly describe minimizing such a difference, this feature would have been obvious in view of Madasu.
In particular, Madasu discloses training an autoencoder-based model to minimize reconstruction error between input sensor data and the reconstructed output data (see, e.g., paragraphs [0029] and [0065]). Accordingly, a person of ordinary skill in the art would have been motivated to incorporate the PDNN error-minimization training technique of Madasu into the neural network model of Arbogast to improve prediction accuracy and sensor validation performance. With respect to dependent claims 6-7, 17, and 20, the Examiner refers to the rejection under 35 U.S.C. § 103, which provides a clear mapping of the claim limitations and provides a prima facie case of obviousness. Therefore, Applicant's arguments are not persuasive, and the rejection under 35 U.S.C. § 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity).
If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult MPEP 2106 for more details of the analysis.

Under the Step 1 analysis, Claims 1-11 recite a computing device (representing a machine); Claims 12-17 recite a method (representing a process); and Claims 18-20 recite a non-transitory computer readable medium (representing an article of manufacture). Therefore, each set of claims falls into one of the four statutory categories (i.e., process, machine, article of manufacture, or composition of matter). Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and hence is not patent-eligible subject matter.

Regarding Previously Presented Claim 1, Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG: generate a first value based on ... a first model that operates on the sensor data, wherein the first value characterizes a relationship between inputs to the system and outputs from the system; generate a second value based on ... a second model that operates on the first value and the sensor data, ...
generate a sensor prediction value for the at least one sensor based on the first value and the second value; (The "generating" steps are directed to the abstract idea of mental processes and/or mathematical concepts. Examiner's note: the "generating" steps, as drafted, and under their broadest reasonable interpretation (BRI), cover concepts that can practically be performed in the human mind with the aid of pen and paper. The claimed steps involve calculating/determining a first value based on sensor measurements using a physics function (e.g., mathematical relationships), calculating/determining a second value based on the first value and the sensor measurements, and calculating or determining an expected output by using both the first value and the second value (e.g., via a weighted combination). See Spec., e.g., [0066]-[0067]. This process encompasses mathematical concepts and mental processes. See MPEP § 2106.04(a)(2)(I) & (III).) determine whether the sensor data is valid based on the sensor prediction value and the sensor data; (The "determining" step is an abstract idea of a mental process. The "determining" step, as drafted, and under its broadest reasonable interpretation, represents an act of evaluating/comparing information that can be practically performed in the human mind. Examiner's note: an individual can manually compare the estimated result to the actual sensor data to determine whether the sensor data is valid or invalid. See MPEP § 2106.04(a)(2)(I).) Step 2A Prong 2: Under this prong, we evaluate whether the claim recites additional elements that integrate the abstract idea into a practical application by considering the claim as a whole. The judicial exception is not integrated into a practical application. Additional Elements Analysis: The claim recites the limitation: "A computing device comprising at least one processor and a memory storing instructions, ...
wherein the at least one processor is configured to receive the instructions from the memory and execute the instructions, the execution of the instructions causing the at least one processor to:". (These additional elements represent generic computer components and/or computer instructions to perform the aforementioned abstract ideas. Accordingly, these recitations amount to merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).) Furthermore, the recitation of executing the first and second models using the computing device to perform the abstract idea of generating a first value and a second value merely represents computer instructions configured to perform the abstract idea on a computer. Claim limitations that merely invoke a computer component as a tool to perform the abstract idea cannot integrate the judicial exception into a practical application. Additionally, the recitation of "wherein the second model comprises coefficients trained to minimize a difference between generated second values and inputted sensor values characterized by corresponding inputted sensor data;" merely defines a generic computer function that is recited at a high level of generality. In other words, the claim invokes a computer or other machinery in its ordinary capacity merely as a tool to perform an existing process. The recitation that the model parameters are trained to minimize the error represents a generic machine learning function. The claim fails to recite the technical implementation of the model training. Thus, the claim limitation recites a generic computer component performing a generic computer function at a high level of generality and does not meaningfully limit the claim. See MPEP § 2106.05(f).
The claim recites the limitation "receive sensor data from at least one sensor for a system;" (This amounts to no more than adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with the abstract idea; see MPEP § 2106.05(g). Examiner's note: the claim recites the computing device configured to receive sensor data. This additional element represents mere data gathering that is necessary for use of the recited judicial exception (determining sensor prediction values) and is recited at a high level of generality.) The claim recites the limitation "generate an output signal characterizing whether the sensor data is valid based on the determination." (This limitation merely describes the output of the sensor data analysis (e.g., the sensor is valid or invalid). This output step amounts to insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g). In other words, the claim limitation merely defines a generic computer function (i.e., a data output step) in conjunction with the abstract idea. All uses of the recited judicial exception require such data gathering or data output.) Step 2B: Under this step, the claim must include additional elements that amount to significantly more than the judicial exception. These elements must not be well-understood, routine, or conventional in the relevant field. When viewed individually and as an ordered combination, the claim does not include any such additional elements that are sufficient to amount to significantly more (i.e., an inventive concept). Additional Elements Analysis: As explained above, the recitation of a computing device configured to execute first and second models to perform the abstract idea amounts to no more than invoking a computer as a tool and mere instructions to apply the abstract idea.
As described in MPEP § 2106.05(f), additional elements that invoke computers or other machinery merely as a tool to perform an existing process will generally not amount to significantly more than a judicial exception. Mere instructions to apply an exception cannot provide an inventive concept. Thus, the same analysis utilized under Step 2A Prong 2 is equally true in Step 2B. The recitations of a computing device configured to "receive sensor data from at least one sensor for a system" and "generate an output signal ..." were considered to be insignificant extra-solution activity in Step 2A. The receiving/output steps do not provide a meaningful limitation to the abstract idea, as they merely define data gathering and output steps. Such generic computer functions have been recognized by the courts as well-understood, routine, and conventional functions. For example, the courts have recognized computer functions such as "receiving or transmitting data" and/or "storing and retrieving information" as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP § 2106.05(d). These limitations therefore remain insignificant extra-solution activity even upon reconsideration. Accordingly, the additional limitations, whether considered individually or in combination with the judicial exception, are not sufficient to integrate the judicial exception into a practical application or to amount to significantly more. Therefore, claim 1 does not recite patent-eligible subject matter.
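To make the claim 1 operations concrete, the "generating" and "determining" steps characterized above (combining the two model outputs into a prediction, then comparing the prediction to the measurement) can be sketched as follows. This is a minimal illustration only; the function names, weights, and tolerance are hypothetical and are not taken from the application:

```python
def sensor_prediction(first_value, second_value, w1=0.6, w2=0.4):
    """Combine a first-model (physics) output and a second-model (ML)
    output into one expected sensor reading via a weighted sum.
    The weights w1/w2 are illustrative placeholders."""
    return w1 * first_value + w2 * second_value

def sensor_data_valid(prediction, measurement, tolerance=2.0):
    """Treat the measurement as valid when it lies within a fixed
    tolerance of the prediction (the tolerance is hypothetical)."""
    return abs(prediction - measurement) <= tolerance
```

As the rejection observes, each of these operations is a short arithmetic step or comparison, which is what places them in the mental-process/mathematical-concept groupings.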
Regarding Original Claim 2, Step 2A Prong 1: Claim 2, which incorporates the rejection of claim 1, recites further limitations such as: wherein the first model is a physics-based model that operates on the sensor data and is based on at least one mathematical relationship between inputs to the system and outputs from the system, (That is part of the abstract idea of claim 1. The claim further suggests that the first model is a physics-based model, which involves mathematical and physics functions and/or equations. This recitation is part of the abstract idea of mathematical concepts and mental processes. See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The judicial exception is not integrated into a practical application. The recitation of "the second model is a machine learning model that operates on the first value" amounts to no more than invoking a computer component as a tool to perform the abstract idea: merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). (Examiner's note: this is a high-level recitation of using a machine learning model to perform the abstract idea, i.e., using a computer component or other machinery in its ordinary capacity merely as a tool to perform an existing process.) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As noted previously, the additional element of using a machine learning model merely describes how to generally "apply" the exception on a generic computer. Mere instructions to apply an exception cannot provide an inventive concept. Thus, the same analysis utilized under Step 2A Prong 2 is equally true in Step 2B. Therefore, claim 2 is ineligible.
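The "coefficients trained to minimize a difference" limitation discussed in the claim 1 and claim 2 analyses above can be pictured with a generic least-squares fit. This is a sketch of error-minimization training in the abstract, not the application's or the cited references' actual method; the toy linear model, names, learning rate, and epoch count are all assumptions:

```python
def train_coefficients(inputs, targets, lr=0.05, epochs=2000):
    """Fit coefficients (c0, c1) of a toy second model
        second_value = c0 * first_value + c1 * sensor_reading
    by gradient descent on the mean squared difference between
    generated second values and the target sensor values.
    inputs: list of (first_value, sensor_reading) pairs."""
    c0 = c1 = 0.0
    n = len(inputs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for (x0, x1), t in zip(inputs, targets):
            err = c0 * x0 + c1 * x1 - t  # signed prediction error
            g0 += 2 * err * x0 / n       # gradient of MSE w.r.t. c0
            g1 += 2 * err * x1 / n       # gradient of MSE w.r.t. c1
        c0 -= lr * g0
        c1 -= lr * g1
    return c0, c1
```

The point the rejection makes is that claiming such training only at this level of generality ("coefficients trained to minimize a difference") leaves the technical implementation, such as the update rule above, unrecited.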
Regarding Original Claim 3, Step 2A Prong 1: Claim 3, which incorporates the rejection of claim 2, doesn't recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. wherein the physics-based model comprises a first weight and the machine learning model comprises a second weight, wherein the computing device is configured to train the first weight and the second weight based on the sensor prediction value and the sensor data. (This limitation amounts to merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). Examiner's note: high-level recitation of training a model with previously determined data.) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element of using conventional machine learning functions amounts to no more than applying generic computer components to perform generic computer functions. The training step is recited at a high level of generality, without specific detail on the training process. Thus, the same analysis utilized under Step 2A Prong 2 is equally true in Step 2B. Even when viewed in combination with the judicial exception, these additional elements do not integrate the judicial exception into a practical application or amount to significantly more (i.e., an inventive concept). This generic training recitation does not amount to significantly more and does not provide an inventive concept. Therefore, claim 3 is ineligible. Regarding Previously Presented Claim 4, Step 2A Prong 1: Claim 4, which incorporates the rejection of claim 2, doesn't recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the at least one processor is further configured to execute the instructions to receive model input data, and wherein the physics-based model operates on the model input data. (This limitation is part of the receiving step, which was considered to be insignificant extra-solution activity. This limitation merely defines model input data, which is still part of data gathering. Thus, the additional element of receiving model input data is mere data gathering in conjunction with the abstract idea; see MPEP § 2106.05(g).) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element amounts to insignificant extra-solution activity in conjunction with the abstract idea. Retrieving data from memory and/or receiving or transmitting data over a network are well-known and conventional functions. Therefore, the additional element remains insignificant extra-solution activity even upon reconsideration. See MPEP § 2106.05(d)(i) & (iv). Therefore, claim 4 is ineligible. Regarding Original Claim 5, Step 2A Prong 1: Claim 5, which incorporates the rejection of claim 4, doesn't recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. wherein the model input data comprises timeseries data of prior sensor oil temperature readings of an engine, data identifying the engine's fuel consumption, data identifying the engine's coolant temperature, data identifying the mass flow rate of the oil in the engine, data identifying the mass flow rate of the coolant in the engine, and data identifying the speed of the engine's radiator fan. (This limitation is part of the receiving step, which was considered to be insignificant extra-solution activity. The claim further defines the type of data being used.
This recited additional limitation amounts to generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element amounts to generally linking the use of a judicial exception to a particular technological environment or field of use by defining the type of data involved. Therefore, the additional element does not add a meaningful limitation or provide an inventive concept. Therefore, claim 5 is ineligible. Regarding Previously Presented Claim 6, Step 2A Prong 1: Claim 6, which incorporates the rejection of claim 1, recites further limitations such as: determine a third value based on execution of a second classifier that operates on second sensor data from the second sensor; determine the second value based on execution of the final classifier that operates on the first value and the third value. (That is part of the abstract idea of a mental process and mathematical concept. As noted above, the claimed determining steps are recited at a high level of generality such that they cover a mental process and/or mathematical concept. Therefore, determining a third value and the second value based on sensor measurements and previous results to assess whether the sensor data is valid or invalid encompasses processes that can be performed manually with physical tools (e.g., pen and paper). See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The judicial exception is not integrated into a practical application. The recitation of a computing device configured to execute multiple classifiers to determine prediction values amounts to merely using a computer to execute computer instructions to carry out the mathematical operations.
The additional element is recited so generically that no details whatsoever are provided other than that it is a "classifier" used to determine values. This represents no more than mere instructions to apply the judicial exception on a computer. Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the recited additional element amounts to no more than mere instructions to apply the abstract idea on a computer. Mere instructions to apply an exception cannot provide an inventive concept. Therefore, claim 6 is ineligible. Regarding Previously Presented Claim 7, Step 2A Prong 1: Claim 7, which incorporates the rejection of claim 6, doesn't recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. train the first classifier with first system data corresponding to a first operating regime of the system; train the second classifier with second system data corresponding to a second operating regime of the system; apply the trained first classifier to the first sensor data to generate first output data; apply the trained second classifier to the second sensor data to generate second output data; and train the final classifier with the first output data and the second output data. (The claim introduces the concept of a computing device configured to train and apply multiple classifiers to determine sensor prediction values. These steps amount to merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). Examiner's note: high-level recitation of training a model with previously determined data and high-level application of a previously trained model to make a prediction.)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element of using conventional machine learning functions amounts to no more than applying generic computer components to perform generic computer functions. As described in MPEP § 2106.05(f), additional elements that invoke computers or other machinery merely as a tool to perform an existing process will generally not amount to significantly more than a judicial exception. The training/applying steps are recited at a high level of generality, without specific detail on the technical process. Accordingly, these generic training recitations do not amount to significantly more and do not provide an inventive concept. Therefore, claim 7 is ineligible. Regarding Previously Presented Claim 8, Step 2A Prong 1: Claim 8, which incorporates the rejection of claim 1, recites further limitations such as: determining whether the sensor prediction value is within a confidence interval; determining that the sensor data is valid when the sensor prediction value is within the confidence interval; and determining that the sensor data is invalid when the sensor prediction value is not within the confidence interval. (That is part of the abstract idea of the mental processes and/or mathematical concepts outlined in claim 1. The step of determining whether the prediction value is valid based on a confidence interval is an act of evaluating information that can be practically performed in the human mind. Thus, this step is an abstract idea in the "mental process" grouping. See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The claim does not recite an additional element that integrates the judicial exception into a practical application. Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 8 is ineligible.
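One way to picture the confidence-interval test of claim 8: derive an interval from historical prediction residuals and check whether the new prediction error falls inside it. This sketch assumes a simple Gaussian-style interval; the approach, names, and z-value are illustrative assumptions, not details from the application:

```python
import statistics

def residual_interval(residuals, z=1.96):
    """Build an approximate 95% interval for the prediction residual
    (prediction minus measurement) from historical residuals."""
    mu = statistics.mean(residuals)
    sigma = statistics.stdev(residuals)
    return (mu - z * sigma, mu + z * sigma)

def within_interval(prediction, measurement, interval):
    """Claim-8-style check: the sensor data is treated as valid
    when the residual falls inside the confidence interval."""
    lo, hi = interval
    return lo <= prediction - measurement <= hi
```

As with the other limitations, the comparison itself is a single bounds check, which is why the rejection places it in the mental-process grouping.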
Regarding Previously Presented Claim 9, Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG. determine an error value based on the sensor prediction value and the current sensor data; and determine at least one adjustment to a weight applied by the first model based on the error value. (That is part of the abstract idea of the mental processes and/or mathematical concepts outlined in claim 1. The step of determining the difference between the actual value and the predicted value could be practically performed in the human mind with the aid of pen and paper. Thus, the claimed steps of determining an error and an adjustment encompass mathematical concepts and mental processes. See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The judicial exception is not integrated into a practical application. The claim recites the limitation: "receive current sensor data for the at least one sensor;". (The "receiving" step amounts to no more than adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g). This step represents a generic computer function that is recited at a high level of generality (e.g., data gathering). Such a data gathering step does not meaningfully limit the claim (i.e., all uses of the recited judicial exception require such data gathering or data output).) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element amounts to insignificant extra-solution activity in conjunction with the abstract idea. Retrieving data from memory and/or receiving or transmitting data over a network are well-known and conventional functions. See MPEP § 2106.05(d)(i) & (iv). Therefore, claim 9 is ineligible. Regarding Original Claim 10, Step 2A Prong 1: Claim 10, which incorporates the rejection of claim 1, doesn't recite an abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application. wherein the sensor data comprises an oil temperature of an engine. (This limitation is part of the receiving step, which was considered to be insignificant extra-solution activity. The claim adds to the data gathering step by defining the type of data being used. This additional limitation amounts to generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above, the additional element amounts to generally linking the use of a judicial exception to a particular technological environment or field of use by defining the type of data involved. Therefore, the additional element does not add a meaningful limitation or provide an inventive concept. Therefore, claim 10 is ineligible. Regarding Claim 11, Step 2A Prong 1: Claim 11, which incorporates the rejection of claim 1, recites further limitations such as: wherein the sensor prediction value is a predicted temperature and the sensor data is an actual temperature. (That is part of the abstract idea of the mental processes and/or mathematical concepts outlined in claim 1. The claim introduces a temperature prediction value, and determining the validity of the temperature reading by comparing it to the determined values can be practically performed in the human mind with the aid of pen and paper. See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The claim does not recite an additional element that integrates the judicial exception into a practical application. Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 11 is ineligible.
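The error-value and weight-adjustment steps of claim 9, analyzed above, amount to a simple feedback update. A hedged sketch follows; the proportional update rule and the learning rate are assumptions for illustration, since the claim as characterized recites no specific adjustment rule:

```python
def adjust_weight(weight, prediction, current_reading, lr=0.1):
    """Determine the error between the sensor prediction value and
    the current sensor data, then nudge a first-model weight against
    that error. The proportional rule and lr are illustrative only."""
    error = prediction - current_reading
    return weight - lr * error
```

The brevity of such an update is consistent with the rejection's view that the error-and-adjust steps are pen-and-paper arithmetic.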
Regarding Previously Presented Claim 12, the claim recites similar limitations as corresponding claim 1. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 1, as described above, is equally applicable to claim 12. The only difference is that claim 1 is drawn to a system, and claim 12 is drawn to a method. Therefore, claim 12 is ineligible. Regarding Original Claim 13, the claim recites similar limitations as corresponding claim 2. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 2, as described above, is equally applicable to claim 13. Therefore, claim 13 is ineligible. Regarding Original Claim 14, the claim recites similar limitations as corresponding claim 3. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 3, as described above, is equally applicable to claim 14. Therefore, claim 14 is ineligible. Regarding Original Claim 15, the claim recites similar limitations as corresponding claim 4. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 4, as described above, is equally applicable to claim 15. Therefore, claim 15 is ineligible. Regarding Previously Presented Claim 16, the claim recites similar limitations as corresponding claim 5. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 5, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible. Regarding Original Claim 17, the claim recites similar limitations as corresponding claim 6. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 6, as described above, is equally applicable to claim 17. Therefore, claim 17 is ineligible.
Regarding Previously Presented Claim 18, the claim recites similar limitations as corresponding claim 1. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 1, as described above, is equally applicable to claim 18. The only difference is that claim 1 is drawn to a system, and claim 18 is drawn to a non-transitory computer readable medium. Therefore, claim 18 is ineligible. Regarding Original Claim 19, the claim recites similar limitations as corresponding claim 2. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 2, as described above, is equally applicable to claim 19. Therefore, claim 19 is ineligible. Regarding Previously Presented Claim 20, the claim recites similar limitations as corresponding claim 6. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 6, as described above, is equally applicable to claim 20. Therefore, claim 20 is ineligible. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3.
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-5, 8-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Arbogast et al. (Pub. No.: US 20120150334 A1) in view of Madasu et al. (Pub. No.: US 20210201160 A1). Regarding Previously Presented Claim 1, Arbogast discloses the following: A computing device comprising at least one processor and a memory storing instructions, wherein the at least one processor is configured to receive instructions from the memory and execute the instructions, the execution of the instructions causing the at least one processor to: (Arbogast, [0010] “FIG.
2 illustrates a computing system used to provide an integrated fault detection and analysis tool, …” [0007] “a system having a processor and a memory storing a monitoring application, which, when executed by the processor, performs an operation for sensor fault detection and analysis.”) receive sensor data from at least one sensor for a system, the sensor data characterizing one or more values for the at least one sensor; (Arbogast, [0005] “The method may generally include receiving a sensor data value for each of one or more sensors, each monitoring an aspect of an industrial process, passing at least some of the sensor data values to at least a physical modeling (PM) tool and a neural network modeling tool.” [0032] “FIG. 3 illustrates a method 300 for an integrated fault detection and analysis tool application to identify sensor faults, according to one embodiment of the invention. As shown, the method 300 begins at step 305 where the integrated analysis tool 224 receives raw sensor data values for a current time period (shown in FIG. 2 as raw sensor data 232). Once received, at step 310, the integrated analysis tool 224 passes the raw sensor data values to both the FPM tool 222 and the neural network tool 226.” [0024] “The sensor status database 132 provides a computing system configured with a database application itself configured to receive and store a current value for each sensor in the production facility 101 and the pipeline 105. As new sensor data is received from the production facility 101 and the pipeline 105, data from the sensor status database 132 may be archived in the historical status database 134. 
In one embodiment, the monitoring server 140 may be configured to monitor the sensor values received from the production facility 101 and the pipeline 105 and identify when a sensor fault has occurred.”) generate a first value based on execution of a first model that operates on the sensor data and wherein the first value characterizes a relationship between inputs to the system and outputs from the system; (Arbogast, Fig. 5B, [0027] “FPM tool 222 provides an application configured to simulate the operations of a specific industrial facility. As noted above, the FPM may model the operations of a set of ASUs. In such a case, the FPM use a set of mass and energy balance equations to model the flows of the specific ASU network at a production facility. Thus, for a given set of inputs, the FPM tool 222 can determine what the value for a given sensor should be. …” [0038] “As shown, the raw sensor data 232 is first passed to the FPM tool 222, which, in response determines a predicted value 552 for each sensor within the scope of the FPM tool 222….” Further described in [0032]. [0027] “If the raw sensor data 232 for a given senor is different than the value from the FPM modeling tool 222, monitoring system 140 may determine whether an actual malfunction (or other configuration problem) has occurred within an ASU or whether the sensor has experienced a sensor fault. That is, the monitoring system 140 may determine whether the ASU is operating normally and the sensor has malfunctioned or whether the ASU is operating abnormally and the sensor is reporting an accurate (but problematic) sensor value that may result in a plant trip.” Further see [0003].) [Examiner’s Note: the FPM tool is a first-principles model based on physics and mathematical principles and is interpreted as the first model; the predicted output value of the FPM model corresponds to the “first value”.]
generate a second value based on execution of a second model that operates on the first value and the sensor data; (Arbogast, Fig. 5B, [0029] “Similarly, the neural network tool 226 may be configured to predict the value of one sensor, based on the observed values of other sensors. However, the neural network tool 226 operates by learning through observation rather than being built up from mathematical models….” [0032] “Once received, at step 310, the integrated analysis tool 224 passes the raw sensor data values to both the FPM tool 222 and the neural network tool 226. In response, at step 315, the FPM tool 222 and the neural network tool 226 determine a predicted value for each sensor and a corresponding fault state….” [0039] “….the results of from the FPM tool 222 are passed to the data integration tool 550. The data integration tool 550 interleaves the validated values from the FPM tool 222 and raw sensor data 232 and passes them to the neural network tool 226. Specifically, the in-scope validated values and any out-of-scope raw values (i.e., raw sensor data for sensors not modeled by the FPM tool 222) are combined and provided to the neural network tool 226. …., Once received, the neural network tool 226 determines a fault state and a predicted value for each sensor and passes output 560 to the integrated analysis tool 224.”) [Examiner’s Note: The output of the neural network is interpreted as the “second value,” where the neural network (i.e., the second model) operates on the validated data from the FPM model (i.e., the first value) and the raw sensor data.] generate a sensor prediction value for the at least one sensor based on the first value and the second value; (Arbogast, Fig.
5B, [0030] “…the integrated analysis tool 224 provides a software application configured to receive raw sensor data 232 and the predicted values from FPM tool 222 and the neural network tool 225.” [0037] “....the FPM tool 222 and neural network tool 226 generate a predicted value for each individual sensor, based on the values reported by other sensors. As shown, the output 505 provides a fault state of true/false and a predicted value for each sensor. …, Once received, the integrated analysis tool 224 determines whether to replace any of the raw sensor data 232 with the predicted values or to tag some data values as being validated by the FPM tool 222 and/or neural network tool 226.” [0039] “… As described above, the integrated analysis tool 224 may then provide feedback including the fault states and validated values 565 to the plant, where controllers can operate using the validated raw sensor data or the predicted/estimated values made by the FPM tool 222 and or the neural network tool 226.”) [Examiner’s Note: the prediction output from the integrated analysis tool, which integrates the predicted values generated by the FPM and the neural network, is interpreted as the “sensor prediction value”.] determine whether the sensor data is valid based on the sensor prediction value. (Arbogast, Fig. 5B, [0030] “…. In the event of a discrepancy between the predicted values and the raw sensor data, the integrated analysis tool 224 may determine whether a sensor is believed to be experiencing a sensor fault.
In such a case, the integrated analysis tool 224 may be further configured to replace the raw sensor data 232 for a believed-to-be-malfunctioning sensor with the value from the FPM tool 222 (or neural network 226).” [0037] “…the integrated analysis tool 224 determines whether to replace any of the raw sensor data 232 with the predicted values or to tag some data values as being validated by the FPM tool 222 and/or neural network tool 226.”) [Examiner’s Note: the integrated analysis tool determines whether the sensor data is at fault, needs to be replaced, or is tagged as validated, which would read on assessing sensor data validity based on the prediction values.] and generate an output signal characterizing whether the sensor data is valid based on the determination. (Arbogast, [0039] “As described above, the integrated analysis tool 224 may then provide feedback including the fault states and validated values 565 to the plant, where controllers can operate using the validated raw sensor data or the predicted/estimated values made by the FPM tool 222 and or the neural network tool 226.”) As outlined above, Arbogast defines the second model as a neural network model that is trained and retrained for sensor data fault detection and analysis. Arbogast does not appear to explicitly suggest that the neural network (i.e., second model) comprises coefficients trained to minimize a difference between generated second values and inputted sensor values characterized by corresponding inputted sensor data. However, it would have been obvious in view of Madasu. Hereinafter, Arbogast in view of Madasu teaches: wherein the second model comprises coefficients trained to minimize a difference between generated second values and inputted sensor values characterized by corresponding inputted sensor data; (Madasu, [0029] “The trained autoencoder 126 can detect the outliers 130 by comparing the sensor data 121 with the reconstructed reduced-noise sensor data 124.
For instance, the trained autoencoder 126 may compare each data point of the reconstructed reduced-noise sensor data 124 which it generates with the corresponding data point of the sensor data 121 to determine if a difference between the data points exceeds a reconstruction difference threshold. If the difference exceeds the reconstruction difference threshold, the trained autoencoder 126 indicates that the data point of the sensor data 121 and/or the reconstructed reduced-noise sensor data 124 is as an outlier.” [0065] “FIG. 9 depicts an example plot comparing actual sensor data with reconstructed reduced-noise sensor data as determined by an autoencoder which was trained with reduced-noise sensor data output from a PDNN during training.”) Arbogast and Madasu are related art because they are from the same field of endeavor and their disclosures generally relate to sensor data outlier detection and analysis. Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the physics-influenced deep neural network for sensor outlier detection as taught by Madasu. One would have been motivated to make such a combination in order to incorporate a physics-based model into a deep neural network for quantitatively differentiating between various physical processes and distinguishing relevant physical phenomena from irrelevant processes and system noise. Doing so would improve the predictive accuracy of the PDNN for generating a reduced-noise representation of sensor data (Madasu [0017]).
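For illustration only (this sketch is not part of the record, and all function names, weights, and formulas below are hypothetical stand-ins), the two-model arrangement described in the cited Arbogast passages, in which a physics-based model produces a first predicted value, a learned model operates on that value together with the raw sensor data, and the integrated prediction is compared against the actual reading, can be sketched as:

```python
def physics_model(sensor_data):
    # First-model stand-in for the FPM's mass/energy-balance prediction:
    # here, simply the mean of the inputs (hypothetical simplification).
    return sum(sensor_data) / len(sensor_data)

def learned_model(first_value, sensor_data):
    # Second-model stand-in: operates on the first value combined with the
    # raw sensor data, as the cited neural network tool does.
    return 0.5 * first_value + 0.5 * (sum(sensor_data) / len(sensor_data))

def predict_and_validate(sensor_data, reading, tolerance=1.0):
    # Integrated-analysis stand-in: combine both predictions, then judge the
    # actual reading against the integrated prediction within a tolerance.
    first_value = physics_model(sensor_data)
    second_value = learned_model(first_value, sensor_data)
    prediction = (first_value + second_value) / 2
    is_valid = abs(reading - prediction) <= tolerance
    return prediction, is_valid
```

The stand-in arithmetic is arbitrary; only the data flow (first value, then second value, then an integrated validity determination) mirrors the cited disclosure.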
Regarding Original Claim 2, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: wherein the first model is a physics-based model that operates on the sensor data and is based on at least one mathematical relationship between inputs to the system and outputs from the system, and the second model is a machine learning model that operates on the first value. (Arbogast, Fig. 5B, [0015] “…. a neural network based fault detection and analysis tool may be integrated with a physical (or semi-physical) model (PM) of a production plant. For example, a first principles modeling (FPM) tool may be used to provide a theoretical model of plant behavior. Similarly, the PM may provide an empirical model based upon a physical understanding that plant operational variables are related, but use an empirical model based upon data gathered through observation. For convenience, embodiments of the invention are described using a (FPM) based fault detection and analysis tool. A FPM based tool provides an a priori mathematical model of plant processes, e.g., a set of mass/energy balance equations modeling the systems at a specific plant.” [0016] “…Similarly the neural network based tool takes all of the sensor values as inputs and predicts each of their values. However, any validated sensor signals from the first principles model based tool would be in place of the raw sensor signals. Thus, the neural network would make its predictions using the available validated signals combined with raw sensor data.”) [Examiner’s Note: the FPM tool is a physics-based model, and the neural network is a machine learning model that operates at least on the output of the FPM (the validated sensor values, e.g., the observed sensor values). Note that the FPM tool operates on sensor data by using mathematical models of plant processes.
Specifically, it provides a theoretical model of plant behavior, which is based on “first principles” (e.g., mass/energy balance equations).] Regarding Original Claim 3, Arbogast in view of Madasu teaches the elements of claim 2 as outlined above, and further teaches: wherein the physics-based model comprises a first weight and the machine learning model comprises a second weight, wherein the computing device is configured to train the first weight and the second weight based on the sensor prediction value and the sensor data. (Madasu, [0021]-[0025] “The PDNN 110 evaluates reduced-noise sensor data 109 output during training with a cost function which incorporates a physics-based model. A physics-based model can be incorporated in the cost function used to evaluate output of the PDNN 110 because a physics-based model is similar to a regularization component which may conventionally be used to evaluate output of a deep neural network during training Generally, the physics-based cost function can be represented as shown in Equation 1, where C represents a cost function value, P represents a solution to a physics-based model, and D represents a data-based value.” [0025] “The physics-based network trainer 120 can train the PDNN 110 iteratively based on evaluation of the reduced-noise sensor data 109 by using the physics-based cost function represented in Equation 2. For instance, the PDNN 110 may establish a cost value threshold. If the value of the physics-based cost function calculated based on one or more sensor data 108 inputs and reduced-noise sensor data 109 outputs exceeds the threshold, the weights associated with neuron connections of the PDNN 110 can be adjusted before re-computing the reduced-noise sensor data 109 outputs. Training of the PDNN 110 can advance to the next set of one or more sensor data 108 inputs based on the value of the physics-based cost function satisfying the cost value threshold.” [0030] “As depicted in FIG.
1, the network trainer 120 trains the PDNN 110 to denoise sensor data 108, where the PDNN 110 leverages a physics-based model and a physics-based cost function which incorporates the physics-based model. During training, the PDNN 110 generates and outputs reduced-noise sensor data 109. Training the PDNN 110 with the sensor data 108 generates a trained PDNN 216 which is deployed to a PDNN production system 212.”) [Examiner’s Note: Madasu describes the process of integrating a physics-based model and a deep neural network, where the cost function used for training involves both components. The integrated model (PDNN) is trained using sensor data and then optimized using the cost function. The parameters of the PDNN are adjusted iteratively to minimize the noise in the sensor data; further see [0036]-[0037]. The disclosed statements are interpreted as “training the first weight and the second weight based on the sensor prediction value and the sensor data.”] Regarding Previously Presented Claim 4, Arbogast in view of Madasu teaches the elements of claim 2 as outlined above, and further teaches: wherein the at least one processor is further configured to execute instructions to receive model input data, and wherein the physics-based model operates on the model input data. (Arbogast, [0024] “As new sensor data is received from the production facility 101 and the pipeline 105, data from the sensor status database 132 may be archived in the historical status database 134. …, the monitoring server 140 may be configured to monitor the sensor values received from the production facility 101 and the pipeline 105 and identify when a sensor fault has occurred.” [0027] “FPM tool 222 provides an application configured to simulate the operations of a specific industrial facility. …, Thus, for a given set of inputs, the FPM tool 222 can determine what the value for a given sensor should be.
If the raw sensor data 232 for a given senor is different than the value from the FPM modeling tool 222, monitoring system 140 may determine whether an actual malfunction (or other configuration problem) has occurred within an ASU or whether the sensor has experienced a sensor fault.”) [Examiner’s Note: the sensor measurement data input to the FPM model would read on the “model input data”.] Regarding Original Claim 5, Arbogast in view of Madasu teaches the elements of claim 4 as outlined above, and further teaches: wherein the model input data comprises timeseries data of prior sensor oil temperature readings of an engine, data identifying the engine's fuel consumption, data identifying the engine's coolant temperature, data identifying the mass flow rate of the oil in the engine, data identifying the mass flow rate of the coolant in the engine, and data identifying the speed of the engine's radiator fan. (Arbogast, [0021]-[0022] “…. The sensors 103 may be configured to monitor a variety of aspects of the ASUs 102. For example, sensors measure input/output product flow rates, temperatures and pressures, compressor motor temperatures, energy consumption. In this example, product generated by the ASUs may be transported over pipeline network 105. The pipeline network 105 includes three compressor stations 110, 115, and 120. Each of the compressor stations 110, 115, and 120 may include one or more compressors used to maintain the gas pressure present in pipeline 105. Additionally, compressor stations 110, 115, and 120 may include sensor equipment used to monitor aspects of the operational state of pipeline 105. For a pressurized gas pipeline, a wide variety of compressor parameters may be monitored including, for example, inlet gas pressure, outlet gas pressure, gas temperature, cooling liquid temperature, flow rates, and power consumption, among others. 
Of course, for other applications of the invention, the sensors or monitoring equipment may be selected to suit the needs of a particular case. The monitoring may be dynamic (i.e., “real-time”), or periodic where an operational parameter of the pipeline is sampled (or polled) at periodic intervals. From the pipeline 105, product may be delivered to customer sites 122 or stored in storage tanks 124.”) [Examiner’s Note: the sensor measurement data would read on the time-series data.] Regarding Previously Presented Claim 8, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: wherein determining whether the sensor data is valid comprises: determining whether the sensor prediction value is within a confidence interval; determining that the sensor data is valid when the sensor prediction value is within the confidence interval; and determining that the sensor data is invalid when the sensor prediction value is not within the confidence interval. (Arbogast, [0016] “The FPM based tool then generates a predicted value for each modeled sensor. If the sensor value matches the modeled value (within a user specified margin), the sensor value is considered to be validated, i.e., that the sensor has reported an accurate value. Similarly the neural network based tool takes all of the sensor values as inputs and predicts each of their values.” [0028] “If the raw sensor data 232 for the second and third sensors is inconsistent with the predictive values, then the integrated analysis tool 224 may determine that the first sensor has experienced a sensor fault.” [0031] “assume the raw sensor data agrees with the value from the FPM tool 222 (within user specified tolerances). In such a case the raw sensor data is referred to as “validated” data, as the FPM tool has validated that the sensor has reported an accurate value.” [0036] “FPM tool 222 and/or the neural network tool 224 (by more than a user-specified margin).
In response, the integrated analysis tool 224 replaces the sensor data for sensor 450 with the predicted value. If the predicted value is within an acceptable operating range, then the controller systems operate using the predicted/estimated value in place of the faulty signal. However, if the predicted/estimated values indicate that the device monitored by sensor 450 is operating outside of the acceptable operating range, then a plant trip occurs at 470.”) [Examiner’s Note: the user-specified tolerance and acceptable operating range would serve as the confidence interval.] Regarding Previously Presented Claim 9, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: receive current sensor data for the at least one sensor; (Arbogast, [0024] “… a computing system configured with a database application itself configured to receive and store a current value for each sensor in the production facility 101 and the pipeline 105.”) determine an error value based on the sensor prediction value and the current sensor data; and determine at least one adjustment to a weight applied by the first model based on the error value. (Madasu, [0032] “The trained autoencoder 226 can detect the outliers 230 by comparing the sensor data 214 with the reconstructed reduced-noise sensor data 224. For instance, the trained autoencoder 226 may compare each data point of the reconstructed reduced-noise sensor data 224 which it generates with a corresponding data point from the sensor data 214 to determine if a difference between the data points exceeds a reconstruction difference threshold.
If the difference exceeds the reconstruction difference threshold, the trained autoencoder 226 indicates that the data point has been identified as an outlier.” [0037] “For example, the reconstruction error threshold may indicate a reconstruction error which, when exceeded, results in adjusting the weights of the autoencoder neuron connections before redetermining the reconstructed reduced-noise sensor data, where the same input may be used after the weights are adjusted.” Further described in [0025]-[0026] & [0029].) Regarding Original Claim 10, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: wherein the sensor data comprises an oil temperature of an engine. (Arbogast, [0021]-[0022] “The sensors 103 may be configured to monitor a variety of aspects of the ASUs 102. For example, sensors measure input/output product flow rates, temperatures and pressures, compressor motor temperatures, energy consumption. …., For a pressurized gas pipeline, a wide variety of compressor parameters may be monitored including, for example, inlet gas pressure, outlet gas pressure, gas temperature, cooling liquid temperature, flow rates, and power consumption, among others.”) [Examiner’s Note: the monitored compressor motor temperatures, cooling liquid temperature, and other parameters would include the oil temperature of an engine.] Regarding Original Claim 11, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: wherein the sensor prediction value is a predicted temperature and the sensor data is an actual temperature. (Arbogast, [0005] “The method may generally include receiving a sensor data value for each of one or more sensors, each monitoring an aspect of an industrial process, passing at least some of the sensor data values to at least a physical modeling (PM) tool and a neural network modeling tool.
The PM tool and the neural network tool are each configured to determine a predicted value for each passed sensor data value.” [0021] “The sensors 103 may be configured to monitor a variety of aspects of the ASUs 102. For example, sensors measure input/output product flow rates, temperatures and pressures, compressor motor temperatures, energy consumption.” [0028] “… the FPM modeling tool can predict what pressure and/or temperature would need to be present at the second and third sensor point before the value at the first sensor would be observed. …., Similarly, the neural network tool 226 may be configured to predict the value of one sensor, based on the observed values of other sensors.”) [Examiner’s Note: the predicted values generated by the FPM and the neural network correspond to a predicted temperature, and the raw sensor data, including the actual sensor measurement, corresponds to the actual temperature.] Regarding Previously Presented Claim 12, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Claim 1 is directed to a system, and claim 12 is directed to: A method…. (Arbogast also discloses “a computer-implemented method for sensor fault detection and analysis.”, see [Abstract] and [0005].) Regarding Original Claim 13, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale. Regarding Original Claim 14, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale. Regarding Original Claim 15, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale. Regarding Previously Presented Claim 16, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
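As an illustrative sketch (not part of the record; the threshold check and the single-weight update below are hypothetical simplifications, with all names invented), the reconstruction-difference comparison and threshold-triggered weight adjustment described in the Madasu passages cited above can be rendered as:

```python
def flag_outliers(raw, reconstructed, threshold):
    # Flag each data point whose reconstruction difference exceeds the
    # threshold, per the cited reconstruction-difference check.
    return [abs(r - x) > threshold for r, x in zip(raw, reconstructed)]

def adjust_weight(weight, error, learning_rate=0.1):
    # Hypothetical single-weight update driven by the signed error value;
    # stands in for adjusting the weights of neuron connections when the
    # reconstruction error exceeds its threshold.
    return weight - learning_rate * error
```

In the cited disclosure the adjustment applies to all neuron-connection weights via a physics-based cost function; the one-weight update here only illustrates the error-driven direction of the change.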
Regarding Previously Presented Claim 18, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Claim 1 is directed to a system, and claim 18 is directed to: A non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, …. (Arbogast also discloses “A computer-readable storage medium containing a program, which, when executed on a processor, performs an operation for sensor fault detection and analysis.” See [0006]). Regarding Original Claim 19, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale. Claim(s) 6, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Arbogast in view of Madasu as outlined above, and further in view of Kapoor et al. (Pub. No.: US 20180005118 A1). Regarding Previously Presented Claim 6, Arbogast in view of Madasu teaches the elements of claim 1 as outlined above, and further teaches: As noted above, Arbogast describes a computing system that uses an FPM tool (a physics-based model) and a neural network to predict sensor data obtained from one or more sensors. The sensor data are obtained from a first sensor and a second sensor (see [0028] & [0034]). Thus, Arbogast teaches “a first sensor and a second sensor, the first model is a first classifier that operates on first sensor data from the first sensor, and the second model is a final classifier”. Arbogast in view of Madasu does not appear to explicitly teach: determine a third value based on execution of a second classifier that operates on second sensor data from the second sensor; determine the second value based on execution of the final classifier that operates on the first value and the third value.
However, Kapoor, in combination with Arbogast and Madasu, teaches the limitations: wherein the at least one sensor comprises a first sensor and a second sensor, the first model is a first classifier that operates on first sensor data from the first sensor, and the second model is a final classifier, the at least one processor further configured to execute the instructions to: determine a third value based on execution of a second classifier that operates on second sensor data from the second sensor; and determine the second value based on execution of the final classifier that operates on the first value and the third value.

(Kapoor, [0048]-[0049] “In an exemplary scenario, sensor data acquired by one of the sensors 302-304 can be used by one of the classifiers 306-308 to generate a prediction concerning a phenomenon. Following this example, sensor data acquired by a first sensor (e.g., the sensor 1 302) can be used by a first classifier (e.g., the classifier 1 306) to generate a prediction concerning a first phenomenon. Further following this example, sensor data acquired by a second sensor (e.g., the sensor M 304) can be used by a second classifier (e.g., the classifier N 308) to generate a prediction concerning a second phenomenon, where the first and second phenomena differ. … Pursuant to this example, sensor data acquired by a first sensor (e.g., the sensor 1 302) can be used by a first classifier (e.g., the classifier 1 306) to generate a first prediction concerning a phenomenon, and sensor data acquired by a second sensor (e.g., the sensor M 304) can be used by a second classifier (e.g., the classifier N 308) to generate a second prediction concerning the same phenomenon.
As an illustration, an ultrasound sensor and a camera can both be utilized to capture sensor data concerning a location of a wall in an environment (e.g., a relative location of the wall from a current location of the cyber-physical system), with the sensor data from the ultrasound sensor being used by a first classifier and the sensor data from the camera being used by a second classifier. Accordingly, predictions from the first classifier and the second classifier can be combined to get a fused prediction concerning the location of the wall (e.g., the ultrasound sensor or the camera may be more prone to error at different points).”)

[Examiner’s Note: the reference describes how multiple classifiers (306-308) use sensor data (302-304) to generate predictions, which corresponds to the first and second classifiers operating on the first and second sensor data to determine the first and third values. The control system combines outputs from multiple classifiers to generate a fused prediction (i.e., the final classifier aggregates the first and third values to determine the second value). Thus, the combined output is interpreted as the “second value”.]

Accordingly, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Arbogast and Madasu before them, to incorporate the fused prediction method using multiple classifiers as taught by Kapoor. One would have been motivated to make such a combination in order to assess the cyber-physical system and the environment in which the cyber-physical system operates to generate the control inputs, while mitigating detrimental impact to safety resulting from the uncertainty in the prediction(s). Doing so would enhance the safety of the cyber-physical systems (Kapoor [0031]).
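The fused-prediction arrangement the examiner maps to the claim (a first classifier produces the "first value", a second classifier produces the "third value", and a final classifier combines them into the "second value") can be sketched as below. The threshold classifiers and vote weights are hypothetical placeholders, not Kapoor's disclosed models.

```python
# Illustrative sketch (not Kapoor's implementation): two per-sensor
# classifiers whose outputs are fused by a final classifier.

def first_classifier(first_sensor_data):
    """Produce the 'first value' from the first sensor's data (toy threshold)."""
    return 1.0 if first_sensor_data > 0.5 else 0.0

def second_classifier(second_sensor_data):
    """Produce the 'third value' from the second sensor's data (toy threshold)."""
    return 1.0 if second_sensor_data > 0.5 else 0.0

def final_classifier(first_value, third_value, w1=0.6, w2=0.4):
    """Fuse the two predictions into the 'second value' via a weighted vote."""
    return w1 * first_value + w2 * third_value

first_value = first_classifier(0.9)   # e.g., ultrasound-derived prediction
third_value = second_classifier(0.2)  # e.g., camera-derived prediction
second_value = final_classifier(first_value, third_value)
print(second_value)  # 0.6
```

Weighting lets the fusion stage discount whichever sensor is more error-prone in a given situation, which is the rationale Kapoor gives for combining the predictions.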
Regarding Original Claim 17, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.

Regarding Previously Presented Claim 20, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Arbogast, Madasu and Kapoor as outlined above, and further in view of Weichenberger et al. (Pub. No.: US 20210325864 A1).

Regarding Amended Claim 7, the combination of Arbogast, Madasu and Kapoor teaches the elements of claim 6 as outlined above. As explained above, Kapoor teaches applying one or more sensor data (first and second sensor data) to multiple trained classifiers (first and second) to generate prediction outputs (first and second output data). Furthermore, Kapoor teaches applying the first and second outputs to a control system (final classifier) to generate a combined result (fused prediction).

Kapoor teaches: train the first classifier and second classifier with first system data … (Kapoor, [0043] “The classifiers 306-308 can be inferred from observed training data. Formally, given a set of training data points … etc.” [0045] “The system 300 further includes the control system 120 that generates the control inputs 106 for the cyber-physical system based on a combination of outputs from the classifiers 306-308. The synthesis component 126 of the control system 120 can synthesize the control inputs 106 that optimize the cost function 122 and satisfy the constraints 124, where the constraints 124 are based on the outputs from the classifiers 306-308.
The constraints 124 can include Boolean operators and/or temporal operators that specify how the outputs from the classifiers 306-308 are combined such that safety of the cyber-physical system can be maintained as a result of the synthesized control inputs 106.” [0110] “The computing device 900 additionally includes a data store 908 that is accessible by the processor 902 by way of the system bus 906. The data store 908 may include executable instructions, predictions, cost functions, constraints, states, a probabilistic framework, sensor data, control inputs, training data for classifiers, etc.”)

apply the trained first and second classifiers to the first and second sensor data to generate first and second output data (Kapoor, [0047] “Following this example, sensor data acquired by a first sensor (e.g., the sensor 1 302) can be used by a first classifier (e.g., the classifier 1 306) to generate a prediction concerning a first phenomenon. Further following this example, sensor data acquired by a second sensor (e.g., the sensor M 304) can be used by a second classifier (e.g., the classifier N 308) to generate a prediction concerning a second phenomenon, where the first and second phenomena differ.”)

train the final classifier with the first output data and the second output data. (Kapoor, [0048] “… predictions from the first classifier and the second classifier can be combined to get a fused prediction ….” [0045] “The system 300 further includes the control system 120 that generates the control inputs 106 for the cyber-physical system based on a combination of outputs from the classifiers 306-308. The synthesis component 126 of the control system 120 can synthesize the control inputs 106 that optimize the cost function 122 and satisfy the constraints 124, where the constraints 124 are based on the outputs from the classifiers 306-308.
The constraints 124 can include Boolean operators and/or temporal operators that specify how the outputs from the classifiers 306-308 are combined such that safety of the cyber-physical system can be maintained as a result of the synthesized control inputs 106.”)

The combination of Arbogast, Madasu and Kapoor does not appear to explicitly teach: train the first classifier with first system data corresponding to a first operating regime of the system; train the second classifier with second system data corresponding to a second operating regime of the system.

However, Weichenberger, in combination with Arbogast, Madasu and Kapoor, teaches the limitations: train the first classifier with first system data corresponding to a first operating regime of the system; train the second classifier with second system data corresponding to a second operating regime of the system; (Weichenberger, [0031] “In some implementations, the control system may train a first prediction model to identify a particular hazardous condition when operating under a particular operating state and a second prediction model, that is different from the first prediction model, to identify the hazardous condition when operating under a second operating state that is different from the first operating state. Additionally, or alternatively, the control system may specifically configure the one or more prediction models to monitor for a particular hazardous condition.” Further described in [0030]-[0031].)

Accordingly, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Arbogast, Madasu and Kapoor before them, to incorporate the control system that trains one or more prediction models for predicting one or more corresponding hazardous conditions as taught by Weichenberger.
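The per-regime training Weichenberger is cited for (a separate model fit on data from each operating state, with prediction selecting the model matching the current regime) can be sketched as follows. The mean-based "model", names, numbers, and tolerance are illustrative assumptions only.

```python
# Illustrative sketch (not Weichenberger's implementation): one toy model
# per operating regime, selected at prediction time by the current regime.
from statistics import mean

def train_regime_models(labelled_data):
    """Fit one (toy) model per operating regime: here, the mean reading."""
    return {regime: mean(readings) for regime, readings in labelled_data.items()}

def is_anomalous(models, regime, reading, tolerance=5.0):
    """Judge a reading against the model trained for the current regime."""
    return abs(reading - models[regime]) > tolerance

models = train_regime_models({
    "startup": [40.0, 42.0, 44.0],       # first operating regime's training data
    "steady_state": [90.0, 91.0, 89.0],  # second operating regime's training data
})
print(is_anomalous(models, "steady_state", 97.0))  # deviates by 7 -> True
```

A reading that is normal during startup would be flagged at steady state, which is the point of training distinct models per operating state.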
One would have been motivated to make such a combination in order to utilize a real-time optimization engine that enables anomaly forecasting for detection of certain hazardous conditions associated with an operation of the distillation column. Doing so would prevent or reduce the likelihood that the hazardous condition occurs within a threshold period of time (Weichenberger [0032]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
(Pub. No.: US 10753192 B2) – David Milton Eslinger, “State estimation and run life prediction for pumping system.”
(Pub. No.: US 20210182693 A1) – Jonathan L. Herlocker, “Method for physical system anomaly detection.”
(Pub. No.: US 20190347488 A1) – Fabian Timm, “Determining a state of the surrounding area of a vehicle, using linked classifiers.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SADIK ALSHAHARI, whose telephone number is (703) 756-4749. The examiner can normally be reached Monday through Friday, 9 a.m. to 6 p.m. ET.

Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.A.A./
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Aug 06, 2021
Application Filed
Oct 17, 2024
Non-Final Rejection — §101, §103
Apr 23, 2025
Response Filed
Jun 17, 2025
Final Rejection — §101, §103
Dec 23, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Mar 08, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596930
SENSOR COMPENSATION USING BACKPROPAGATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12493786
Visual Analytics System to Assess, Understand, and Improve Deep Neural Networks
2y 5m to grant • Granted Dec 09, 2025
Patent 12462199
ADAPTIVE FILTER BASED LEARNING MODEL FOR TIME SERIES SENSOR SIGNAL CLASSIFICATION ON EDGE DEVICES
2y 5m to grant • Granted Nov 04, 2025
Patent 12437199
Activation Compression Method for Deep Learning Acceleration
2y 5m to grant • Granted Oct 07, 2025
Patent 12430552
Processing Data Batches in a Multi-Layer Network
2y 5m to grant • Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
82%
With Interview (+47.1%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
