Prosecution Insights
Last updated: April 19, 2026

Application No.: 17/970,758
Title: SYSTEM AND METHOD FOR PREDICTING MACHINE FAILURE
Current action: Non-Final Office Action under §103 (Round 3)
Filed: Oct 21, 2022
Examiner: NYAMOGO, JOSEPH A
Art Unit: 2858
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: Wayne State University

Outlook: Favorable
Grant probability: 69% (99% with an examiner interview)
Expected OA rounds: 3-4
Expected time to grant: 3y 1m
Examiner Intelligence

Career allow rate: 69%, above average (90 granted / 130 resolved; +1.2% vs Tech Center average)
Interview lift: +31.0%, a strong effect, measured over resolved cases with an interview
Typical timeline: 3y 1m average prosecution; 30 applications currently pending
Career history: 160 total applications across all art units

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 80.2% (+40.2% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 130 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 16, 2026 has been entered.

Response to Arguments

Applicant's arguments filed January 16, 2026 have been fully considered but they are not persuasive.

In response to Applicant's argument on page 9 that "Mccarson discloses cleaning data in general with a sensor interfacer 204. Mccarson fails to disclose a first amount of data transmitted to a low fidelity model, and a second amount of data transmitted to a high fidelity model, where the first amount is greater (e.g., and real-time data) than the second amount," the Examiner respectfully disagrees. The Examiner does not rely on Mccarson to disclose a first amount of data transmitted to a low fidelity model and a second amount of data transmitted to a high fidelity model, where the first amount is greater than the second amount; the Examiner relies on Kale. Kale discloses a first amount of data (Fig. 3, ¶ 80 stream of real time sensor data) transmitted to a low fidelity model, and a second amount of data (Fig. 3, ¶ 80 transmit data) transmitted to a high fidelity model, where the first amount is greater than the second amount.
In response to Applicant's argument on page 9 that "for example, new claim 21 recites in part that 'the second computing device receives CAD data associated with the machine in determining the machine failure prediction' (see, e.g., para. [0037]). None of the cited references disclose or teach using CAD data in determining the machine failure prediction. Instead, the cited references describe using data from sensors to generate a physical model and comparing model data. Therefore, new claim 21 is patentable for at least these reasons," the Examiner respectfully disagrees. The models disclosed by the prior art are inherently Computer-Aided Design (CAD) models of physical objects; the data therefore constitutes CAD data. Gandhi teaches the second computing device receives CAD data (Fig. 1, Col. 5, Ln. 3 a model 102).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Mccarson et al. (US 2019/0340843 A1) (hereinafter Mccarson) in view of Gandhi et al. (US 10,295,965 B2) (hereinafter Gandhi), in view of Bergantz et al. (US 2020/0324410 A1) (hereinafter Bergantz), and further in view of Kale et al. (US 2022/0032932 A1) (hereinafter Kale).

Regarding Claim 1, Mccarson teaches a method for predicting machine failure (Fig. 1, ¶ 50 method actions may be implemented; ¶ 12 downtime and/or damage to process control equipment), the method comprising: collecting real-time (Fig. 1, ¶ 11 sensors and/or actuators) data associated with a machine (Fig. 1, ¶ 12 process control equipment); and inputting the real-time data associated with the machine into at least one low fidelity model (Fig. 1, ¶ 11 model based control) of a first computing device (Fig. 1, example edge node 120) to determine an interesting event (Fig. 1, ¶ 26 determines one or more patterns) associated with the machine.

Mccarson fails to teach: transmitting, via the first computing device, data associated with the interesting event to a second computing device; inputting the data associated with the interesting event into at least one high fidelity model of the second computing device to determine a machine failure prediction; transmitting, via the second computing device, the machine failure prediction to a third computing device; and displaying, via the third computing device, the machine failure prediction; wherein the interesting event includes a deviation from normal operation of the machine; and wherein a first amount of data received by the first computing device is greater than a second amount of data received by the second computing device.

In analogous art, Gandhi teaches transmitting, via the first computing device, data associated with the interesting event to a second computing device (Fig. 1, processing device 150); inputting the data associated with the interesting event into at least one high fidelity model (Fig. 1, modeling engine 104) of the second computing device to determine a machine failure prediction (Fig. 1, Col. 7, Ln. 66 mechanical component is failing); transmitting, via the second computing device, the machine failure prediction to a third computing device (Fig. 1, user on the graphical display medium 154); and displaying, via the third computing device, the machine failure prediction (Fig. 1, Col. 5, Ln. 16 generate alerts 152). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson by combining the method for predicting machine failure taught by Mccarson with the method taught by Gandhi of inputting the data associated with the machine into at least one low fidelity model of a first computing device to determine an interesting event associated with the machine; transmitting, via the first computing device, data associated with the interesting event to a second computing device; inputting the data associated with the interesting event into at least one high fidelity model of the second computing device to determine a machine failure prediction; transmitting, via the second computing device, the machine failure prediction to a third computing device; and displaying, via the third computing device, the machine failure prediction, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Mccarson in view of Gandhi fail to teach wherein the interesting event includes a deviation from normal operation of the machine, and wherein a first amount of data received by the first computing device is greater than a second amount of data received by the second computing device. In analogous art, Bergantz teaches wherein the interesting event (Fig. 7, ¶ 95 predictive data 768) includes a deviation from normal operation of the machine (Fig. 7, ¶ 95 predicted abnormality (difference between directed position, and the actual location)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi by combining the manufacturing process and component replacement prediction method taught by Mccarson in view of Gandhi with a manufacturing process and component failure prediction method wherein the interesting event includes a deviation from normal operation of the machine, taught by Bergantz, for the benefit of performing corrective action (e.g., predicted operational maintenance, replacing components) to avoid the cost of unexpected component failure [Bergantz: ¶ 92-93].

Mccarson in view of Gandhi in view of Bergantz fail to teach wherein a first amount of data received by the first computing device is greater than a second amount of data received by the second computing device. In analogous art, Kale teaches wherein a first amount of data (Fig. 3, ¶ 80 stream of real time sensor data) received by the first computing device (Fig. 3, ¶ 80 sensor(s) (e.g., 101)) is greater than a second amount of data (Fig. 3, ¶ 80 reduce or eliminate the need to transmit data) received by the second computing device (Fig. 3, ¶ 80 computer system (131)).
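For orientation, the two-tier arrangement at issue in claim 1 (a full real-time sensor stream screened by a cheap "low fidelity" check at the edge, with only flagged deviations forwarded to an expensive "high fidelity" model on a second device) can be illustrated with a minimal sketch. This is illustrative only and is not code from the application or any cited reference; the class names, the z-score check, and all numeric values are hypothetical.

```python
# Hypothetical sketch of the claimed two-tier data flow; not from the record.
from statistics import mean, stdev

class EdgeNode:
    """First computing device: runs a cheap check on the full sensor stream."""
    def __init__(self, baseline, z_threshold=3.0):
        self.baseline_mean = mean(baseline)
        self.baseline_std = stdev(baseline)
        self.z_threshold = z_threshold

    def interesting_event(self, reading):
        # "Interesting event" = deviation from normal operation.
        z = abs(reading - self.baseline_mean) / self.baseline_std
        return z > self.z_threshold

class Server:
    """Second computing device: runs the expensive model, but only on the
    small subset of forwarded events."""
    def predict_failure(self, event_reading):
        # Placeholder for a physics-based / digital-twin computation.
        return {"reading": event_reading, "failure_risk": "elevated"}

baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0]
stream = [10.0, 10.1, 9.9, 14.7, 10.0]  # 14.7 deviates from normal

edge = EdgeNode(baseline)
server = Server()

# The edge sees the full stream; only deviations are forwarded,
# so the first amount of data exceeds the second.
forwarded = [r for r in stream if edge.interesting_event(r)]
predictions = [server.predict_failure(r) for r in forwarded]
assert len(stream) > len(forwarded)
```

Under these assumed values, only the single outlier reading crosses the threshold and reaches the server, which is the data-volume asymmetry the rejection attributes to Kale.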
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz by combining the method performed by the first computing device and the second computing device taught by Mccarson in view of Gandhi in view of Bergantz with the method performed by a first computing device and a second computing device, wherein a first amount of data received by the first computing device is greater than a second amount of data received by the second computing device, taught by Kale, for the benefit of providing inference results from sensor data in order to transmit the sensor data between two computing devices with reduced bandwidth [Kale: ¶ 61].

Regarding Claim 2, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson further teaches the method of claim 1, wherein: the first computing device includes an edge computing device (Fig. 1, example edge node 120); and the second computing device includes a remote computing device (Fig. 13, ¶ 73 external machines (e.g., computing devices of any kind); note: Fig. 13 is part of Fig. 1, see ¶ 67). Mccarson, Bergantz, and Kale fail to teach that the third computing device includes a user device. Gandhi further teaches the third computing device includes a user device (Fig. 1, user on the graphical display medium 154).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the method for predicting machine failure taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a method wherein the third computing device includes a user device and the second computing device includes a remote computing device, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 3, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson further teaches the method of claim 1, wherein: the machine is used in connection with a manufacturing station (Fig. 1, ¶ 11 manufacturing stage); the data associated with the machine includes heterogeneous data (Fig. 1, ¶ 26 time window, one or more patterns); and the heterogeneous data includes timestamped data and parameter data (Fig. 1, ¶ 26 time window, one or more patterns).

Regarding Claim 4, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson further teaches the method of claim 1, wherein, prior to inputting the data (Fig. 2, ¶ 25 retrieved sensor data) associated with the machine into the at least one low fidelity model of the first computing device, the method includes: determining, via the first computing device, which of the data associated with the machine is unnecessary data (Fig. 1, ¶ 25 duplicate data); and eliminating, via the first computing device, the unnecessary data (Fig. 1, ¶ 25 clean the retrieved sensor data; duplicate data to be removed).

Regarding Claim 5, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends.
Mccarson further teaches the method of claim 1, including determining, via the first computing device, a first pattern (Fig. 1, ¶ 26 reference patterns) associated with performance of the machine.

Regarding Claim 6, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 5, from which this claim depends. Mccarson further teaches the method of claim 5, including determining, via the first computing device, a second pattern (Fig. 1, ¶ 26 data patterns) associated with the performance of the machine.

Regarding Claim 7, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 6, from which this claim depends. Mccarson further teaches the method of claim 6, including comparing, via the first computing device, the first pattern and the second pattern; and wherein the deviation from normal operation of the interesting event (Fig. 1, ¶ 26 data patterns are compared, against one or more reference patterns) includes a difference (Fig. 1, ¶ 26 within a particular standard deviation) between the first pattern and the second pattern.

Regarding Claim 8, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson further teaches the method of claim 1, including generating, via the second computing device, a digital twin (Fig. 1, ¶ 17 digital twin 122) of the machine; and displaying, via the third computing device, at least a portion of the digital twin (Fig. 1, user on the graphical display medium 154).

Regarding Claim 9, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the method of claim 1, including generating, via the second computing device, a high fidelity learned model; and storing, via the second computing device, the high fidelity learned model in a model archive connected to the second computing device.
Gandhi further teaches the method of claim 1, including generating, via the second computing device, a high fidelity learned model (Fig. 1, Col. 7, Ln. 54 update the model 102); and storing, via the second computing device, the high fidelity learned model in a model archive (Fig. 1, Col. 7, Ln. 48-49 store it in a database) connected to the second computing device. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the method for predicting machine failure taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a method including generating, via the second computing device, a high fidelity learned model; and storing, via the second computing device, the high fidelity learned model in a model archive connected to the second computing device, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 10, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 9, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the method of claim 9, wherein generating the high fidelity learned model is triggered via the first computing device determining an additional interesting event. Gandhi further teaches the method of claim 9, wherein generating the high fidelity learned model is triggered (Fig. 1, Col. 5, Ln. 26-27 model 102 may be implemented in computer software) via the first computing device determining an additional interesting event (Fig. 1, Col. 6, Ln. 1-2 identifies observations (e.g., asset state vectors) that).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the method for predicting machine failure taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a method wherein generating the high fidelity learned model is triggered via the first computing device determining an additional interesting event, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 11, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 9, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the method of claim 9, including updating, via the second computing device, the low fidelity model or generating an additional low fidelity model in connection with generating the high fidelity learned model. Gandhi further teaches the method of claim 9, including updating, via the second computing device, the low fidelity model (Fig. 1, Col. 7, Ln. 54 update the model 102) or generating an additional low fidelity model in connection with generating the high fidelity learned model (Fig. 1, Col. 5, Ln. 58-59 send information to the model that is used to modify the model 102).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the method for predicting machine failure taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a method including updating, via the second computing device, the low fidelity model or generating an additional low fidelity model in connection with generating the high fidelity learned model, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 12, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 9, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the method of claim 9, including populating, via the second computing device, the model archive with a plurality of high fidelity learned models. Gandhi further teaches the method of claim 9, including populating, via the second computing device, the model archive (Fig. 1, Col. 7, Ln. 48-49 store it in a database) with a plurality of high fidelity learned models (Fig. 1, Col. 5, Ln. 58-59 send information to the model that is used to modify the model 102). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the method for predicting machine failure taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a method including populating, via the second computing device, the model archive with a plurality of high fidelity learned models, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 13, Mccarson teaches a system for predicting machine failure (Fig. 1, cyber physical system (CPS) 100; ¶ 12 downtime and/or damage to process control equipment), comprising: a controller (Fig. 2, sensor interfacer 204; note: Fig. 2 is part of Fig. 1, see ¶ 18) for controlling a machine of a manufacturing station (Fig. 1, ¶ 11 manufacturing stage); a plurality of sensors (Fig. 1, sensors 114) disposed proximate the machine, the sensors communicatively coupled to the controller (Fig. 2, ¶ 19 FIG. 2 is communicatively connected (and/or interconnected)); a plugin device (Fig. 2, node interfacer 216) communicatively coupled to the controller; a first computing device (Fig. 1, example edge node 120) communicatively coupled to the plugin device, the plugin device transmits real-time (Fig. 1, ¶ 11 sensors and/or actuators) data associated with the machine to the first computing device (Fig. 2, ¶ 27 example node interfacer 216 transmits the, to other manufacturing cells (process control nodes)), and the first computing device executes at least one low fidelity model (Fig. 1, ¶ 11 model based control) to determine an interesting event (Fig. 1, ¶ 26 determines one or more patterns) associated with the machine; and a second computing device (Fig. 13, ¶ 73 external machines (e.g., computing devices of any kind); note: Fig. 13 is part of Fig. 1, see ¶ 67) communicatively coupled to the first computing device, the first computing device transmits data associated with the interesting event to the second computing device (Fig. 13, ¶ 73 facilitate exchange of data with external machines).
Mccarson fails to teach: the second computing device executes at least one high fidelity model to determine a machine failure prediction; and a third computing device communicatively coupled to the second computing device, the second computing device transmits the machine failure prediction to the third computing device, and the third computing device displays the machine failure prediction; wherein the interesting event includes a deviation from normal operation of the manufacturing station and/or the machine; and wherein an amount of data that the first computing device transmits to the second computing device is less than an amount of data that the first computing device receives from the plugin device.

In analogous art, Gandhi teaches the second computing device (Fig. 1, processing device 150) executes at least one high fidelity model (Fig. 1, modeling engine 104) to determine a machine failure prediction (Fig. 1, Col. 7, Ln. 66 mechanical component is failing); and a third computing device (Fig. 1, user on the graphical display medium 154) communicatively coupled to the second computing device, the second computing device transmits the machine failure prediction to the third computing device, and the third computing device displays the machine failure prediction (Fig. 1, Col. 5, Ln. 16 generate alerts 152).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson by combining the second computing device taught by Mccarson with the second computing device taught by Gandhi, wherein the second computing device executes at least one high fidelity model to determine a machine failure prediction; and a third computing device communicatively coupled to the second computing device, the second computing device transmits the machine failure prediction to the third computing device, and the third computing device displays the machine failure prediction, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Mccarson in view of Gandhi fail to teach wherein the interesting event includes a deviation from normal operation of the manufacturing station and/or the machine, and wherein an amount of data that the first computing device transmits to the second computing device is less than an amount of data that the first computing device receives from the plugin device. In analogous art, Bergantz teaches wherein the interesting event (Fig. 7, ¶ 95 predictive data 768) includes a deviation from normal operation of the manufacturing station and/or the machine (Fig. 7, ¶ 95 predicted abnormality (difference between directed position, and the actual location)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi by combining the manufacturing process and component replacement prediction first computing device taught by Mccarson in view of Gandhi with the manufacturing process and component replacement prediction first computing device taught by Bergantz for the benefit of performing corrective action (e.g., predicted operational maintenance, replacing components) to avoid the cost of unexpected component failure [Bergantz: ¶ 92-93].

Mccarson in view of Gandhi in view of Bergantz fail to teach wherein an amount of data that the first computing device transmits to the second computing device is less than an amount of data that the first computing device receives from the plugin device. In analogous art, Kale teaches wherein an amount of data (Fig. 3, ¶ 80 reduce or eliminate the need to transmit data) that the first computing device (Fig. 3, ¶ 80 sensor(s) (e.g., 101)) transmits to the second computing device (Fig. 3, ¶ 80 computer system (131)) is less than an amount of data (Fig. 3, ¶ 80 stream of real time sensor data) that the first computing device receives from the plugin device. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz by combining the first computing device and the second computing device taught by Mccarson in view of Gandhi in view of Bergantz with a first computing device and a second computing device, wherein an amount of data that the first computing device transmits to the second computing device is less than an amount of data that the first computing device receives from the plugin device, taught by Kale, for the benefit of providing inference results from sensor data in order to transmit the sensor data between two computing devices with reduced bandwidth [Kale: ¶ 61].

Regarding Claim 14, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 13, from which this claim depends. Mccarson further teaches the system of claim 13, wherein: the first computing device includes an edge computing device (Fig. 1, example edge node 120) disposed proximate the machine; and the second computing device includes a remote computing device (Fig. 13, ¶ 73 external machines (e.g., computing devices of any kind); note: Fig. 13 is part of Fig. 1, see ¶ 67) that is not disposed proximate the machine, the second computing device being communicatively coupled to the first computing device via a cloud server (Fig. 13, ¶ 67 a server). Gandhi further teaches the third computing device includes a user device (Fig. 1, user on the graphical display medium 154) and the third computing device is communicatively coupled to the second computing device via the cloud server. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the first computing device and second computing device taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a system wherein the third computing device includes a user device and the third computing device is communicatively coupled to the second computing device via the cloud server, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 15, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 13, from which this claim depends. Mccarson further teaches the system of claim 13, wherein, prior to the first computing device executing the at least one low fidelity model to determine the interesting event, the first computing device determines which of the data associated with the machine is unnecessary data (Fig. 1, ¶ 25 duplicate data) and the first computing device eliminates the unnecessary data (Fig. 1, ¶ 25 clean the retrieved sensor data; duplicate data to be removed).
Regarding Claim 16, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 13, from which this claim depends. Mccarson further teaches the system of claim 13, wherein: the first computing device determines a first pattern (Fig. 1, ¶ 26 reference patterns) associated with performance of the machine; the first computing device determines a second pattern (Fig. 1, ¶ 26 data patterns) associated with the performance of the machine; and the deviation from normal operation of the interesting event (Fig. 1, ¶ 26 data patterns are compared, against one or more reference patterns) includes a difference (Fig. 1, ¶ 26 within a particular standard deviation) between the first pattern and the second pattern.

Regarding Claim 17, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 13, from which this claim depends. Mccarson further teaches the system of claim 13, wherein the second computing device generates a digital twin (Fig. 1, ¶ 17 digital twin 122) of the machine; and the third computing device displays at least a portion of the digital twin (Fig. 1, user on the graphical display medium 154).

Regarding Claim 18, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 13, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the system of claim 13, wherein the second computing device generates a high fidelity learned model via inputting the data associated with the interesting event into the at least one high fidelity model; and the second computing device stores the high fidelity learned model in a model archive connected to the second computing device. Gandhi further teaches the system of claim 13, wherein the second computing device generates a high fidelity learned model (Fig. 1, Col. 7, Ln. 54 update the model 102) via inputting the data associated with the interesting event into the at least one high fidelity model; and the second computing device stores the high fidelity learned model in a model archive (Fig. 1, Col. 7, Ln. 48-49 store it in a database) connected to the second computing device. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the first computing device and second computing device taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a system wherein the second computing device generates a high fidelity learned model via inputting the data associated with the interesting event into the at least one high fidelity model; and the second computing device stores the high fidelity learned model in a model archive connected to the second computing device, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 19, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 18, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach the system of claim 18, wherein at least one of: generating the high fidelity learned model is triggered via the first computing device determining an additional interesting event; and the second computing device populates the model archive with a plurality of high fidelity learned models. Gandhi further teaches the system of claim 18, wherein at least one of: generating the high fidelity learned model is triggered (Fig. 1, Col. 5, Ln. 26-27 model 102 may be implemented in computer software) via the first computing device determining an additional interesting event (Fig. 1, Col. 6, Ln. 1-2 identifies observations (e.g., asset state vectors) that); and the second computing device populates the model archive (Fig. 1, Col. 7, Ln. 48-49 store it in a database) with a plurality of high fidelity learned models (Fig. 1, Col. 5, Ln. 58-59 send information to the model that is used to modify the model 102). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the first computing device and second computing device taught by Mccarson in view of Gandhi in view of Bergantz in view of Kale with a system wherein generating the high fidelity learned model is triggered via the first computing device determining an additional interesting event; and the second computing device populates the model archive with a plurality of high fidelity learned models, taught by Gandhi, for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Regarding Claim 21, Mccarson in view of Gandhi in view of Bergantz in view of Kale teach the limitations of claim 1, from which this claim depends. Mccarson, Bergantz, and Kale fail to teach new claim 21, the method of claim 1, wherein the second computing device receives CAD data associated with the machine in determining the machine failure prediction. Gandhi further teaches the method of claim 1, wherein the second computing device receives CAD data (Fig. 1, Col. 5, Ln. 3 a model 102) associated with the machine in determining the machine failure prediction.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mccarson in view of Gandhi in view of Bergantz in view of Kale by combining the second computing device taught by that combination with a second computing device taught by Gandhi, wherein the second computing device executes at least one high fidelity model to determine a machine failure prediction; and a third computing device communicatively coupled to the second computing device, the second computing device transmits the machine failure prediction to the third computing device, and the third computing device displays the machine failure prediction; as taught by Gandhi for the benefit of minimal machine implementation cost and minimal continued maintenance [Gandhi: Col. 3, Ln. 33-36].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. LAVID BEN LULU et al. (US 2021/0157310 A1) teaches a system for predicting machine failure (Fig. 1, network diagram 100; ¶ 37, failure predictions).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH O. NYAMOGO, whose telephone number is (469) 295-9276. The examiner can normally be reached 9:00 AM to 5:00 PM CT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, EMAN ALFAKAWI, can be reached at 571-272-4448. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH O. NYAMOGO/
Examiner, Art Unit 2858

/EMAN A ALKAFAWI/
Supervisory Patent Examiner, Art Unit 2858

2/27/2026
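The claim 16 limitation mapped above reduces to a concrete check: compare a measured data pattern against a reference pattern and flag the deviation when the difference falls outside a standard-deviation band. A minimal sketch of that comparison, assuming sample values, function names, and a 2-sigma threshold that are illustrative only and not taken from the cited references:

```python
import statistics

def deviates_from_normal(data_pattern, reference_pattern, n_sigma=2.0):
    """Report whether any point of the data pattern differs from the
    reference pattern by more than n_sigma standard deviations of the
    reference pattern (an "interesting event" in the claim's terms).
    Hypothetical illustration only."""
    sigma = statistics.pstdev(reference_pattern)
    return any(
        abs(d - r) > n_sigma * sigma
        for d, r in zip(data_pattern, reference_pattern)
    )

reference = [10.0, 10.2, 9.9, 10.1, 10.0]   # normal-operation pattern
normal    = [10.1, 10.1, 10.0, 10.0, 10.1]  # stays inside the band
faulty    = [10.1, 10.1, 13.5, 10.0, 10.1]  # one sample far outside

print(deviates_from_normal(normal, reference))  # False
print(deviates_from_normal(faulty, reference))  # True
```

A population standard deviation (`pstdev`) is used here because the reference pattern is treated as the complete description of normal operation rather than a sample from it.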

Prosecution Timeline

Oct 21, 2022 — Application Filed
Apr 16, 2025 — Non-Final Rejection (§103)
Jul 22, 2025 — Response Filed
Oct 17, 2025 — Final Rejection (§103)
Dec 17, 2025 — Response after Non-Final Action
Jan 16, 2026 — Request for Continued Examination
Jan 26, 2026 — Response after Non-Final Action
Feb 20, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601778
FIELD-BIASED NONLINEAR OPTICAL METROLOGY USING CORONA DISCHARGE SOURCE
2y 5m to grant — Granted Apr 14, 2026
Patent 12562600
Foreign Object Detection Testing for Wireless Chargers
2y 5m to grant — Granted Feb 24, 2026
Patent 12562568
A LOAD BANK SYSTEM AND METHOD THEREOF TO GENERATE LARGE NUMBER OF DISCRETE LOADING STEPS
2y 5m to grant — Granted Feb 24, 2026
Patent 12535358
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR MULTI-MODEL EMISSION DETERMINATIONS
2y 5m to grant — Granted Jan 27, 2026
Patent 12510600
BATTERY DEGRADATION DETERMINATION SYSTEM, BATTERY DEGRADATION DETERMINATION APPARATUS, AND BATTERY DEGRADATION DETERMINATION METHOD
2y 5m to grant — Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+31.0%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
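The headline figures can be reproduced from the counts shown on the page. A small sketch, assuming the interview lift is the simple difference in allow rate between resolved cases with and without an interview (the per-group case counts are not shown, so the without-interview rate is back-computed from the stated numbers):

```python
# Counts from the page: "90 granted / 130 resolved"
granted, resolved = 90, 130

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")  # 69%

with_interview = 0.99  # stated grant probability with interview
lift = 0.31            # stated +31.0% interview lift (percentage points)
without_interview = with_interview - lift
print(f"Implied rate without interview: {without_interview:.0%}")  # 68%
```

The implied 68% and stated 99% bracket the 69% blended career rate, which is consistent with the page's figures under this assumption.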
