Prosecution Insights
Last updated: April 19, 2026
Application No. 17/991,524

Machine Learning Model-Based Anomaly Prediction and Mitigation

Final Rejection (§101, §103)
Filed: Nov 21, 2022
Examiner: NGUYEN, CATHERINE MARIE
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: Disney Enterprises Inc.
OA Round: 2 (Final)

Grant Probability: 89% (Favorable)
OA Rounds: 3-4
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (8 granted / 9 resolved; +33.9% vs TC avg)
Interview Lift: strong, +50.0% on resolved cases with interview
Avg Prosecution: 2y 1m (fast prosecutor); 13 currently pending
Career History: 22 total applications across all art units
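The headline examiner figures above follow directly from the raw counts. A minimal sketch of the arithmetic, assuming "Career Allow Rate" is simply granted over resolved rounded to a whole percent (the tool does not document its formula):

```python
# Recompute the examiner's headline career statistics from the raw counts
# shown above. The allow-rate formula is an assumption; the analytics tool
# does not state how it rounds or weights cases.

granted = 8
resolved = 9
pending = 13

allow_rate = round(100 * granted / resolved)
total_applications = resolved + pending  # 9 resolved + 13 currently pending

print(allow_rate)          # 89
print(total_applications)  # 22
```

Both results match the dashboard (89% career allow rate, 22 total applications), which supports reading "resolved" and "pending" as a partition of the career total.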

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 15.4% (-24.6% vs TC avg)
§112: 27.4% (-12.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 9 resolved cases
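A quick consistency check on these figures: if each "vs TC avg" value is a percentage-point difference (an assumption; the tool does not define it), every statute implies the same Tech Center baseline:

```python
# Recover the implied Tech Center average from each statute's rate and its
# "vs TC avg" delta, assuming the delta is in percentage points.

stats = {
    "§101": (13.7, -26.3),
    "§103": (41.9, +1.9),
    "§102": (15.4, -24.6),
    "§112": (27.4, -12.6),
}

for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(statute, baseline)  # every statute yields 40.0
```

The uniform 40.0% baseline suggests the deltas are computed against a single TC-wide average estimate rather than per-statute averages.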

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending for examination. This Office Action is FINAL.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to (an) abstract idea(s) without significantly more.

Claims 1, 8, 15

Claims 1, 8, and 15 recite:

1. a machine learning (ML) model
2. receive a plurality of contextual data samples, each of the plurality of contextual data samples including a portion of the digital sensor data and a descriptive label;
3. search a database, using a predetermined matching criterion, for a first data pattern matching the portion of the digital sensor data, the database storing historical digital sensor data generated by the plurality of sensors;
4. determine, when searching detects the first data pattern, whether there is a correlation between the first data pattern and an anomalous event;
5. generate, when determining determines the correlation, training data including a label identifying the anomalous event and the first data pattern to provide one of a plurality of training data samples, wherein the plurality of training data samples describe a plurality of anomalous events corresponding respectively to the first data pattern; and
6. train the ML model, using the plurality of training data samples, to provide a trained predictive ML model…
7. …configured to predict the plurality of anomalous events
8. perform… anomaly prediction in real-time with respect to receiving additional digital sensor data generated by the plurality of sensors…
9. …automated anomaly prediction…
10. using the
trained predictive ML model…

Step 1: is the claim to a process, machine, manufacture, or composition of matter? Yes:

Claim 1 is a machine
Claim 8 is a process
Claim 15 is a manufacture

Step 2A, Prong I: does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, (an) abstract idea(s).

The ‘ML model’ limitation in #1 above, as claimed and under BRI, is a mathematical concept that is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. For example, “ML model” in the context of this claim encompasses a mathematical formula (Page 5, lines 20-22: “machine learning model” may refer to a mathematical model).

The ‘search and matching’ limitation in #3 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “search and matching” in the context of this claim encompasses a person making an observation and judgement.

The ‘determine’ limitation in #4 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “determine” in the context of this claim encompasses a person making a judgement.

The ‘predict’ limitation in #7 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “predict” in the context of this claim encompasses a person making a judgement.

The ‘predict’ limitation in #8 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “predict” in the context of this claim encompasses a person making a judgement and forming an opinion.

Step 2A, Prong II: does the claim recite additional elements that integrate the judicial exception into a practical application? No.

The ‘receive’ limitation in #2 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity.
For example, “receive” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

The ‘generate’ limitation in #5 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity recited at a high level of generality. For example, “generate” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

The ‘train’ limitation in #6 above, as claimed and under BRI, is an additional element that is mere instructions to apply an exception recited at a high level of generality. For example, “train” in the context of this claim encompasses merely applying the ML model to execute the abstract idea (predict in #7 above). See MPEP 2106.05(f).

The ‘automated’ limitation in #9 above, as claimed and under BRI, is an additional element that is mere instructions to apply an exception. For example, “automated” in the context of this claim encompasses merely applying the abstract idea (predict in #8 above) in a generic computer environment (Instant Spec: Page 5, lines 12-19). See MPEP 2106.05(f).

The ‘using’ limitation in #10 above, as claimed and under BRI, is an additional element that is mere instructions to apply an exception. For example, “using” in the context of this claim encompasses applying the ML model to the abstract idea (predict in #7 above). See MPEP 2106.05(f).

Additionally, one or more of the claims recite the following elements:

System (Claim 1)
Hardware processor (Claims 1, 8, 15)
System memory (Claims 1, 8)
Software code (Claims 1, 8, 15)
Computer-readable non-transitory storage medium (Claim 15)
A plurality of sensors and digital sensor data

These additional elements are recited at a high level of generality (i.e., as generic computer components) such that they amount to no more than components comprising mere instructions to apply the exception.
Accordingly, these additional elements do not integrate the abstract idea(s) into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s).

Furthermore, regarding digital sensors, MPEP 2106.05(h), example vi, states “Limiting the abstract idea of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to the electric power grid, because limiting application of the abstract idea to power-grid monitoring is simply an attempt to limit the use of the abstract idea to a particular technological environment.” Likewise, the sensors in the instant application limit the abstract ideas recited above to data related to digital sensors.

Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? No.

As discussed above with respect to integration of the abstract idea(s) into a practical application, the aforementioned additional elements amount to no more than components comprising mere instructions to apply the exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.

With regards to #2 and #5, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:

Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v.
Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added));

Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log);

Additionally, per MPEP 2106.05(d) and Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018), the following prior art demonstrates that the limitations in #5-6 are well-understood, routine, conventional activity:

Refaat et al. (US 20220405618 A1), [0010]: conventional approach of generating a large amount of ground truth labels for training data based on what actually happened in a future time period. An auto-labeling approach can be used to label training examples automatically by searching for what happened in the logs that record the history of what the agent ended doing in the future time period.

Babu et al. (US 20210126931 A1), [0003]: conventional anomaly detection systems are developed based on supervised and unsupervised ML techniques. Supervised learning techniques are trained using labeled examples of normal and anomalous datasets.
The trained system classifies the incoming dataset into normal or anomaly class based on the labeled examples… The conventional anomaly detection systems are trained on historic data…

Claims 2, 9, 16

Claims 2, 9, and 16 recite:

11. receive additional digital sensor data, the additional digital sensor data including a second data pattern matching the first data pattern
12. predict… an occurrence of the one of the plurality of anomalous events using the trained predictive ML model and based on the second data pattern

Step 2A, Prong I: does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, (an) abstract idea(s). In addition to the abstract idea(s) recited in the parent claims, the current claims further include:

The ‘predict’ limitation in #12 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “predict” in the context of this claim encompasses a person making a judgement.

Step 2A, Prong II: does the claim recite additional elements that integrate the judicial exception into a practical application? No.

The ‘receive’ limitation in #11 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, “receive” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

The ‘using’ limitation in #11 above, as claimed and under BRI, is an additional element that is mere instructions to apply an exception. For example, “using” in the context of this claim encompasses merely applying the ML model to the abstract idea (predict in #12 above). See MPEP 2106.05(f).

Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? No.
With regards to #11, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:

Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added));

Additionally, per MPEP 2106.05(d) and Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018), the following prior art demonstrates that the limitation in #12 is well-understood, routine, conventional activity:

Babu et al. (US 20210126931 A1), [0003]: conventional anomaly detection systems are developed based on supervised and unsupervised ML techniques. Supervised learning techniques are trained using labeled examples of normal and anomalous datasets.
The trained system classifies the incoming dataset into normal or anomaly class based on the labeled examples… The conventional anomaly detection systems are trained on historic data…

Claims 3, 10, 17

Regarding Claims 3, 10, and 17, further limitations about the data utilized (additional raw data) are further limitations of the abstract idea. These limitations are considered part of the mental process, and their inclusion does not push the complexity of the process beyond what a human may perform using pen and paper (see MPEP 2106.04(a)(2)). This is not an additional element that is evaluated under Step 2A, Prong II or Step 2B.

Claims 4, 11, 18

Claims 4, 11, and 18 recite:

identify… a solution used previously to mitigate or eliminate the anomalous event to provide one of a plurality of solutions corresponding respectively to the plurality of anomalous events;
…when determining determines the correlation…
output in real-time with respect to receiving the additional digital sensor data, when predicting predicts the occurrence of the one of the plurality of anomalous events, the solution corresponding to the one of the plurality of anomalous events

Step 2A, Prong I: does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, (an) abstract idea(s). In addition to the abstract idea(s) recited in the parent claims, the current claims further recite:

The ‘identify’ limitation in #12 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “identify” in the context of this claim encompasses a person making a judgement.

The ‘determining’ limitation in #13 above, as claimed and under BRI, is a mental process that covers performance of the limitation in the mind. For example, “determining” in the context of this claim encompasses a person making a judgement.

Step 2A, Prong II: does the claim recite additional elements that integrate the judicial exception into a practical application? No.
The ‘output’ limitation in #14 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, “output” in the context of this claim encompasses merely displaying data. See MPEP 2106.05(g).

Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? No.

With regards to #number, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:

Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink."

Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S.
208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log);

Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93;

Claims 5, 12, 19

Regarding Claims 5, 12, and 19, further limitations about the data utilized (analog sensor data) are further limitations of the abstract idea. These limitations are considered part of the mental process, and their inclusion does not push the complexity of the process beyond what a human may perform using pen and paper (see MPEP 2106.04(a)(2)). This is not an additional element that is evaluated under Step 2A, Prong II or Step 2B.

Claims 6, 13

Regarding Claims 6 and 13, further limitations about the data utilized (sensor data) are further limitations of the abstract idea. These limitations are considered part of the mental process, and their inclusion does not push the complexity of the process beyond what a human may perform using pen and paper (see MPEP 2106.04(a)(2)). This is not an additional element that is evaluated under Step 2A, Prong II or Step 2B.

Claims 7, 14

Claims 7 and 14 recite: wherein the ML model comprises a transformer network.

Step 2A, Prong I: does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, (an) abstract idea(s). In addition to the abstract idea(s) of the parent claims:

The ‘transformer network’ limitation in #15 above, as claimed and under BRI, is a mathematical concept defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. For example, “transformer network” in the context of this claim encompasses a mathematical formula.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Burch et al. (US 20220027230 A1, hereinafter “Burch”) in view of Li et al. (US 20230025946 A1, hereinafter “Li”), in further view of Moritz et al. (US 20220108698 A, hereinafter “Moritz”).

Regarding Claim 1, Burch discloses a system ([0025]) comprising:

a plurality of sensors configured to generate sensor data ([0011]; [0013]: individual physical sensors generate time-series trace data. Trace data defined as the collection of sensor traces for all important sensors identified for a particular processing instance);

a hardware processor ([0025]: processor-based models that use state-of-the-art processor capabilities. Hence, requires a processor to operate the models); and

a system memory storing a software code and a machine learning (ML) model ([0024]-[0025]: processor-based models for trace analysis (ML algorithms) coded with various programming languages and software modules. Obvious that such code for the ML algorithms is stored in memory);

the hardware processor configured to execute the software code to:

receive a plurality of contextual data samples, each of the plurality of contextual data samples including a portion of the sensor data and a descriptive label ([0017]-[0018]: ML model configured to detect anomalies using data from window analysis.
Traces defined (labeled) as anomaly windows 115, 125, 135, 136. [0011]: trace data are portions of sensor data);

for each of the plurality of contextual data samples:

search a database, using a predetermined matching criterion, for a first data pattern matching the portion of the digital sensor data, the database storing historical sensor data generated by the plurality of sensors (Fig. 4 and [0022]: Steps 304-314: trace data is received into a predictive model. Searches through a database having past trace data to find anomalies having the same feature associated with the current anomaly. [0011]: trace data referred to as sensor traces measured by physical sensors);

determine, when searching detects the first data pattern, whether there is a correlation between the first data pattern and an anomalous event (Fig. 4 and [0022]: Steps 316-318: determines likelihood of whether or not the current anomaly can be accurately classified in accordance with those past anomalies to correlate the type of anomaly, its root cause, and action steps to correct);

train the ML model, using the plurality of training data samples, to provide a trained predictive ML model configured to predict the plurality of anomalous events ([0017]: multi-class ML model trained on datasets to detect anomalous behavior in the trace data. [0022]-[0023]: further training shown when current anomaly does not match the historic anomaly database, the current anomaly is stored for future reference and the database is updated if a root cause and corrective actions are thereafter determined for subsequent trace data patterns, effectively training the predictive model to learn the previously-unmatched anomaly); and

perform automated anomaly prediction in real-time with respect to receiving additional sensor data generated by the plurality of sensors, using the trained predictive ML model ([0017]: multi-class ML model trained on datasets to detect anomalous behavior in trace data.
[0023]: predictive model receives trace data, detects anomalous patterns, and performs pattern matching against a database of past trace data to determine historic anomalous patterns for root cause analysis and perform corrective actions. [0022]: database includes the added anomaly from a previous trace iteration, thus the trace data described in [0023] is additional sensor data generated by important physical sensors ([0011], [0013])).

Burch does not disclose: generate, when determining determines the correlation, training data including a label identifying the anomalous event and the first data pattern to provide one of a plurality of training data samples, wherein the plurality of training data samples describe a plurality of anomalous events corresponding respectively to the first data pattern.

However, Li teaches: generate, when determining determines the correlation, training data including a label identifying the anomalous event and the first data pattern to provide one of a plurality of training data samples, wherein the plurality of training data samples describe a plurality of anomalous events corresponding respectively to the first data pattern ([0124]: after receiving the first network traffic, the network protection device matches the first key data extracted from the first network device with the attack signature in the signature database. If it is determined, based on the matching result, that the first network traffic is aggressive, the first network traffic is used as a black sample to train the target attack detection model and is added to the sample set. Then, signatures of all samples in the sample set are extracted, and the extracted signature is used for model training, to obtain the target attack detection model.
Thus, generates training data including a label identifying that the event is anomalous (black sample) and the first data pattern (matching attack signature) to provide training data samples, which describe aggressive network events (anomalies) corresponding to attack signatures);

train the ML model, using the plurality of training data samples, to provide a trained predictive ML model configured to predict the plurality of anomalous events ([0124]: extracted signatures of the sample set are used for model training to obtain the target attack detection model to detect (predict) whether a second network traffic is aggressive); and

perform automated anomaly prediction in real-time with respect to receiving additional sensor data generated by the plurality of sensors, using the trained predictive ML model ([0124]: updated target attack detection model performs attack detection on a second network traffic).

Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Burch and Li by implementing the black sample training taught by Li. One of ordinary skill in the art would be motivated to make this modification in order to update the attack (anomaly) detection model to improve confidence in correctly detecting anomalous behavior (Li: [0010]).

Burch in view of Li does not teach: …digital sensor data…

However, Moritz teaches: …digital sensor data… and additional digital sensor data ([0093], [0096]: production line 1102 uses sensors to collect data. The sensors may be digital sensors, analog sensors, or a combination thereof.
Part of the collected data may be stored as ML training data for anomaly detection, and another part may be used as operation time data to detect anomaly after training).

Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Burch, Li, and Moritz by performing a simple substitution of one known element (Burch: [0011]; [0022]: generic physical sensors and generic sensor data for anomaly detection) for another (Moritz: [0093]: digital sensors, analog sensors, and combinations thereof, and the corresponding sensor data for anomaly detection). One of ordinary skill in the art would be motivated to make this modification in order to obtain predictable results (sensor data for anomaly detection).

Regarding Claim 2, Burch in view of Li, in further view of Moritz teaches the system of claim 1, as referenced above, wherein the hardware processor is further configured to execute the software code to:

receive the additional digital sensor data, the additional digital sensor data including a second data pattern matching the first data pattern (Burch: [0023]: receive trace data, detect an anomalous pattern, and compare features of the detected anomalous pattern with features of prior anomalous patterns stored in a database of past trace data to determine matching features. [0022]: database contains seen past anomalies or a new anomaly that is later added to the database from a previous prediction iteration. [0019]-[0020]: feed key features of each anomaly type into the model to search the database and identify one or more prior anomalies as most like the current anomaly. Anomaly types include Type I, Type II, and Type I and II.
Therefore, trace data obtained in [0023] may be additional sensor data such that (1) during a first iteration in [0022], a new anomaly is added to the database (e.g., Type I), (2) during a second iteration in [0022], trace data is received and matched with the added anomaly (e.g., Type I), (3) during a third iteration in [0023], trace data is received and matched with the added anomaly (e.g., Type I and II)); and

predict, using the trained predictive ML model and based on the second data pattern, an occurrence of one of the plurality of anomalous events (Burch: [0023]: if detected anomalous pattern features match features from database, predictive model retrieves one or more root causes for the anomaly. I.e., predicts that the detected anomaly is similar to a specific, previous anomaly of the total past anomalies stored in the database).

Regarding Claim 3, Burch in view of Li, in further view of Moritz teaches the system of claim 2, as referenced above, wherein the additional digital sensor data comprises time series data (Moritz: Fig. 11; [0093]: training data pool measured by digital sensors shown in a graph, clearly over a period of time).

Regarding Claim 4, Burch in view of Li, in further view of Moritz teaches the system of claim 2, as referenced above, wherein the hardware processor is further configured to execute the software code to:

for each of the plurality of contextual data samples:

identify, when determining determines the correlation, a solution used previously to mitigate or eliminate the anomalous event, to provide one of a plurality of solutions corresponding respectively to the plurality of anomalous events (Burch: Fig. 4 and [0022]: Steps 316-318: after determining the current anomaly is likely classified in accordance with past anomalies, the action steps to correct the anomaly can be retrieved from the database.
Each database entry contains its respective corrective action, hence one corrective action out of the total number of corrective actions corresponding to past anomalies is provided); and

output in real-time with respect to receiving the additional digital sensor data, when predicting predicts the occurrence of the one of the plurality of anomalous events, the solution corresponding to the one of the plurality of anomalous events (Burch: Fig. 4 and [0022]: Step 320: in response to receiving a second trace data (see above), the prediction model performs/outputs the appropriate corrective action corresponding to the identified past anomaly of the total number of past anomaly patterns stored in the database. Moritz: [0093]: additional (operation time) digital sensor data. See Claim 1 substitution above).

Regarding Claim 5, Burch in view of Li, in further view of Moritz teaches the system of claim 1, as referenced above, wherein the digital sensor data comprises time series data (Moritz: Fig. 11; [0093]: training data pool measured by digital sensors shown in a graph, clearly over a period of time).

Regarding Claim 6, Burch in view of Li, in further view of Moritz teaches the system of claim 1, as referenced above, further comprising another plurality of sensors configured to generate analog sensor data, wherein the plurality of contextual data samples further include the analog sensor data generated by the another plurality of sensors (Moritz: [0093]-[0094]: production line 1102 uses sensors to collect data. The sensors may be digital sensors, analog sensors, or a combination thereof. The collected data serves two purposes, including storing some of the data in training data pool 1104 used as training data. Training data pool 1104 can include labeled data tagged as anomalous or normal).
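The match-and-respond flow the Action reads onto Burch for claims 2 and 4 (match a newly observed pattern against stored anomalies, then retrieve the previously used corrective action) can be sketched as a toy example. All names, data values, and the exact-match criterion here are hypothetical; neither the claims nor Burch prescribe any implementation:

```python
# Hypothetical sketch of the claims 2/4 flow as mapped onto Burch:
# match a second data pattern against a database of past anomalies and
# output the corrective action previously used for the matched anomaly.

# Database of past anomalies: pattern -> (anomalous event, prior solution).
PAST_ANOMALIES = {
    (3.1, 9.7, 9.9): ("overheat", "throttle stage 2"),
}

def predict_and_output(second_pattern):
    """Return (predicted event, prior solution) for a matching pattern,
    or None when the pattern matches no stored anomaly."""
    # Exact-key lookup stands in for the claimed matching criterion.
    return PAST_ANOMALIES.get(second_pattern)

print(predict_and_output((3.1, 9.7, 9.9)))  # ('overheat', 'throttle stage 2')
print(predict_and_output((1.0, 1.0, 1.0)))  # None
```

In a real system the lookup would be a learned or fuzzy pattern match rather than an exact key, which is precisely where the trained predictive ML model of claim 1 would sit.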
Regarding Claim 8, the system of Claim 1 performs the same steps as the method of Claim 8, and Claim 8 is rejected using the same art and rationale set forth above in the rejection of Claim 1 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 9, Burch in view of Li, in further view of Moritz teaches the method of Claim 8 above. The system of Claim 2 performs the same steps as the method of Claim 9, and Claim 9 is rejected using the same art and rationale set forth above in the rejection of Claim 2 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 10, Burch in view of Li, in further view of Moritz teaches the method of Claim 9 above. The system of Claim 3 performs the same steps as the method of Claim 10, and Claim 10 is rejected using the same art and rationale set forth above in the rejection of Claim 3 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 11, Burch in view of Li, in further view of Moritz teaches the method of Claim 9 above. The system of Claim 4 performs the same steps as the method of Claim 11, and Claim 11 is rejected using the same art and rationale set forth above in the rejection of Claim 4 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 12, Burch in view of Li, in further view of Moritz teaches the method of Claim 8 above. The system of Claim 5 performs the same steps as the method of Claim 12, and Claim 12 is rejected using the same art and rationale set forth above in the rejection of Claim 5 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 13, Burch in view of Li, in further view of Moritz teaches the method of Claim 8 above. The system of Claim 6 performs the same steps as the method of Claim 13, and Claim 13 is rejected using the same art and rationale set forth above in the rejection of Claim 6 by the teachings of Burch in view of Li, in further view of Moritz.
Regarding Claim 15, the system of Claim 1 performs the same steps as the medium of Claim 15, and Claim 15 is rejected using the same art and rationale set forth above in the rejection of Claim 1 by the teachings of Burch in view of Li, in further view of Moritz. Burch further discloses a computer-readable non-transitory storage medium having stored thereon a software code, which when executed by a hardware processor performs a method ([0025]: CPU, RAM, Python coding language for ML models. Obvious that a computer-readable non-transitory storage medium (e.g., RAM, etc.) is required to store Python code for execution by a processor (e.g., CPU) to operate ML models).

Regarding Claim 16, Burch in view of Li, in further view of Moritz teaches the medium of Claim 15 above. The system of Claim 2 performs the same steps as the medium of Claim 16, and Claim 16 is rejected using the same art and rationale set forth above in the rejection of Claim 2 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 17, Burch in view of Li, in further view of Moritz teaches the medium of Claim 16 above. The system of Claim 3 performs the same steps as the medium of Claim 17, and Claim 17 is rejected using the same art and rationale set forth above in the rejection of Claim 3 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 18, Burch in view of Li, in further view of Moritz teaches the medium of Claim 16 above. The system of Claim 4 performs the same steps as the medium of Claim 18, and Claim 18 is rejected using the same art and rationale set forth above in the rejection of Claim 4 by the teachings of Burch in view of Li, in further view of Moritz.

Regarding Claim 19, Burch in view of Li, in further view of Moritz teaches the medium of Claim 15 above.
The system of Claim 5 performs the same steps as the medium of Claim 19, and Claim 19 is rejected using the same art and rationale set forth above in the rejection of Claim 5 by the teachings of Burch in view of Li, in further view of Moritz.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burch in view of Li, in view of Moritz, in further view of Milton (US 20200017117 A1).

Regarding Claim 7, Burch in view of Li, in further view of Moritz teaches the system of claim 1, as referenced above. Burch in view of Li, in view of Moritz does not teach: wherein the ML model comprises a transformer network. However, Milton teaches: wherein the ML model comprises a transformer network ([0072]: ML method to perform dimension/feature-reducing tasks such as by using a network in the form of an autoencoder, wherein an autoencoder is a neural network system comprising an encoder layer and a decoder layer). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to which said subject matter pertains to combine Burch, Li, Moritz, and Milton by implementing the autoencoder taught by Milton as part of the feature engineering taught by Burch ([0018]). One of ordinary skill in the art would be motivated to make this modification in order to reduce unprocessed LIDAR (sensor) data into a reduced form that requires less data to represent and has less noise than unprocessed LIDAR/sensor data (Milton: [0072]).

Regarding Claim 14, Burch in view of Li, in view of Moritz teaches the method of Claim 8 above. The system of Claim 7 performs the same steps as the method of Claim 14, and Claim 14 is rejected using the same art and rationale set forth above in the rejection of Claim 7 by the teachings of Burch in view of Li, in view of Moritz, in further view of Milton.

Regarding Claim 20, Burch in view of Li, in view of Moritz teaches the medium of Claim 15 above.
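The autoencoder-style feature reduction cited from Milton [0072] (encoding high-dimensional sensor data into a smaller, less noisy representation, then decoding it back) can be sketched as follows. This is a hedged illustration, not Milton's actual implementation: it uses the fact that the optimal *linear* autoencoder is equivalent to projecting onto the top principal directions, so an SVD stands in for gradient-trained encoder/decoder layers, and the function name and dimensions are assumptions.

```python
import numpy as np

# Sketch of autoencoder-style dimensionality reduction in the spirit of
# Milton [0072] (not the reference's code): encode d-dimensional sensor
# frames into k << d features, then decode back. For a linear autoencoder
# the optimal encoder/decoder weights are the top-k principal directions,
# computed here by SVD instead of gradient training.

def fit_linear_autoencoder(X, k):
    """Fit on rows of X (n x d); return (encode, decode) callables."""
    mu = X.mean(axis=0)
    # Top-k right singular vectors span the best k-dim linear subspace.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # d x k shared encoder/decoder weights
    encode = lambda A: (A - mu) @ W   # n x d -> n x k reduced features
    decode = lambda Z: Z @ W.T + mu   # n x k -> n x d reconstruction
    return encode, decode
```

Encoding, say, 64-dimensional sensor frames down to 8 features yields the smaller, lower-noise representation the combination rationale relies on; a nonlinear autoencoder of the kind Milton describes would replace `W` with trained encoder and decoder layers.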
The system of Claim 7 performs the same steps as the medium of Claim 20, and Claim 20 is rejected using the same art and rationale set forth above in the rejection of Claim 7 by the teachings of Burch in view of Li, in view of Moritz, in further view of Milton.

Response to Arguments

Applicant's arguments filed 12/17/2025 regarding 35 U.S.C. 101 have been fully considered but they are not persuasive.

Claim 17 has been amended to overcome the claim objection. The previous objection has been withdrawn.

Regarding 35 U.S.C. 101, Applicant argues:

A. Page 14, para. 3: “Here, as in Enfish and McRo, currently amended independent claim 1 recites how the system improves anomaly detection by using specific processing of sensor generated digital sensor data and a particular ML training and deployment pipeline to reduce false positives and provide forward-looking, real-time anomaly prediction. Currently amended independent claim 1 thus focuses on a particular implementation of ML-based anomaly detection in a sensor monitored environment, not on machine learning or prediction in the abstract.”

B. Page 15, para. 2: “The disclosure provided by Applicant expressly notes that the volume and complexity of this sensor data "defy the capacity of the human mind to interpret, even with the assistance of the processing and memory resources of a general purpose computer." (See page 5, lines 4-6 of the present application.) Consequently, the actions recited by currently amended independent claim 1 cannot reasonably be performed as mere mental steps.”

C. Page 16, para. 2: “Currently amended independent claim 1 is analogous in that currently amended independent claim 1 recites a specific way of generating training data and deploying an ML model in a concrete technological context, i.e., sensor-based anomaly prediction, without reciting underlying mathematical functions.
As recent USPTO guidance notes, one option for ML model related claims is to recite a method for training an ML model that enables it to perform a respective task more effectively, as exemplified by Example 39. Applicant respectfully submits that currently amended independent claim 1 accomplishes this for sensor-based anomaly prediction systems.”

D. Page 17, para. 3: “For example, currently amended independent claim 1 recites "a plurality of sensors configured to generate digital sensor data" and requires that the contextual data samples include "the digital sensor data [generated by the plurality of sensors]." The database to be searched must store "historical digital sensor data generated by the plurality of sensors," and the correlation step requires determining whether the first data pattern correlates with an anomalous event. These actions are not generic data manipulations, but are rather tied to sensor measurements in a control system, and they implement a specific process for relating live sensor patterns to historical anomalous events.”

E. Page 17, para. 4 – Page 18, para. 1: “The action of generating training data includes applying a label identifying an anomalous event and the corresponding first data pattern such that "the plurality of training data samples describe a plurality of anomalous events corresponding respectively to the first data pattern." This step encodes real-world anomalies into training data suitable for ML model training, based on correlations discovered from historical sensor data and events. The approach claimed by currently amended independent claim 1 is analogous to Example 39, described above, in which transformations and training sets are defined to improve neural network facial detection. The USPTO treated those limitations as part of a concrete technical solution rather than as insignificant extra-solution activity.”

F. Page 18, para. 3: “Applicant notes that the "July 2024 Subject Matter Eligibility Examples," promulgated by the USPTO emphasizes that claims reciting a particular downstream technical use of the output of an AI model, such as an ML model, can reflect a practical application and be eligible at Step 2A, Prong Two. Here, automated real-time anomaly prediction is precisely such a downstream technical use.”

G. Page 20, para. 2-3: “The present application distinguishes these recited features from conventional systems that flag faults merely by comparing current sensor readings to fixed "normal" ranges, produce significant false positives and reactive alarms, lack preemptive anomaly prediction, and rely on human expertise that is neither scalable nor easily preserved. In other words, the claimed system is not a generic computer performing generic data operations; rather, currently amended independent claim 1 is drawn to a specific, ML-based anomaly prediction system that uses a particular combination of sensor hardware, historical sensor and event data structures, contextual labeling, correlation, ML training, and real-time deployment to achieve a technological improvement. There is no evidence of record that the particular combination of features affirmatively required by currently amended independent claim 1 is well-understood, routine, and conventional.”

Examiner respectfully disagrees.

Regarding A, “using specific processing of sensor generated digital sensor data” merely applies the abstract ideas of searching, determining, and predicting to a particular field of use.
Per MPEP 2106.05(h), “a claim directed to a judicial exception cannot be made eligible ‘simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.’” Further, the courts have described the limitation in Example vi as merely indicating a field of use or technical environment in which to apply a judicial exception (“Limiting the abstract idea of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to the electric power grid, because limiting application of the abstract idea to power-grid monitoring is simply an attempt to limit the use of the abstract idea to a particular technological environment,” Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016)). Likewise, the inclusion of digital sensors and digital sensor data does not alter the abstract ideas of searching, determining, and predicting other than limiting the type of data to which they are applied in a sensor-based control system.

Moreover, the “particular ML training and deployment pipeline to reduce false positives” is not reflected in the claims. The “train the ML model” and “automated anomaly prediction…using the trained predictive ML model” (deployment) limitations in Claim 1 are recited at a high level of generality that encompasses mere instructions to apply the exception. It is unclear how the ML model is trained and how the prediction is performed other than using training samples and the trained model, respectively. See MPEP 2106.05(f), Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017): “nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents.
The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it".”

Regarding B, Claim 1 recites “a plurality of sensors configured to generate digital sensor data,” and the process of searching, determining, and predicting is performed on digital sensor data. However, the BRI of “a plurality of sensors” and “digital sensor data” may encompass two sensors generating two points (or a short string) of digital sensor data, which is manageable for a human to process mentally (i.e., to search, determine, and predict with). The specification passage cited by Applicant regarding volume and complexity is not reflected in the claim, nor does the claim explicitly define “digital sensor data.”

Regarding C, please see A-B above. The specific way of generating training data and deploying the ML model (the “generate”, “train”, and “perform” limitations in Claim 1) merely applies the abstract idea (searching, determining, predicting) and improves the overall abstract idea (the ML model) rather than improving the technology (i.e., a practical use after the ML processing, such as how the prediction is used to resolve the anomaly; note that although Claim 4 outputs the solution, the solution itself is not implemented). As claimed, the inclusion of sensor data merely limits the field of use to a sensor-based system rather than improving the sensor technology.

Regarding D, please see A above regarding sensor data and field of use.

Regarding E, please see C above regarding improvement to the abstract idea.

Regarding F, please see C above regarding improvement to the abstract idea.

Regarding G, please see the 101 Step 2B analysis above, including the Berkheimer analysis.
As claimed, the “training” and “performing automatic prediction” limitations in Claim 1 lack details regarding how the training and prediction are performed (other than what the steps use), such that they are not drawn to a “specific, ML-based anomaly prediction system” and are well-understood, routine, and conventional.

For at least the reasons described above, the 101 rejection of Claims 1-20 is maintained.

Regarding 35 U.S.C. 103, Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CATHERINE MARIE NGUYEN whose telephone number is (571)272-6160. The examiner can normally be reached M-F 7:30 AM - 4:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ASHISH THOMAS, can be reached at (571) 272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.M.N./
Examiner, Art Unit 2114

/ASHISH THOMAS/
Supervisory Patent Examiner, Art Unit 2114

Prosecution Timeline

Nov 21, 2022
Application Filed
Sep 10, 2025
Non-Final Rejection — §101, §103
Dec 17, 2025
Response Filed
Mar 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596608
ROW MAPPING IN A MEMORY CONTROLLER
2y 5m to grant Granted Apr 07, 2026
Patent 12572442
COMPUTATIONAL PROBE AUTO-TUNING
2y 5m to grant Granted Mar 10, 2026
Patent 12566650
COMPUTING SYSTEM WITH EVENT PREDICTION MECHANISM AND METHOD OF OPERATION THEREOF
2y 5m to grant Granted Mar 03, 2026
Patent 12561232
SEEDING CONTRADICTION AS A FAST METHOD FOR GENERATING FULL-COVERAGE TEST SUITES
2y 5m to grant Granted Feb 24, 2026
Patent 12481337
METHOD FOR RESETTING BASEBOARD MANAGEMENT CONTROLLER AND POWER MANAGEMENT SYSTEM
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+50.0%)
2y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
