Prosecution Insights
Last updated: April 19, 2026
Application No. 18/194,549

ARTIFICIAL INTELLIGENCE BASED FAULT DETECTION FOR INDUSTRIAL SYSTEMS

Non-Final OA: §101, §102, §103
Filed: Mar 31, 2023
Examiner: ARJOMANDI, NOOSHA
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aitomatic, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86% — above average (547 granted / 635 resolved; +31.1% vs TC avg)
Interview Lift: +9.9% for resolved cases with interview (moderate, roughly +10%)
Avg Prosecution: 2y 10m typical timeline (15 currently pending)
Total Applications: 650 across all art units
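The headline figures in this panel follow from simple arithmetic on the examiner's career counts. A minimal sketch, assuming the dashboard derives its percentages as shown below (the TC-average formula is a guess inferred from the "+31.1% vs TC avg" delta):

```python
# Recomputing the Examiner Intelligence figures from the career counts
# shown above. The formulas are assumptions about how the dashboard
# derives its percentages, not a documented methodology.

granted = 547                      # granted cases (from the panel)
resolved = 635                     # resolved cases (from the panel)

allow_rate = granted / resolved    # career allow rate
with_interview = 0.96              # allow rate for cases with an interview
interview_lift = with_interview - allow_rate
tc_avg = allow_rate - 0.311        # implied TC average from "+31.1% vs TC avg"

print(f"Career allow rate: {allow_rate:.1%}")       # 86.1%
print(f"Interview lift:    {interview_lift:+.1%}")  # +9.9%
print(f"Implied TC avg:    {tc_avg:.1%}")           # 55.0%
```

Under these assumptions the panel's 86% and +9.9% figures reproduce exactly from the 547/635 counts.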

Statute-Specific Performance

§101: 19.4% (-20.6% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)

Tech Center averages are estimates; based on career data from 635 resolved cases.
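The per-statute figures can be cross-checked against the stated deltas. A quick arithmetic sketch, assuming each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average:

```python
# Cross-check of the statute-specific figures shown above. The deltas
# are assumed to be (examiner rate - Tech Center average); under that
# assumption, the implied TC average is recovered by subtraction.

rates = {"§101": 19.4, "§103": 44.1, "§102": 20.6, "§112": 4.8}
deltas = {"§101": -20.6, "§103": +4.1, "§102": -19.4, "§112": -35.2}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")

# Every implied TC average comes out to 40.0%, consistent with a single
# estimated baseline (the chart's Tech Center average estimate).
```

That all four statutes imply the same 40.0% baseline suggests the dashboard uses one overall TC-average estimate rather than per-statute averages.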

Office Action

Grounds: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The instant application, having application number 18/194,549 and filed on March 31, 2023, has claims 1-20 pending.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on April 17, 2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

1. Recitation of a Judicial Exception

Claim 1 recites limitations including:

A computer-implemented method for fault detection comprising: receiving time series data comprising a sequence of data points, each data point associated with a time value; identifying a data point of the time series data that represents an anomaly; providing information describing the data point representing the anomaly to a knowledge model, wherein the knowledge model is a rule-based model; providing information describing the data point representing the anomaly to a machine learning based model; executing the knowledge model to generate a first output indicating whether the data point represents a fault; executing the machine learning based model to generate a second output indicating whether the data point represents a fault; providing the first output and the second output to an ensemble model configured to combine results of the knowledge model and the machine learning based model; executing the ensemble model to determine a final output based on a combination of the first output and the second output, the final output indicating whether the data point represents a fault; and sending the final output.

These limitations describe mental processes and mathematical concepts, including the evaluation, classification, and combination of data using rule-based logic and machine learning algorithms. Methods of organizing, analyzing, and evaluating information using mathematical models are abstract ideas. See MPEP 2106.04(a); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016). Accordingly, claim 1 recites a judicial exception.

2. No Integration into a Practical Application

Under Step 2A, Prong 2, the claim does not include additional elements that integrate the abstract idea into a practical application. The claim merely uses a generic computer to receive data, perform anomaly detection, execute mathematical models, and output a result. There is no recited improvement to the functioning of a computer, no specific technological environment, and no particular machine or transformation. The recited "ensemble model," "knowledge model," and "machine learning based model" are described at a high level and operate in a generic computing environment. Merely applying an abstract idea on a computer does not amount to integration into a practical application. See Alice Corp. v. CLS Bank, 573 U.S. 208 (2014).

3. Lack of "Significantly More" (No Inventive Concept)

Under Step 2B, the claim does not include additional elements that amount to "significantly more" than the abstract idea itself. The additional elements—such as providing data to models, executing the models, and combining outputs—are routine, conventional activities performed by generic computer components. The claim does not recite any unconventional hardware, any specific computer architecture, or any improvement to model training or computational efficiency. Using machine learning models and rule-based models in an ensemble is a well-understood, routine, and conventional technique. Thus, the claim as a whole amounts to no more than an instruction to implement the abstract idea on a generic computer.

Claims 8 and 15 are rejected under the same rationale as claim 1 above.

Claim 2 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 2 recites the same abstract idea as claim 1. The claim recites the additional limitation "wherein the time series data represents sensor data collected from sensors," which further elaborates on the abstract idea and therefore does not amount to significantly more. The same rationale applies to claims 9 and 16.

Claim 3 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 3 recites the same abstract idea as claim 1. The claim recites the additional limitation "wherein identifying the data point of the time series data that represents the anomaly is performed by executing a variational autoencoder," which further elaborates on the abstract idea and therefore does not amount to significantly more. The same rationale applies to claims 10 and 17.

Claim 4 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 4 recites the same abstract idea as claim 1. The claim recites the additional limitations "wherein determining the final output by the ensemble model comprises: receiving a first measure of accuracy of the first output generated by the knowledge model; receiving a second measure of accuracy of the second output generated by the machine learning based model; and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy." The added information as presently presented further elaborates on the abstract idea, and the steps of receiving a first measure of accuracy, receiving a second measure of accuracy, and determining the final output based on the combination of the first output and the second output are considered insignificant extra-solution activity (see MPEP 2106.05(g)), recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II): (i) receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Therefore, the limitations do not amount to significantly more than the abstract idea. The same rationale applies to claims 11 and 18.

Claim 5 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 5 recites the same abstract idea as claim 1. The claim recites the additional limitation "wherein the final output is a weighted aggregate of the first output and the second output, wherein a weight of each of the first output and the second output is determined based on a measure of accuracy of the corresponding output," which further elaborates on the abstract idea and therefore does not amount to significantly more. The same rationale applies to claim 12.

Claim 6 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 6 recites the same abstract idea as claim 1. The claim recites the additional limitation "wherein determining the final output by the ensemble model comprises: responsive to determining the final output based on the first output of the knowledge model, using the final output for training of the machine learning based model," which further elaborates on the abstract idea and therefore does not amount to significantly more. The same rationale applies to claims 13 and 19.

Claim 7 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 7 recites the same abstract idea as claim 1. The claim recites the additional limitation "generating synthetic data as additional training data for the machine learning based model using the final output," which further elaborates on the abstract idea and therefore does not amount to significantly more. The same rationale applies to claims 14 and 20.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6-10, 13-17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PAUL (US 2020/0116522 A1) (hereinafter Paul).

As per claims 1, 8 and 15, Paul discloses:

receiving time series data comprising a sequence of data points, each data point associated with a time value [FIG. 2 is a figure showing a specific example in which the anomaly detection apparatus 1 according to the first embodiment creates an anomaly detection model. In the example of FIG. 2, the sensor data holder 6 supplies, at time t1, initial training data composed of normal data 1 and abnormal data 1; supplies, at time t2, training data 2 composed of normal data 2 and abnormal data 2; supplies, at time t3, training data 3 composed of normal data 3 and abnormal data 3; supplies, at time t4, training data 4 composed of normal data 4 and abnormal data 4; and incrementally supplies, at time t5, training data 5 composed of normal data 5 and abnormal data 5, to the preprocessor 2, paragraph 48];

identifying a data point of the time series data that represents an anomaly [The disclosed embodiments provide several advancements in the technological art, particularly computerized and cloud-based systems in which one device (e.g., first server 120) performs an anomaly detection process that accesses via network 110 time series and or other data stored in one or more databases 124, 144 or under the control of a second server 140 and/or user device 150, paragraph 48];

providing information describing the data point representing the anomaly to a knowledge model, wherein the knowledge model is a rule-based model [The model-group learner/updater 3 uses the initial training data to learn models created by all techniques (the knowledge model having a rule-based model), to calculate decision accuracies, paragraph 50];

providing information describing the data point representing the anomaly to a machine learning based model [Fig. 2, creating an anomaly detection model];

executing the knowledge model to generate a first output indicating whether the data point represents a fault [The model selector 4 selects the best model A2(t1), as the anomaly detection model, from among the models {A1(t1), A2(t1), A3(t1), A4(t1)} created using the unsupervised learning techniques, because of a higher average decision accuracy of these models, paragraph 51];

executing the machine learning based model to generate a second output indicating whether the data point represents a fault [The metamodel can be created utilizing majority voting, an OR rule or a rule using genetic programming. In the majority voting, if the decision result of a lot of candidate models is abnormal, test data is determined to be abnormal. In the OR rule, if the decision result of one or more candidate models is abnormal, test data is determined to be abnormal, paragraph 43];

providing the first output and the second output to an ensemble model configured to combine results of the knowledge model and the machine learning based model [a final applied model can be created in view of a plurality of candidate models of high decision accuracies, so that anomaly detection of sensor data can be performed more accurately, paragraph 107];

executing the ensemble model to determine a final output based on a combination of the first output and the second output, the final output indicating whether the data point represents a fault [the accuracy calculator 9 calculates decision accuracies of the plurality of candidate models. The model updater 10 updates the plurality of candidate models based on the decision accuracies calculated by the accuracy calculator 9 and new sensor data which has been determined to be normal or abnormal. The model updater 10 may update the plurality of candidate models based on at least either of new sensor data which has been determined to be normal or abnormal by the knowledge of an expert and new sensor data which has been determined to be normal or abnormal based on an anomaly detection model in addition to the knowledge of the expert, paragraph 36]; and

sending the final output [the applied-model group selector 22 selects applied model groups of higher decision accuracies from the candidate model groups to create a new applied model (metamodel) using the selected applied model groups and stores the new applied model in the applied model holder, Fig. 6, paragraph 102].

As per claims 2, 9 and 16, Paul discloses wherein the time series data represents sensor data collected from sensors [The sensor data may include time-series, paragraph 32].

As per claims 3, 10 and 17, Paul discloses wherein identifying the data point of the time series data that represents the anomaly is performed by executing a variational autoencoder [Sensor data from various sensors may be input in real time to the anomaly detection, paragraph 31 and Fig. 7].

As per claims 6, 13 and 19, Paul discloses wherein determining the final output by the ensemble model comprises: responsive to determining the final output based on the first output of the knowledge model, using the final output for training of the machine learning based model [the accuracy calculator 9 calculates decision accuracies of the plurality of candidate models. The model updater 10 updates the plurality of candidate models based on the decision accuracies calculated by the accuracy calculator 9 and new sensor data which has been determined to be normal or abnormal. The model updater 10 may update the plurality of candidate models based on at least either of new sensor data which has been determined to be normal or abnormal by the knowledge of an expert and new sensor data which has been determined to be normal or abnormal based on an anomaly detection model in addition to the knowledge of the expert, paragraph 36].

As per claims 7, 14 and 20, Paul discloses further comprising: generating synthetic data as additional training data for the machine learning based model using the final output [the technique selector 42 may utilize a genetic algorithm to select an optimum technique, so that the fitness becomes maximum in the case of creating a candidate model by applying each of a plurality of techniques to each data group classified by the group maker, paragraph 117].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-5, 11-12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over PAUL (US 2020/0116522 A1) (hereinafter Paul) in view of HOSHEN et al. (US 2022/0253699 A1) (hereinafter Hoshen).

As per claims 4, 11 and 18, claim 4 is rejected based on the same rationale as claim 1 above.
However, Paul does not disclose wherein determining the final output by the ensemble model comprises: receiving a first measure of accuracy of the first output generated by the knowledge model; receiving a second measure of accuracy of the second output generated by the machine learning based model; and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy.

On the other hand, Hoshen discloses wherein determining the final output by the ensemble model comprises: receiving a first measure of accuracy of the first output generated by the knowledge model; receiving a second measure of accuracy of the second output generated by the machine learning based model; and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy [a target data instance may be transformed using transformations set T(, 1), T(, 2) . . . T(, M). In some embodiments, a trained machine learning model of the present disclosure may be applied to each of the transformations of the target data instance, to predict the respective transformations applied to the target data instance. In some embodiments, the classification probability represents a likelihood of accurately predicting a transformation applied to the target data instance. In some embodiments, an aggregated value of all classification probabilities may be indicative of a normality or anomaly of a target data point. In some embodiments, the aggregate of all classification probabilities may comprise an anomaly score, paragraph 78].

Both Paul and Hoshen are in the field of endeavor of anomaly detection. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the selection of an anomaly detection model from among candidate models based on the decision accuracies of those candidate models, as taught by Paul, with the prediction of a transformation applied to a target data instance, as disclosed in Hoshen, so that anomalies that are conventionally undetectable can be correctly detected and anomaly detection accuracy can be improved.

As per claims 5 and 12, the rejection of claim 5 incorporates the rationale of claim 1 above. However, Paul does not disclose wherein the final output is a weighted aggregate of the first output and the second output, wherein a weight of each of the first output and the second output is determined based on a measure of accuracy of the corresponding output.

On the other hand, Hoshen discloses wherein the final output is a weighted aggregate of the first output and the second output, wherein a weight of each of the first output and the second output is determined based on a measure of accuracy of the corresponding output [FIGS. 3A-3D show plots of the number of auxiliary tasks vs. the anomaly detection accuracy (measured by F1) with respect to each dataset, paragraph 114, Figs. 3-4].

Both Paul and Hoshen are in the field of endeavor of anomaly detection. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the selection of an anomaly detection model from among candidate models based on the decision accuracies of those candidate models, as taught by Paul, with the prediction of a transformation applied to a target data instance, as disclosed in Hoshen, so that anomalies that are conventionally undetectable can be correctly detected and anomaly detection accuracy can be improved.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOOSHA ARJOMANDI, whose telephone number is (571) 272-9784. The examiner can normally be reached at (571) 272-9784.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Beausoliel, can be reached at (571) 272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

November 22, 2025
/NOOSHA ARJOMANDI/
Primary Examiner, Art Unit 2167
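For orientation, the claim-1 pipeline at the center of the rejections above can be sketched in miniature: a rule-based knowledge model and a machine-learning-based model each score a candidate anomaly, and an ensemble combines the two outputs weighted by accuracy (the combination recited in claims 4-5). This is a hypothetical toy assembled from the claim language quoted in the Office Action; the thresholds, scoring functions, and accuracy figures are invented and are not taken from the application's specification.

```python
# Toy sketch of the claimed fault-detection pipeline. All names,
# thresholds, and accuracy values here are illustrative assumptions.

def knowledge_model(value: float) -> float:
    """Rule-based model: flag readings outside an assumed safe band."""
    return 1.0 if value < 10.0 or value > 90.0 else 0.0

def ml_model(value: float) -> float:
    """Stand-in for a trained model: a smooth fault score in [0, 1]."""
    center, scale = 50.0, 40.0
    return min(1.0, abs(value - center) / scale)

def ensemble(first: float, second: float,
             acc_first: float = 0.8, acc_second: float = 0.9) -> float:
    """Weighted aggregate: each output weighted by its measure of
    accuracy, per the limitation recited in claims 4-5."""
    return (acc_first * first + acc_second * second) / (acc_first + acc_second)

# A data point already identified as an anomaly in the time series:
anomalous_value = 97.0
first = knowledge_model(anomalous_value)   # 1.0: outside the rule band
second = ml_model(anomalous_value)         # 1.0: far from the center
final = ensemble(first, second)            # combined fault decision
print(f"fault score: {final:.2f}")
```

The accuracy-weighted average is one common way to combine heterogeneous detectors; the dependent claims also recite feeding the final output back as training data, which this sketch omits.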

Prosecution Timeline

Mar 31, 2023
Application Filed
Nov 22, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596732: GENERATIVE ARTIFICIAL INTELLIGENCE (AI) CONSTRUCTION SPECIFICATION INTERFACE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591555: SYSTEM AND METHODS FOR LIVE DATA MIGRATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587510: SYSTEMS AND METHODS FOR MANAGED DATA TRANSFER (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580782: SYSTEMS AND METHODS FOR PROCESSING BLOCKCHAIN TRANSACTIONS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572812: GRAPH NEURAL NETWORKS FOR PARTICLE ACCELERATOR FACILITIES (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 96% (+9.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
