Prosecution Insights
Last updated: April 19, 2026
Application No. 17/694,294

Training Dataset, Training and Artificial Neural Network for the State Estimation of a Power Network

Non-Final OA — §101, §102
Filed: Mar 14, 2022
Examiner: ALHIJA, SAIF A
Art Unit: 2186
Tech Center: 2100 — Computer Architecture & Software
Assignee: Siemens Aktiengesellschaft
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 72% — above average (425 granted / 588 resolved; +17.3% vs TC avg)
Interview Lift: +18.2% — strong, measured across resolved cases with an interview
Typical Timeline: 4y 1m average prosecution; 44 applications currently pending
Career History: 632 total applications across all art units
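The card values above reduce to simple ratios over the examiner's 588 resolved cases. A minimal sketch of that arithmetic, assuming hypothetical ResolvedCase records with granted and had_interview flags (nothing on this page documents the tool's actual data model):

```python
# Minimal sketch of the examiner-stat arithmetic. The ResolvedCase fields
# are hypothetical; the tool's real data model is not shown on this page.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool          # did the application issue as a patent?
    had_interview: bool    # was an examiner interview held?

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: granted / resolved (e.g. 425 / 588 is about 72%)."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between cases with and without an interview (+18.2% here)."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)
```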

Statute-Specific Performance

§101: 24.3% (-15.7% vs TC avg)
§103: 27.3% (-12.7% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 588 resolved cases.
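These per-statute rates are presumably the same allow-rate ratio computed over the subset of resolved cases whose rejections cite each statute, with the delta taken against the Tech Center average (implied here, e.g., 24.3% + 15.7% = 40.0% for §101). A sketch under that assumption; the (statutes_cited, granted) record shape is hypothetical, not the tool's API:

```python
# Sketch of the per-statute comparison; the record shape and grouping
# are assumptions, not documented behavior of this tool.
from collections import defaultdict

def statute_allow_rates(cases):
    """cases: iterable of (statutes_cited, granted), e.g. ({"101", "102"}, False)."""
    counts = defaultdict(lambda: [0, 0])   # statute -> [granted, resolved]
    for statutes, granted in cases:
        for s in statutes:
            counts[s][0] += int(granted)
            counts[s][1] += 1
    return {s: g / n for s, (g, n) in counts.items()}

def deltas_vs_tc_average(rates, tc_avg):
    """Signed gap per statute, e.g. 0.243 - 0.400 = -0.157 for "101"."""
    return {s: rates[s] - tc_avg[s] for s in rates if s in tc_avg}
```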

Office Action

§101, §102
DETAILED ACTION

1. Claims 1-11 and 13-14 have been presented for examination. Claim 12 has been cancelled.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

PRIORITY

3. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d), specifically to application EP 21162647.8 filed 03/15/2021.

Response to Arguments

4. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/16/26 has been entered.

i) Following Applicant's amendments and arguments, the previously presented §101 rejection is MAINTAINED. Specifically, Applicant points to page 10 of the specification to argue an alleged improvement and that the claims recite a concrete application that improves the performance of a specific technology. However, as noted in the previous Office action and reiterated here, the amendments continue to merely recite training a machine model, which amounts to mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely uses the computer as a tool to perform the identified abstract idea. See MPEP 2106.05(f). Further, the step of generating a security report, in view of its broadest reasonable interpretation, encompasses a determination that can be made in the mind or with pencil and paper. As such, the §101 rejection is MAINTAINED.

ii) Applicant argues that, in the prior art reference Wang, "evaluator 320 does not determine an error corresponding to the recited first state based on the performance of the first training dataset. Rather, the evaluator tests various performance metrics of the classifier, looking for one that outperforms the applied classifier. There is no teaching in Wang equivalent to the recited error, much less a defined error limit. As clarified in the amendments presented herein, the error indicates 'the respective first state corresponds to an underrepresented event and/or scenario' in the first training dataset."

The Examiner notes that Applicant does not explain how the broadest reasonable interpretation of the claimed "error" is not read on by the citations of Wang. Specifically: "[0058] Trained classifier performance evaluator 320 may evaluate different trained models (e.g., classifiers) using field data. During an evaluation, e.g., each candidate trained classifier may be operated in parallel with an applied classifier by taking the same input data and generating the predicted data. The evaluation may also be performed using historical data generated by applied classifier performance evaluator 310. Performance metrics in terms of speed, accuracy, robustness for each trained classifier may be evaluated and the best metrics may be selected as the candidate to replace the applied classifier performance evaluator 310, for example."

The Examiner noted in the previous Office action that the first state is evaluated by the applied classifier performance evaluator to determine the "best metrics" and replace the values, which reads on the broadest reasonable interpretation of the error as claimed. Applicant then amended the error in the claim to recite "the error indicating the respective first state corresponds to an underrepresented event and/or scenario." The Examiner contends that the error is further seen in at least Wang [0057], "A model structure and parameters may be determined by different optimization algorithms guided by an optimization objective function such as empirical risk minimization or structural risk minimization"; [0066], "Various outputs are shown in FIG. 4, such as a transformer health index, instrument pre-failure, instrument drifting, loose connection, arrester pre-failure, breaker mis-operation, bad data, and unclassified anomaly alarm, to name just a few examples among many"; and [0119], "perform one or more data conditioning operations on input data and may also generate a multi-class classifier based on the conditioned data. Processor 1220 may also classify power system related data from field devices to generate state of substation system, and component, and an unclassified state, for example. Transmitter 1215 may transmit one or more messages, such as one or more alerts, based on calculations by processor 1205. For example, if processor 1205 identifies an anomaly such as an asset or sensor which has failed or is about to fail, an alert, such as a message, may be transmitted to computing device tasked with managing operation of that asset or sensor."

Specifically, the Examiner notes that the recited "risk minimization" and the various other failures, alarms, and alerts read on the claimed underrepresented event and/or scenario. See the specification of the instant application, bottom of page 10, noting that critical events are analogous to an underrepresented event and/or scenario. Applicant should consider the broadest reasonable interpretation of the claimed terms, particularly the terms "error" and "underrepresented." As such, the prior art rejection is MAINTAINED.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 1-11 and 13-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

i) In view of Step 1 of the analysis, claims 1 and 13 are directed to a process, which is a statutory category of invention. Therefore, claims 1-11 and 13-14 are directed to patent-eligible categories of invention.

ii) In view of Step 2A, Prong One, claims 1 and 13 recite the abstract idea of creating a training set for an artificial neural network, which constitutes an abstract idea based on Mental Processes (concepts performed in the human mind, or with the aid of pencil and paper) and, alternatively, Mathematical Concepts in view of the recited mathematical formulas, equations, and calculations.

As to the limitation in claim 13 of "using an artificial neural network with measurement values associated with the power network as inputs and the estimated state of the power network to be determined by the state estimation as output," this would be analogous to a person evaluating measured values of a power network system and thus falls under Mental Processes. In addition, the step could constitute Mathematical Concepts due to the calculation of the state estimation as recited.

As to the limitation in claim 13 of "using the estimated state of the power network to identify at least one application-critical case and automatically implement at least one response selected from the group consisting of: generating a security report, controlling a voltage, and implementing a corrective security constrained optimal power flow (SCOPF)," this would be analogous to a person evaluating measured values of a power network system and generating a security report, which could correspond to a mental determination and report, and thus falls under Mental Processes.

As to the limitation in claim 13 of "wherein the artificial neural network is trained with a second training dataset, said second training dataset comprising a plurality of training pairs, wherein each of the training pairs is formed by a measurement dataset and an associated state of the power network, and the respective measurement dataset comprises complex apparent powers associated with the power network," this would be analogous to a person evaluating measured values of a power network system and thus falls under Mental Processes.

As to the limitation in claim 13, and similarly recited in claim 1, of "wherein creating the second data set includes: identifying a set including at least a first training pair and an associated first measurement dataset and first state, wherein each member of set includes the associated first state having an error greater than or equal to a defined error limit compared with a training of the artificial neural network using the first training dataset, the error indicating the respective first state corresponds to an underrepresented event and/or scenario," this would be analogous to a person evaluating measured values of a power network system and comparing them to a threshold, and thus falls under Mental Processes.

As to the limitation in claim 13, and similarly recited in claim 1, of "for each member of the set; calculating a second state of the power network using a load flow calculation using a complex apparent power modified in comparison with the complex apparent power of the respective first measurement dataset; calculating a second measurement dataset from the calculated second state using a measurement model of the power network," this would be analogous to a person calculating values of a power network system based on given measurements, and thus falls under Mental Processes. In addition, the steps could constitute Mathematical Concepts due to the calculation of the state of the power network as recited.

As to the limitation in claim 13, and similarly recited in claim 1, of "creating the second training dataset from the first training dataset by adding a second training pair formed from the second measurement dataset and the associated second state," this would be analogous to a person evaluating and assembling data values of a power network system based on given measurements, and thus falls under Mental Processes.

As to the limitation in claim 1 of "generating the state estimation using the artificial neural network," this would likewise be analogous to a person evaluating and assembling data values of a power network system based on given measurements, and thus falls under Mental Processes.

Dependent claims 2-11 and 14 further narrow the abstract ideas identified in the independent claims.

iii) In view of Step 2A, Prong Two, the judicial exception is not integrated into a practical application. As to the limitations of claim 1, "using the second training dataset to train an artificial neural network to estimate the state of the power network" and "wherein the artificial neural network is trained with a second training dataset," the claims merely use a computer device as a tool to perform the abstract idea (MPEP 2106.05(f)). Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a mental process), does not integrate a judicial exception into a practical application (MPEP 2106.05(f)(2)). Therefore, the claim as a whole does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered alone or in combination, do not amount to significantly more than the judicial exception, and the judicial exception is not integrated into a practical application. Dependent claims 2-11 and 14 further narrow the abstract ideas identified in the independent claims and do not introduce further additional elements for consideration beyond those addressed above.

iv) In view of Step 2B, claims 1 and 13 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As to the limitations of claim 1, "using the second training dataset to train an artificial neural network to estimate the state of the power network" and "wherein the artificial neural network is trained with a second training dataset," the claims merely use a computer device as a tool to perform the abstract idea (MPEP 2106.05(f)), which, as set forth above, does not integrate a judicial exception into a practical application (MPEP 2106.05(f)(2)). Therefore, the claim as a whole does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered alone or in combination, do not amount to significantly more than the judicial exception. As stated in Section I.B. of the December 16, 2014 §101 Examination Guidelines, "[t]o be patent-eligible, a claim that is directed to a judicial exception must include additional features to ensure that the claim describes a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception."

v) The dependent claims include the same abstract ideas as recited in the independent claims and merely incorporate additional details that narrow the abstract ideas without adding significantly more. Dependent claims 2 and 9 further define the type of calculations of claim 1; dependent claims 3-7, 10, and 11 further define the type of calculations and the data used for calculation of claim 1; dependent claim 8 further defines the training datasets, the type of calculations, and the data used for calculation of claim 1; and dependent claim 14 further defines the type of calculations and the data used for calculation of claim 13. Each merely narrows the abstract idea identified as a mental process and/or mathematical concepts. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. Claims 1-11 and 13-14 are rejected under 35 U.S.C. 102(a)(2) as being clearly anticipated by Wang et al., U.S. Patent Publication No. 20200293032 (hereafter Wang).

Regarding Claim 1: The reference discloses a method for state estimation of a power network from a provided first training dataset, including creating a second training dataset comprising a plurality of training pairs, wherein each of the training pairs is formed by a measurement dataset and an associated state of the power network, and the respective measurement dataset comprises complex apparent powers associated with the power network, the method comprising:

identifying a set including at least a first training pair and an associated first measurement dataset and first state ("[0057] Classifier trainer 318 may split data into training, validation, and testing datasets. Classifier trainer 318 may also access a machine learning algorithm repository (e.g., in data store 324) to select a specific machine learning algorithm, such as extreme learning machine, support vector machine, K nearest neighbor, convolutional neural network, similarity learning, decision trees, linear discriminant analysis, naive Bayes, logistic regression and linear regression, random forests, and/or ensembles of classifiers, to name just a few examples among many. Such algorithms may learn/infer a function (e.g., defined by a model structure and/or parameters) which maps an input to an output based on example input-output pairs in a training and validation dataset, which may be used for mapping new data input." This section reads on the claimed training pair, measurement dataset, and first state.);

wherein each member of set includes the associated first state having an error greater than or equal to a defined error limit compared with a training of the artificial neural network using the first training dataset, the error indicating the respective first state corresponds to an underrepresented event and/or scenario (Wang [0058], quoted in the Response to Arguments above: the first state is evaluated by the applied classifier performance evaluator to determine the "best metrics" and replace the values, which reads on an error as claimed. See also Wang [0057], [0066], and [0119], quoted above: the recited "risk minimization" and the various other failures, alarms, and alerts read on the claimed underrepresented event and/or scenario, and the specification of the instant application, bottom of page 10, notes that critical events are analogous to an underrepresented event and/or scenario.);

for each member of the set, calculating a second state of the power network using a load flow calculation ("[0054] A training data generator 312 may collect and select useful data samples associated with health, mis-operation, degradation, failure, and/or a pre-failure state for a substation system and/or component. Such data may be collected from simulation results based on user-specified or system and component level failure modes which may generate the above-mentioned system state, for example. A power system simulator such as Power Systems Computer Aided Design (PSCAD), Positive Sequence Load Flow (PSLF), Transient Simulation (TSAT), and/or Power System Simulation for Engineering (PSS/E) may be part of training data generator 312, for example.");

using a complex apparent power modified in comparison with the complex apparent power of the respective first measurement dataset ("[0050] Detector 302 may sense, detect, and/or measure power system component conditions from data sources, such as from one or more PMUs, a frequency monitoring network (FNET), a frequency disturbance recorder, an intelligent equipment device, a digital fault recorder at a subsecond rate (1-60 ms), or from remote terminal units (RTUs), or digital control systems on the order of 1-10 seconds, for example. Such data may comprise information relating to operating voltage(s) (e.g., single phase, multi-phase), load current(s) (e.g., single phase, multi-phase), apparent power and load factor…" Also Wang [0058], quoted above: the applied classifier performance evaluator is used to determine the "best metrics" and replace the values, which reads on the comparison as claimed.);

calculating a second measurement dataset from the calculated second state using a measurement model of the power network (Wang [0057], quoted above: this section reads on the claimed second measurement dataset as the prior art maps new data input as it updates.);

creating the second training dataset from the first training dataset by adding a second training pair formed from the second measurement dataset and the associated second state (Wang [0057], quoted above, under the same mapping.);

using the second training dataset to train an artificial neural network to estimate the state of the power network ("[0080] FIG. 6 illustrates an embodiment 600 of a neural network for determining a classifier for an EFSMS. For example, embodiment 600 includes various input layer nodes (e.g., listed at input parameter nodes 610-619), various hidden layer nodes (e.g., listed at hidden layer nodes 640-649), and an Artificial neural network (ANN) output node 660." The Examiner notes that the EFSMS (Extremely Fast Substation Monitoring System) represents a power network.); and

generating the state estimation using the artificial neural network (Wang [0080], quoted above; the EFSMS represents a power network.).

Regarding Claim 2: The reference discloses the method as claimed in claim 1, further comprising generating the training pairs of the first training dataset using a load flow calculation (Wang [0054], quoted above; the PSLF simulator reads on load flow calculations. See also "[0102] A measurement device 1120 shown in FIG. 11 may obtain, monitor or facilitate the determination of electrical characteristics associated with the power grid system (e.g., the electrical power system), which may comprise, for example, power flows, voltage, current, harmonic distortion, frequency, real and reactive power, power factor, fault current, and phase angles.").

Regarding Claim 3: The reference discloses the method as claimed in claim 2, further comprising generating the training pairs from synthetically generated complex apparent powers using a respective load flow calculation (Wang [0054] and [0102], quoted above), wherein the complex apparent powers are generated from historical and/or synthetic time series for generation and consumption ([0049], "Off-line modeling module 350 may perform training of a data collection, data conditioning and augmentation, classifier model setup, residual measurement, and/or evaluation, for example," which teaches that the data can be generated from off-line data sources and reads on historical and/or synthetic time series. See also "[0056] Training data augmenter 316 may increase a training data size feeding to a machine learning based classifier to avoid overfitting and improve a generalization capability. Data warping, slicing, jittering, scaling, down sampling, over sampling on an original data space may be used, for example. Alternatively, a time series dataset may be converted to image or symbols, and then different image transformation approaches may be performed, such as rotation, flip, color variation, and/or noise, to name just a few examples among many." See also Wang [0050], quoted above.).

Regarding Claim 4: The reference discloses the method as claimed in claim 1, further comprising modifying the complex apparent power of the first measurement dataset using an addition of normally distributed random numbers (Wang [0057], quoted above; the recited random forests methodology would read on the claimed "normally distributed random numbers.").

Regarding Claim 5: The reference discloses the method as claimed in claim 1, further comprising modifying the complex apparent power of the first measurement dataset using a scaling (Wang [0056], quoted above; the reference recites scaling as an aspect of the training data.).

Regarding Claim 6: The reference discloses the method as claimed in claim 1, further comprising modifying one of the complex apparent powers of the first measurement dataset if its amount is greater than or equal to a defined threshold value (Wang [0058], quoted above; the applied classifier performance evaluator is used to determine the "best metrics" and replace the values, which reads on the comparison as claimed. Also, with respect to testing of threshold values, [0062], "Report generator 326 may also generate alarms indicating an abnormal condition (e.g., fault, power system parameter outside of predefined threshold parameter value or range of parameter values, etc.), using a visual, audio, and/or vibrational indicator, e.g., which is detectable via other senses (e.g., touch).").

Regarding Claim 7: The reference discloses the method as claimed in claim 1, further comprising determining errors associated with the states of the first training dataset by training the artificial neural network with the first training dataset (Wang [0058] and [0080], quoted above; the first state is evaluated by the applied classifier performance evaluator to determine the "best metrics" and replace the values, which reads on the claimed errors.).

Regarding Claim 8: The reference discloses the method as claimed in claim 7, characterized in that the first training dataset is divided into two partial training datasets in order to determine the errors, wherein the first partial training dataset is used to train the artificial neural network, and the second partial training dataset is used to evaluate the states determined by the training ("[0057] Classifier trainer 318 may split data into training, validation, and testing datasets…" This reads on the two partial datasets, training and validation.).

Regarding Claim 9: The reference discloses the method as claimed in claim 1, further comprising forming the complex apparent powers using active powers and reactive powers associated with the network nodes (Wang [0102], quoted above; the recited real and reactive powers read on the claimed active/reactive powers. Also Wang [0080], quoted above; the EFSMS represents a power network and its various types of nodes.).

Regarding Claim 10: The reference discloses the method as claimed in claim 1, further comprising forming the first measurement dataset and/or second measurement dataset using voltages, currents, active powers, and/or reactive powers associated with the network nodes and/or with lines of the power network (Wang [0102] and [0080], quoted above, under the same mapping.).

Regarding Claim 11: The reference discloses the method as claimed in claim 1, further comprising forming the state of the power network using voltages and angles on one or more network nodes of the power network ("[0065] FIG. 4 illustrates an embodiment 400 a system diagram of a EFSMS 410 and corresponding inputs 405 and outputs 415 according to an embodiment. As illustrated, various inputs may include PMU data (30-60 Hz), SCADA data (e.g., at 2-4 seconds), weather data, DGA data, and PD monitor data, for example. PMU data may include three phase current magnitude, three phase current phase angle, three phase voltage magnitude, three phase voltage phase angle, frequency, and frequency delta, for example. SCADA data may include voltage magnitude, current magnitude, transformer (Xfmr) tap position, digital inputs (e.g., circuit breaker (CB) status), and digital outputs (e.g., trips/alarms), for example." Also Wang [0080], quoted above; the EFSMS represents a power network and its various types of nodes.).

Regarding Claim 13: The reference discloses a method for the state estimation of a power network, the method comprising:

using an artificial neural network with measurement values associated with the power network as inputs and the estimated state of the power network to be determined by the state estimation as output (Wang [0080], quoted above; the EFSMS represents a power network.);

and using the estimated state of the power network to identify at least one application-critical case and automatically implement at least one response selected from the group consisting of: generating a security report ("[0062] Report generator 326 may generate reports relating to status information relating to the power system component(s), on command (e.g., from a user). Report generator 326 may also generate reports automatically in response to detected event(s), or periodically, wherein the report may be generated and provided (e.g., transmitted) to a desired destination (e.g., a destination address such as an email address of an operator, etc.)." See also the alarm-generation portion of [0062] quoted at claim 6 above.), controlling a voltage ([0104], "SCADA component 1110 may also allow operators at a central control center to perform or facilitate management of energy flow in the power grid system. For example, operators may use a SCADA component (e.g., using a computer such as a laptop or desktop) to facilitate performance of certain tasks such opening or closing circuit breakers, or other switching operations which might divert the flow of electricity."), and implementing a corrective security constrained optimal power flow (SCOPF);

wherein the artificial neural network is trained with a second training dataset, said second training dataset comprising a plurality of training pairs, wherein each of the training pairs is formed by a measurement dataset and an associated state of the power network, and the respective measurement dataset comprises complex apparent powers associated with the power network (Wang [0057] and [0050], quoted above: these sections read on the claimed training pairs, measurement dataset, and complex apparent powers.);

wherein creating the second data set includes: determining a first training pair and an associated first measurement dataset and first state (Wang [0057], quoted above: this section reads on the claimed training pair, measurement dataset, and first state.);

the first state having an error greater than or equal to a defined error limit compared with a training of the artificial neural network using the first training dataset, the error indicating the respective first state corresponds to an underrepresented event and/or scenario (Wang [0058], quoted above: the first state is evaluated by the applied classifier performance evaluator to determine the "best metrics" and replace the values, which reads on an error as claimed. See also Wang [0057], [0066], and [0119], quoted above: the recited "risk minimization" and the various other failures, alarms, and alerts read on the claimed underrepresented event and/or scenario, and the specification of the instant application, bottom of page 10, notes that critical events are analogous to an underrepresented event and/or scenario.);

calculating a second state of the power network using a load flow calculation (Wang [0054], quoted above);

using a complex apparent power modified in comparison with the complex apparent power of the first measurement dataset (Wang [0050] and [0058], quoted above; the applied classifier performance evaluator is used to determine the "best metrics" and replace the values, which reads on the comparison as claimed.);

calculating the second measurement dataset from the calculated second state using a measurement model of the power network (Wang [0057], quoted above: this section reads on the claimed second measurement dataset as the prior art maps new data input as it updates.); and

creating the second training dataset from the first training dataset by adding a second training pair formed from the second measurement dataset and the associated second state (Wang [0057], quoted above, under the same mapping.).

Regarding Claim 14: The reference discloses the method as claimed in claim 13, further comprising using measured voltages, currents, active powers, and/or reactive powers associated with network nodes and/or with lines of the power network as measurement values (Wang [0102] and [0080], quoted above; the recited real and reactive powers read on the claimed active/reactive powers, and the EFSMS represents a power network and its various types of nodes.).

Conclusion

7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

i) U.S. Patent Publication 20200252461, which teaches: "[0034] Temporal discrete event analytics system 200 analyzes the discrete categorical event sequences collected from a complex system, such as a modern power or chemical plant."

ii) U.S. Patent 5625751, which teaches, per the Abstract: "Analysis and evaluation of outage effects on the dynamic security of power systems is made with a neural network using composite contingency severity indices. A preferably small number of indices describes the power system characteristics immediately post-contingency. These indices are then used as classifiers of the safety of the power system. Using the values of the severity indices, an artificial neural network distinguishes between safe, stable contingencies and potentially unstable contingencies."

iii) Salazar, Harold, Ramón Gallego, and Rubén Romero, "Artificial neural networks and clustering techniques applied in the reconfiguration of distribution systems," IEEE Transactions on Power Delivery 21.3 (2006): 1735-1742, which teaches, per the Abstract: "In this context, clustering techniques to determine the best training set for a single neural network with generalization ability are also presented. The proposed methodology was employed for solving two electrical systems and presented good results."

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Saif A. Alhija, whose telephone number is (571) 272-8635. The examiner can normally be reached M-F, 10:00-6:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Renee Chavez, can be reached at (571) 270-1104. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Informal or draft communications (please label PROPOSED or DRAFT) may additionally be sent to the Examiner's fax number, (571) 273-8635.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

SAA
/SAIF A ALHIJA/
Primary Examiner, Art Unit 2188

Prosecution Timeline

Mar 14, 2022 — Application Filed
Jun 04, 2025 — Non-Final Rejection — §101, §102
Aug 06, 2025 — Response Filed
Oct 15, 2025 — Final Rejection — §101, §102
Dec 12, 2025 — Response after Non-Final Action
Feb 16, 2026 — Request for Continued Examination
Feb 24, 2026 — Response after Non-Final Action
Mar 06, 2026 — Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602570 — MACHINE LEARNING DRIVEN DISPERSION CURVE PICKING (2y 5m to grant; granted Apr 14, 2026)
Patent 12602525 — ENHANCED TECHNIQUES FOR ANALYZING INDUCTION MOTORS (2y 5m to grant; granted Apr 14, 2026)
Patent 12596203 — Layering For Geomodeling (2y 5m to grant; granted Apr 07, 2026)
Patent 12579345 — OPTIMIZATION OF A DESIGN USING A PHYSICS SOLVER INTEGRATED WITH A NEURAL NETWORK (2y 5m to grant; granted Mar 17, 2026)
Patent 12578501 — METHODS AND SYSTEMS FOR SUBSURFACE MODELING EMPLOYING ENSEMBLE MACHINE LEARNING PREDICTION TRAINED WITH DATA DERIVED FROM AT LEAST ONE EXTERNAL MODEL (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 90% (+18.2%)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 588 resolved cases by this examiner. Grant probability derived from career allow rate.
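The "With Interview" figure matches simply adding the examiner's +18.2% interview lift to the 72% base rate and capping at 100%; whether the tool actually uses this additive model is an assumption.

```python
# Sketch of the implied projection arithmetic; the additive model is an
# assumption, not a documented formula of this tool.
def with_interview(base_probability: float, lift: float) -> float:
    return min(base_probability + lift, 1.0)

print(f"{with_interview(0.72, 0.182):.0%}")  # -> 90%
```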
