Prosecution Insights
Last updated: April 19, 2026
Application No. 17/981,241

COLLABORATIVE RESILIENT GRID RESPONSE FRAMEWORK CONSIDERING DER UNCERTAINTIES AND DYNAMICS

Non-Final OA — §101, §103
Filed: Nov 04, 2022
Examiner: TRAN, QUOC A
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hitachi, Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (590 granted / 735 resolved; +25.3% vs TC avg)
Interview Lift: +29.4% — strong, measured across resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 21 applications currently pending
Career History: 756 total applications across all art units

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 735 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a Non-Final Office Action in response to the patent application filed 11/04/2022. Claims 1-18 are pending. Claims 1, 7 and 13 are independent. In addition, in the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

A signed and dated copy of applicant's IDS, filed 02/07/2023, is attached to this Office Action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 fail to recite statutory subject matter, as defined in 35 U.S.C. 101, because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: YES. The claims recite a process, machine, manufacture, or composition of matter —
for "reducing a feeder system model" by node cell segmentation on the feeder system according to system topology information and system operation characteristics to "generate a node cell" segmented distribution grid; "constructing observational data" from systemwide status information "aggregated" from nodes identified from the node cell segmented distribution grid to meet a "reinforcement learning (RL) policy network" input requirement; "training an RL policy" framework to "generate control actions" for controllable components of the system; and "executing the RL policy framework" to generate control actions for controllable nodes in the node cell segmented distribution grid. The claims therefore fall into one of the four categories of patent-eligible subject matter (process, machine, manufacture, or composition of matter).

Step 2A, Prong One (does a claim recite a judicial exception?): The claims recite the limitations above — reducing a feeder system model by node cell segmentation; constructing observational data to meet an RL policy network input requirement; training an RL policy framework to generate control actions for controllable components of the system; and executing the RL policy framework to generate control actions for controllable nodes in the node cell segmented distribution grid. These limitations recite mental processes and mathematical concepts (mathematical calculations), since reducing a feeder system model by node cell segmentation according to system topology information and system operation characteristics requires constructing observational data
from the node cell segmented distribution grid to meet a reinforcement learning (RL) policy network input requirement; that the training of the RL policy framework generates control actions "for controllable components of the system"; and that "executing the RL policy framework" generates control actions for controllable nodes in the node cell segmented distribution grid.

This may be done in a way that, when information or an execution instruction is received by an API unit, it may be communicated to one or more other units. In some instances, a logic unit may be configured to control the information flow among the units and direct the services provided by the API unit(s), such as an input unit and/or output unit. For example, the flows of one or more processes or implementations may be controlled by the logic unit alone or in conjunction with the API unit(s). The input unit may be configured to obtain input for the calculation results from the "Observation Data Construction Function," the "RL Training Process," and the "RL Policy Network," as described in Paragraphs 80-96 and 97-101, which require mathematical calculation(s) — then "apply it" to generate control actions for controllable nodes in the node cell segmented distribution grid [see US PGPUB 20240160801 at Paragraphs 9-11, 38-41, 80-96 and 97-101 for the above interpretations]. Moreover, the claims recite only the idea of a solution or outcome — i.e., information "aggregated" from nodes identified from the node cell segmented distribution grid to meet an RL policy network input requirement; training an RL policy framework to generate control actions for controllable components of the system; and executing the RL policy framework to generate control actions for controllable nodes in the node cell segmented distribution grid ["apply it"].
Step 2A, Prong Two (do the claims recite additional elements that integrate the judicial exception into a practical application?): The claims recite additional limitations such as a "feeder system / computer-readable medium and apparatus/processor" for reducing a feeder system model, generating a node cell, and constructing observational data from systemwide status information aggregated from nodes identified from the node cell segmented distribution grid to meet an RL policy network input requirement; training an RL policy framework to generate control actions for controllable components of the system; and executing the RL policy framework to generate control actions for controllable nodes in the node cell segmented distribution grid. These limitations recite only generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not integrate the judicial exception into a practical application (MPEP 2106.04(d), 2106.05(f)).

Step 2B (does a claim amount to significantly more?): The claims recite additional limitations such as a "feeder system, computer-readable medium and apparatus/processor" for reducing a feeder system model. These limitations recite only generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not amount to significantly more than the abstract idea itself (MPEP 2106.05, 2106.04(d) and 2106.05(f)).
As to dependent claims 2-6, 8-12 and 14-18, these further recite additional limitations such as an electrical-formulated grid cell, autonomous microgrid, topologically-formulated community, or single-element node-cell; grouping system components according to node cell types; system fault information; distributed energy resource (DER) and directly-controllable components status as aggregated from the systemwide status information; controllable nodes in the node cell segmented distribution grid; a power flow simulation based on the control actions input and the systemwide status information; a reward or penalty for the RL policy framework; and updating the RL policy framework based on the corresponding reward or penalty. These limitations amount only to mere instructions to implement the abstract idea, do not include elements that amount to significantly more than the abstract idea, and are rejected under the same rationale. Accordingly, claims 1-18 fail to recite statutory subject matter as defined in 35 U.S.C. 101.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C.
103 as being unpatentable over Lee et al. (US 20230074995 A1, filed 07/08/2022) [hereinafter "Lee"], in view of Donti et al., NPL ("Machine Learning for Sustainable Energy Systems," published 08/04/2021, 29 pages) [hereinafter "Donti"].

Independent claim 1 — Lee teaches: A method, comprising: constructing observational data from systemwide status information aggregated from nodes identified from the node cell segmented distribution grid to meet a reinforcement learning (RL) policy network input requirement; training an RL policy framework to generate control actions for controllable components of the system; and executing the RL policy framework to generate control actions for controllable nodes in the node cell segmented distribution grid (Lee, Paragraphs 1, 5 and 66, Fig. 4, and the Abstract, describing a method for generating a graph of a power distribution system by acquiring observations via measurement signals associated with respective nodes of the power distribution system. The graph uses a control policy trained by reinforcement learning to effect a change and output control actions for one or more statuses of controllable grid assets. Controlling a power distribution system having a number of nodes and controllable grid assets associated with at least some of the nodes includes acquiring observations via measurement signals associated with respective nodes and generating a graph representation of a system state based on the observations and topological information of the power distribution system. The topological information is used to determine edges defining connections between nodes. The observations are used to determine nodal features of respective nodes, which are indicative of a measured electrical quantity and a status of controllable grid assets associated with the respective node.
The graph representation is processed using a reinforcement-learned control policy to output a control action for effecting a change of status of one or more of the controllable grid assets, to regulate voltage and reactive power flow in the power distribution system based on a volt-var optimization objective.)

[Image reproduced in the Office Action omitted.]

It is noted that Lee discloses a graph representation processed using a reinforcement-learned control policy to output a control action for effecting a change of status of one or more of the controllable grid assets, to regulate voltage and reactive power flow in the power distribution system based on a volt-var optimization objective. However, Lee does not expressly teach — but the combination of Lee and Donti teaches — reducing a feeder system model by node cell segmentation on the feeder system according to system topology information and system operation characteristics to generate a node cell segmented distribution grid (Donti, pages 724-725, sections 2.3.1 through 2.3.5, and page 728, paragraphs 1-3, describing support vector machines (SVMs), another type of classical ML model based on a linear model class (i.e., the prediction is a linear function of the input) or a nonlinear extension known as a kernel hypothesis class. These methods typically use a type of loss function called a (regularized) hinge loss, which gives them the geometric property of being a so-called max-margin classifier — i.e., reducing a feeder system model by node cell segmentation on the feeder system according to system topology information and system operation characteristics to generate a node cell segmented distribution grid.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Lee's graph representation — processed using a reinforcement-learned control policy to output a control action for effecting a change of status of one or more of the controllable grid assets, to regulate voltage and reactive power flow in the power distribution system based on a volt-var optimization objective — to include reducing a feeder system model by node cell segmentation on the feeder system according to system topology information and system operation characteristics to generate a node cell segmented distribution grid, as taught by Donti; such that controlling a power distribution system having a number of nodes and controllable grid assets associated with at least some of the nodes includes acquiring observations via measurement signals associated with respective nodes and generating a graph representation of a system state based on the observations and topological information of the power distribution system ... optimally dispatching controllable grid assets or actuators of a power distribution system to maintain the voltage profile at the nodes as well as reduce power losses across the power distribution system [Lee, Abstract and Paragraph 3]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.

Claim 2 — Lee and Donti further teach: wherein the nodes comprise one or more of an electrical-formulated grid cell, autonomous microgrid, topologically-formulated community, or single-element node-cell (Lee, Paragraphs 34-39, describing an autonomous microgrid and topologically-formulated community).
Claim 3 — Lee and Donti further teach: wherein the reducing the feeder system model by node cell segmentation comprises grouping system components according to a plurality of node cell types (Donti, pages 724-725, sections 2.3.1 through 2.3.5, and page 728, paragraphs 1-3, further in view of Donti page 721, paragraph 4 — i.e., reducing the feeder system model by node cell segmentation comprises grouping system components according to a plurality of node cell types).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Lee's graph representation — processed using a reinforcement-learned control policy to output a control action for effecting a change of status of one or more of the controllable grid assets, to regulate voltage and reactive power flow in the power distribution system based on a volt-var optimization objective — to include grouping system components according to a plurality of node cell types, as taught by Donti; such that controlling a power distribution system having a number of nodes and controllable grid assets associated with at least some of the nodes includes acquiring observations via measurement signals associated with respective nodes and generating a graph representation of a system state based on the observations and topological information of the power distribution system ... optimally dispatching controllable grid assets or actuators of a power distribution system to maintain the voltage profile at the nodes as well as reduce power losses across the power distribution system [Lee, Abstract and Paragraph 3]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.
Claim 4 — Lee and Donti further teach: wherein the observational data is constructed from system fault information, distributed energy resource (DER) cluster aggregation information, node cell aggregation information, and directly-controllable components status as aggregated from the systemwide status information (Lee, Paragraphs 1, 5, 30-39 and 66, and Figs. 2 and 4, describing observational data so constructed).

Claim 5 — Lee and Donti further teach: wherein the training the RL policy framework comprises: generating, from input of the systemwide status information, the control actions for the controllable nodes in the node cell segmented distribution grid; executing a power flow simulation based on the control actions input and the systemwide status information; generating a reward or penalty for the RL policy framework based on output from the power flow simulation; and updating the RL policy framework based on the corresponding reward or penalty (Lee, Paragraphs 1, 5, 30-39 and 66-68, describing these training steps).
Claim 6 — Lee and Donti further teach: wherein the RL policy framework is deployed on a distribution management system (DMS) or an energy management application configured to restore grid service of a managed grid in response to an interruptive event (Lee, Paragraphs 66-70, describing the RL policy framework deployed on a distribution management system (DMS) in response to an interruptive event).

Claims 7-18 respectively incorporate subject matter similar to that of claims 1-6 cited above and are rejected on the same grounds.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Wang et al. (US 20210133376 A1, filed 11/04/2020) relates to autonomous parameter calibration for a model of an electric power system, including: inputting electric measurements; simulating the model with a set of parameters to generate a first simulated response; identifying a first and a second parameter in the set of parameters, the first parameter being responsible for a deviation of the first simulated response from the electric measurements while the second parameter is not; generating an action corresponding to the first parameter by a DRL agent based on the deviation; modifying the first parameter by the generated action while leaving the second parameter unmodified; simulating the model again with the set of parameters, including the modified first parameter and the unmodified second parameter, to generate a second simulated response; evaluating a fitting error between the second simulated response and the electric measurements; and terminating the parameter calibration when the fitting error falls below a predetermined threshold [Abstract].
Narasimha et al. (US 20230093673 A1, filed 09/23/2021) relates to the field of wireless network management, including reinforcement learning (RL) and graph neural network (GNN)-based resource management for a wireless access network, wherein a computing node to implement an RL management entity in an NG wireless network includes a NIC and processing circuitry coupled to the NIC. The processing circuitry is configured to generate a plurality of network measurements for a corresponding plurality of network functions. The functions are configured as a plurality of ML models forming a multi-level hierarchy. Control signaling from an ML model of the plurality is decoded, the ML model being at a predetermined level (e.g., a lowest level) in the hierarchy. The control signaling is responsive to a corresponding network measurement and at least second control signaling from a second ML model at a level higher than the predetermined level. A plurality of reward functions is generated for training the ML models, based on the control signaling from the ML model at the predetermined level in the multi-level hierarchy [Abstract and Paragraph 1].

Yeh et al. (US 20220014963 A1, filed 09/24/2021) relates to multi-access traffic management in edge computing environments and, in particular, artificial intelligence (AI) and/or machine learning (ML) techniques for multi-access traffic management. A scalable AI/ML architecture for multi-access traffic management is provided. Reinforcement learning (RL) and/or deep RL (DRL) approaches that learn policies and/or parameters for traffic management, and/or for distributing multi-access traffic through interacting with the environment, are also provided. Deep contextual bandit RL techniques for intelligent traffic management for edge networks are also provided [Abstract].
Zhao et al., NPL ("Learning Sequential Distribution System Restoration via Graph-Reinforcement Learning," published 2021 by IEEE, 11 pages) relates to a distribution service restoration algorithm that, as a fundamental resilience paradigm for system operators, provides an optimally coordinated, resilient solution to enhance restoration performance. The restoration problem is formulated to coordinate distribution generators and controllable switches optimally. A model-based control scheme is usually designed to solve this problem, relying on a precise model and resulting in low scalability. To tackle these limitations, this work proposes a graph-reinforcement learning framework for the restoration problem. The authors link the power system topology with a graph convolutional network, which captures the complex mechanism of network restoration in power networks and understands the mutual interactions among controllable devices. Latent features over graphical power networks produced by graph convolutional layers are exploited to learn the control policy for network restoration using deep reinforcement learning. Solution scalability is guaranteed by modeling distributed generators as agents in a multi-agent environment and a proper pre-training paradigm [Abstract].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN, whose telephone number is (571) 272-8664. The examiner can normally be reached Monday-Friday, 9am-5pm MT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at 571-272-4128.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUOC A TRAN/
Primary Examiner, Art Unit 2145
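For orientation, the training loop recited in claim 5 — generate control actions from systemwide status, run a power flow simulation on those actions, convert the simulation output into a reward or penalty, and update the policy accordingly — can be sketched as a toy loop. This is a minimal illustrative sketch only, not the applicant's or Lee's implementation: the linear "policy," the deviation-based "simulation," and every function name here are assumptions made for illustration.

```python
# Illustrative toy of the claim-5 training loop (all names hypothetical):
# 1) generate control actions from systemwide status via a policy,
# 2) run a power flow "simulation" on those actions,
# 3) turn the simulation output into a reward or penalty,
# 4) update the policy from that reward.
import random

def policy(status, weights):
    # Toy linear stand-in for the RL policy network: one action per node.
    return [w * s for w, s in zip(weights, status)]

def power_flow_simulation(actions, status):
    # Stand-in for a real power flow solver: total "voltage deviation".
    return sum(abs(a - s) for a, s in zip(actions, status))

def reward_fn(deviation):
    # Smaller deviation earns a larger reward; deviation itself is the penalty.
    return -deviation

def train(num_episodes=100, n_nodes=4, lr=0.05, eps=1e-3, seed=0):
    rng = random.Random(seed)
    weights = [rng.uniform(-1.0, 1.0) for _ in range(n_nodes)]
    for _ in range(num_episodes):
        # Systemwide status for this episode (per-unit voltages near 1.0).
        status = [rng.uniform(0.9, 1.1) for _ in range(n_nodes)]
        actions = policy(status, weights)                           # step 1
        reward = reward_fn(power_flow_simulation(actions, status))  # steps 2-3
        # Step 4: crude finite-difference update toward higher reward,
        # computed against a fixed baseline before any weight changes.
        grads = []
        for i in range(n_nodes):
            bumped = list(weights)
            bumped[i] += eps
            r2 = reward_fn(power_flow_simulation(policy(status, bumped), status))
            grads.append((r2 - reward) / eps)
        for i in range(n_nodes):
            weights[i] += lr * grads[i]
    return weights
```

Under these toy assumptions the weights drift toward 1.0 (actions matching status, i.e., zero deviation); a real system would replace the linear policy with the claimed RL policy network and the deviation function with an actual power flow simulation.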

Prosecution Timeline

Nov 04, 2022
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586003 — Method and Apparatus for Generating Operator (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585951 — Method and Electronic Device for Generating Optimal Neural Network (NN) Model (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572772 — Scalable Digital Twin Service System and Method (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561617 — Information Processing Apparatus, Information Processing Method, and Storage Medium (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561610 — Method and Apparatus for Presenting Candidate Character String, and Method and Apparatus for Training Discriminative Model (granted Feb 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner (based on the 5 most recent grants).


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+29.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 735 resolved cases by this examiner; grant probability derived from career allow rate.
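The headline projection figures appear to be simple derivations from the examiner's stated career counts (590 granted of 735 resolved). A sketch of one plausible reading follows; the dashboard's exact formula is not disclosed, so the additive interview lift and the 99% cap are assumptions made for illustration.

```python
# Career allow rate from the stated counts: 590 granted of 735 resolved.
granted, resolved = 590, 735
allow_rate = granted / resolved  # ~0.803, shown on the dashboard rounded to 80%

# Assumed reading of "99% With Interview (+29.4%)": career allow rate plus
# the interview lift in percentage points, capped at 99%.
interview_lift = 0.294
with_interview = min(0.99, allow_rate + interview_lift)

print(f"allow rate: {allow_rate:.1%}")
print(f"with interview: {with_interview:.0%}")
```

Since 80.3% + 29.4 points exceeds 100%, the displayed 99% only falls out of this reading with a cap; the true model may instead condition on interview-case outcomes.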
