Prosecution Insights
Last updated: April 19, 2026
Application No. 18/993,334

INCIDENT RESPONSE SYSTEM AND INCIDENT RESPONSE METHOD

Non-Final OA (§101, §102, §103)
Filed: Jan 10, 2025
Examiner: BOROWSKI, MICHAEL
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hitachi, Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 12 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical Timeline: 3y 0m average prosecution
Career History: 67 total applications across all art units; 55 currently pending

Statute-Specific Performance

§101: 57.9% (+17.9% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§102: 4.0% (-36.0% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action (§101, §102, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-10, and 12-13 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1, 3-10, and 12-13 are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without providing significantly more.

Step 1

Step 1 of the subject matter eligibility analysis per MPEP § 2106.03 requires the claims to be a process, machine, manufacture, or composition of matter. Claims 1, 3-10, and 12-13 are directed to a process (method) and a machine (system), which are statutory categories of invention.

Step 2A

Claims 1, 3-10, and 12-13 are directed to abstract ideas, as explained below. Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recite an abstract idea, and determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas: mathematical concepts, mental processes, and certain methods of organizing human activity.

Step 2A – Prong 1

The claims recite the following limitations that are directed to abstract ideas, which can be summarized as being directed to a method (the abstract idea) of developing an incident response for a power and distribution company by acquiring information from previous incidents, predicting damage from a forecast event, generating a workflow, creating a response plan, evaluating plan feasibility, and providing the plan for coordination and action.
Claim 10 discloses a method, comprising: An incident response method for responding to individual incidents, the incident response method comprising: storing processing workflows as playbooks for individual types of risks (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion), the processing workflows being response flows for incidents of risks; acquiring incident information (following rules or instructions; observation, evaluation, judgement, opinion) regarding an incident that has occurred or appears to occur, and extracting a corresponding one of the playbooks that is appropriate for the incident from the playbooks created for the individual types of risks (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion); generating the processing workflows appropriate for the individual incidents in accordance with the extracted playbook (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion); outputting a process for the incident; and, in order to output the process for the incident, acquiring incident information regarding an incident that has occurred or appears to occur, predicting damage, and extracting the playbook appropriate for the incident according to the damage and the type of the incident (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion); generating the processing workflows appropriate for the individual incidents in accordance with the extracted playbook (following rules or instructions; observation, evaluation, judgement, opinion); creating response plans according to the processing workflows, and evaluating whether the generated response plans are feasible (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion); and outputting the response plans for the incidents according to the evaluation (following rules or instructions; observation, evaluation, judgement, opinion).

Additional limitations employ the method and include: assigning weights regarding priority goals for a risk (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion – claim 3); holding individual processes as processing blocks (following rules or instructions; observation, evaluation, judgement, opinion – claim 4); where the processing workflows include, for each incident, a prediction stage, a planning stage, and a verification/evaluation stage (following rules or instructions; observation, evaluation, judgement, opinion – claim 5); where the processing workflows further include a data collection stage and an instruction stage (following rules or instructions; observation, evaluation, judgement, opinion – claim 6); where, for a given incident, the workflow generation section changes preconditions or priority goals that are to be used for individual processes in the processing workflows (mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion – claim 7); the workflow generation section changes processing conditions that are set for the processing blocks (following rules or instructions; observation, evaluation, judgement, opinion – claim 8); when the playbook is to be created in advance for each type of risk, the playbook is created with reference to incidents that occurred in the past (following rules or instructions; observation, evaluation, judgement, opinion – claim 9); the playbook includes weights regarding priority goals (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion – claim 12); and when workflows are generated, preconditions or priority goals are changed (fundamental economic principles: mitigating risk; following rules or instructions; observation, evaluation, judgement, opinion – claim 13).

Each of these claimed limitations involves organizing human activity through fundamental economic principles (mitigating risk; following rules or instructions) and employs mental processes involving observation, evaluation, judgement, and opinion. Thus, the concepts set forth in claims 1, 3-10, and 12-13 recite abstract ideas.

Step 2A – Prong 2

As per MPEP § 2106.04, while claims 1, 3-10, and 12-13 recite additional limitations which are hardware or software elements, such as a playbook database, a playbook selection section, a workflow generation section, a workflow engine section, a current situation/prediction section, a plan generation/evaluation section, and an instruction/execution section, these limitations are not sufficient to qualify as a practical application when recited in the claims along with the abstract ideas, since these elements are invoked as tools to apply the instructions of the abstract ideas in a specific technological environment. The mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP § 2106.05(f) & (h)). Evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. Evaluating the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
The claims do not amount to a "practical application" of the abstract idea because they do not: (1) recite any improvements to another technology or technical field; (2) recite any improvements to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; or (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, claims 1, 3-10, and 12-13 are directed to abstract ideas.

Step 2B

Claims 1, 3-10, and 12-13 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The analysis above describes how the claims recite the additional elements beyond those identified above as being directed to an abstract idea, as well as why the identified judicial exception(s) are not integrated into a practical application. These findings are hereby incorporated into the analysis of the additional elements when considered both individually and in combination. For the reasons provided in the analysis in Step 2A, Prong 1, evaluated individually, the additional elements do not amount to significantly more than a judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than a judicial exception. Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, Prong 2, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology.
Their collective functions merely amount to instructions to implement the identified abstract ideas on a computer. Therefore, since there are no limitations in claims 1, 3-10, and 12-13 that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are directed to non-statutory subject matter and are rejected under 35 U.S.C. § 101.

Claim Rejections – 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-8, 10, and 12-13 are rejected under 35 U.S.C. § 102(a)(1) as being taught by Saraiya (US 20210406041 A1, "Analytics Dashboards for Critical Event Management Software Systems, and Related Software"), hereafter Saraiya.

Regarding Claim 1, an incident response system that responds to individual incidents: Saraiya teaches, (Analytics dashboards for critical event management systems, [ ] which can then be used to improve response performance and/or to inform the generation of predictive models, [Abstract]);

the incident response system comprising: a playbook database that stores processing workflows as playbooks for individual types of risks, the processing workflows being response flows for incidents of risks; (retrieving, from a datastore in memory of the computing system, data contained in an analytics table comprising values for a plurality of attributes of each of a plurality of stored critical events; [0004], to provide optimal response performance and allow users to efficiently and effectively manage responses to critical events, [Abstract], and FIG. 8);

a playbook selection section that acquires incident information regarding an incident that has occurred or appears to occur, and extracts a corresponding one of the playbooks that is appropriate for the incident from the playbooks created for the individual types of risks; (values for a plurality of attributes of each of a plurality of stored critical events; executing at least one pattern-recognition algorithm that operates on the data in the analytics table so as to identify one or more patterns within the plurality of attributes among the plurality of stored critical events; [0004]);

a workflow generation section that generates the processing workflows appropriate for the individual incidents in accordance with the extracted playbook; (a method of assisting a user with critical-event management. The method being performed by a computing system includes displaying, to a user via a graphical user interface (GUI) of the computing system, information concerning a first stored critical event; soliciting, via the GUI, a user to provide one or more attribute annotations for one or more corresponding respective attributes of the stored critical event; receiving, from the user via the GUI, the one or more attribute annotations; storing, in memory of the computing system, the one or more attribute annotations in an analytics table comprising values for a plurality of attributes of each of a plurality of stored critical events, including the first stored event; executing at least one predictive algorithm that operates on contents of the analytics table so as to build one or more predictive models representing at least some of the plurality of stored critical events; [0005]);

and a workflow engine section that outputs a process for the incident, (FIG. 1 illustrates an example process 100 of using one or more pattern-recognition algorithms 104 to operate upon critical-event data 108 so as to determine patterns and/or other groupings within the critical-event data, such as the patterns/groupings 112 illustrated in a visualization GUI 116 (Analytics Dashboard GUI), [0029]; The output of pattern-recognition algorithm(s) 104 may be used by a visualization GUI, such as visualization GUI 116 of FIG. 1, that allows a user to view representations of the output of the pattern recognition algorithm(s). Visualization GUI 116 may be configured to display any one or more of a variety of charts, graphs, tables, and/or other data-visualization graphics that allow a user to view the output of pattern-recognition algorithm(s) 104 and/or representations of such output, [0034]);

wherein the workflow engine section includes: a current situation/prediction section that acquires the incident information regarding an incident that has occurred or appears to occur, predicts damage, and extracts the playbook appropriate for the incident according to the damage and the type of the incident; (The method being performed by a computing system includes providing, [ ] one or more predictive models of data contained in an analytics table for a plurality of stored critical events, wherein the data comprises a plurality of values for a corresponding plurality of attributes of each of the plurality of stored critical events; receiving, via an event notification interface, a notification of a new critical event; executing a predictive algorithm that uses the one or more predictive models to automatically classify one or more attributes of the new critical event; and based on the automatic classifying, predicting a value for each of the one or more attributes of the new critical event, [0006]);

a plan generation/evaluation section that, in accordance with the extracted playbook, generates the processing workflows appropriate for the individual incidents, creates response plans according to the processing workflows, and evaluates whether the generated response plans are feasible; (executing a predictive algorithm that uses the one or more predictive models to automatically determine one or more suggested actions that a responder can take in resolving the critical event; based on the resource affected, automatically determining one or more services associated with the resource affected; [0007]);

and an instruction/execution section that outputs the response plans for the incidents according to the evaluation, (displaying, via a graphical user interface (GUI) of the computing system, a service dependency graph visually depicting the one or more services, the resource affected, and an impact that the critical event has on the one or more services, wherein the resource affected is represented by a user-selectable icon; receiving via the GUI a user selection of the user-selectable icon; and in response to the user selection, displaying to the user via the GUI a popup window that allows a user to view the one or more suggested actions, [0007]).

Claim 10 is rejected for reasons corresponding to those of claim 1. In this claim, the addition of a system, with software that includes a playbook database, a playbook selection section, a workflow generation section, a workflow engine section, a current situation/prediction section, a plan generation/evaluation section, and an instruction/execution section, does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art. (Saraiya teaches a computing system that can be used to implement any one or more of the software-based functionalities disclosed herein, [0026] and FIG. 16).

Regarding Claim 3, the incident response system according to claim 1, wherein the playbook includes weighted information regarding priority goals for a risk.
Saraiya teaches, (The Cognition Engine can then use classification and/or regression algorithms to estimate which of many newly arriving critical events will be "major" events, i.e., those that will be most disruptive and/or will require the most resources to resolve. To increase the robustness and usefulness of the predictive models, the CRM subsystem displays via an Analytics Dashboard data for historical critical events, including any just-completed critical event, and prompts a knowledgeable user to append labels (e.g., Yes/No labels) to their historical data. For example, labels may indicate that a critical event: [0162] was "tough," for example, took more than 48 hours to resolve, [0163]. These labels add extra, important information to the analytics table, and when a user organization adds them, they contribute to a new kind of learning by Cognition Engine. The predictive algorithms can recognize that certain combinations of labels and attribute patterns are significant. [ ] When a new critical event arrives into the CEM system, the Cognition Engine scores it against some of the stored predictive models. For example, a new critical event might get a score of 85% likelihood that it will be a low-ROI event. Predictions such as these can help managers to decide to lower the priority of that critical event and instead focus the team on resolving other critical events, [0168]).

Claim 12 is rejected for reasons corresponding to those of claim 3. In this claim, the addition of a system does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art. (Saraiya teaches a computing system that can be used to implement any one or more of the software-based functionalities disclosed herein, [0026] and FIG. 16).

Regarding claim 4, the incident response system according to claim 1, wherein the workflow engine section holds individual processes in the processing workflows as processing blocks.
Saraiya teaches a series of processes, (1.1 Pattern Recognition, [0028]; 1.2 Modeling, [0037]; 1.3 Automatic Predictions for New Critical Events, [0042]; 1.4 Critical Event Annotation, [0047]; 1.5 Critical-Event Predictive Analytics, [0051]; and 1.6 Critical-Event Prescriptive Analytics, [0055]).

Regarding claim 5, the incident response system according to claim 1, wherein the processing workflows include, for each incident, a prediction stage, Saraiya teaches, (1.1 Pattern Recognition, [0028]; 1.2 Modeling, [0037]; 1.3 Automatic Predictions for New Critical Events, [0042]); a planning stage, (1.2 Modeling, [0037]; 1.3 Automatic Predictions for New Critical Events, [0042]; 1.4 Critical Event Annotation, [0047]); and a verification/evaluation stage, (1.4 Critical Event Annotation, [0047]; 1.5 Critical-Event Predictive Analytics, [0051]; and 1.6 Critical-Event Prescriptive Analytics, [0055]).

Regarding claim 6, the incident response system according to claim 5, wherein the processing workflows further include a data collection stage, Saraiya teaches, (Data collection and processing, FIG. 1 & 2, FIG. 7 & 8; retrieving, from a datastore in memory of the computing system, data contained in an analytics table comprising values for a plurality of attributes of each of a plurality of stored critical events; [0004]); and an instruction stage, (a method of assisting a user with critical-event management. The method being performed by a computing system includes, [ ] executing a predictive algorithm that uses the one or more predictive models to automatically determine one or more suggested actions that a responder can take in resolving the critical event; based on the resource affected, automatically determining one or more services associated with the resource affected; [0007], and implementation of predictive and prescriptive analytics in a critical event management (CEM) system; [0018] and FIG. 8).
Regarding claim 7, the incident response system according to claim 1, wherein, when generating the processing workflows appropriate for the individual incidents, the workflow generation section changes preconditions or priority goals that are to be used for individual processes in the processing workflows.

Saraiya teaches, (a method of assisting a user with critical-event management. The method being performed by a computing system includes providing, [ ] one or more predictive models of data contained in an analytics table for a plurality of stored critical events, [ ] receiving, via an event notification interface, a notification of a new critical event; executing a predictive algorithm that uses the one or more predictive models to automatically classify one or more attributes of the new critical event; and based on the automatic classifying, predicting a value for each of the one or more attributes of the new critical event, [0006], and executing a predictive algorithm that uses the one or more predictive models to automatically determine one or more suggested actions that a responder can take in resolving the critical event; automatically determining one or more services associated with the resource affected; [0007]).

Claim 13 is rejected for reasons corresponding to those of claim 7. In this claim, the addition of a system does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art. (Saraiya teaches a computing system that can be used to implement any one or more of the software-based functionalities disclosed herein, [0026] and FIG. 16).

Regarding claim 8, the incident response system according to claim 4, wherein, when generating the processing workflows appropriate for the individual incidents, the workflow generation section changes processing conditions that are set for the processing blocks.
Saraiya teaches, (receiving, via an event notification interface, a notification of a new critical event; executing a predictive algorithm that uses the one or more predictive models to automatically classify one or more attributes of the new critical event; and based on the automatic classifying, predicting a value for each of the one or more attributes of the new critical event, [0006], and executing a predictive algorithm that uses the one or more predictive models to automatically determine one or more suggested actions that a responder can take in resolving the critical event; automatically determining one or more services associated with the resource affected; [0007]).

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 9 is rejected under 35 U.S.C. § 103 as being taught by Saraiya (US 20210406041 A1, "Analytics Dashboards for Critical Event Management Software Systems, and Related Software"), hereafter Saraiya, in view of Zettle (US 20190268354 A1, "Incident Response Techniques"), hereafter Zettle.

Regarding claim 9, the incident response system according to claim 1, wherein, when the playbook is to be created in advance for each type of risk in order to describe the processing workflow, the processing workflow being a response flow for a relevant risk incident, the playbook is created, by categorizing the processing workflow based on a standard processing workflow and storing it in the playbook database, with reference to a plurality of incidents in the past: Saraiya does not teach this limitation. Zettle teaches, (The incident response techniques include [ ] an incident playbook for providing default or customizable instructions for resolving a particular incident, [Abstract]; Playbook widget 740 may provide a default workflow for each type of security incident based on the category and/or sub-category of a particular security incident. The incident states 746, 748, 750 may include default incident states within a workflow defined by the National Institute of Standards and Technology (NIST). The series of default incident states may proceed in the following order: Analysis, Contain, Eradicate, Recover, and Review. For example, the tasks within the Analysis incident state may relate to determining whether a threat exists and if a threat exists, the identity of the threat, [0083]; receive an indication of an incident record stored in an incident record data store, identify a category and subcategory of the incident record, select a playbook based on the category and the subcategory, from one or more playbooks stored in a playbook data store, and generate and render a playbook graphical user interface (GUI) based on the selected playbook.
Further, the playbook GUI includes a series of tasks to be implemented for resolution of the incident, [0007]).

Saraiya and Zettle are both considered to be analogous to the claimed invention because they are both in the field of incident response development and execution. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the incident response techniques of Saraiya with the historical playbook documentation of Zettle to enable an incident playbook for providing default or customizable instructions for resolving a particular incident to lesser-experienced personas, Zettle, [Abstract].

Conclusion

The prior art made of record and not relied upon, considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL BOROWSKI, whose telephone number is (703) 756-1822. The examiner can normally be reached M-F, 8-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.

/MB/
Patent Examiner, Art Unit 3624

/MEHMET YESILDAG/
Primary Examiner, Art Unit 3624

Prosecution Timeline

Jan 10, 2025: Application Filed
Mar 13, 2026: Non-Final Rejection, §101, §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
