Prosecution Insights
Last updated: April 19, 2026
Application No. 18/694,218

Monitoring A Computing System With Respect To A Recovery Scenario

Status: Final Rejection (§103)
Filed: Mar 21, 2024
Examiner: DILUZIO, NICHOLAS JOSEPH
Art Unit: 2498
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (grants only 33% of cases: 4 granted / 12 resolved; -24.7% vs TC avg)
Interview Lift: +100.0% (strong; based on resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 31 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 12 resolved cases
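For reference, the headline numbers above reduce to simple arithmetic over the examiner's case counts. The sketch below recomputes them; only the 4 granted / 12 resolved counts come from this report, while the 58.0% Tech Center average and the 50%/25% with/without-interview allowance rates are hypothetical stand-ins chosen to illustrate the calculations, not sourced figures.

```python
# Recompute the dashboard metrics from raw counts.
# Only the 4/12 counts are from the report; the TC average (58.0%) and the
# with/without-interview rates (50%/25%) are illustrative assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Signed gap between the examiner's rate and the Tech Center average."""
    return rate - tc_avg

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative change in allowance rate when an interview is held, in percent."""
    return 100.0 * (rate_with - rate_without) / rate_without

rate = allow_rate(4, 12)
print(f"Career allow rate: {rate:.0f}%")                        # 33%
print(f"vs TC avg:         {delta_vs_avg(rate, 58.0):+.1f}%")   # -24.7%
print(f"Interview lift:    {interview_lift(50.0, 25.0):+.1f}%") # +100.0%
```

With these stand-in inputs the computed delta and lift happen to reproduce the report's -24.7% and +100.0% figures; the actual inputs used by the analytics provider are not disclosed.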

Office Action

§103
DETAILED ACTION

Examiner acknowledges receipt of Applicant’s amendment filed on 12/04/2025. Claims 21, 22, 24, 29, 33, and 35-39 are currently amended. Claims 21-40 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 11/18/2025, 12/02/2025, and 02/23/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendment

Examiner has fully considered Applicant’s amendments to the Claims in the arguments filed on 12/04/2025. Claims 21-40 remain pending in the application. Examiner has withdrawn the Objections, 112(b) rejections, and 101 rejections of Claims 21-40 based on the amendments. However, additional Claim objections arise.

Response to Arguments

Applicant’s arguments filed 12/04/2025, with respect to the rejections of claims 21-40 under 35 USC 102(a)(2) and 103, have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, new ground(s) of rejection are made in view of the previously applied reference from Shemer, in combination with a newly applied reference from Cichonski et al. (Cichonski et al., “Computer Security Incident Handling Guide”, Recommendations of the National Institute of Standards and Technology, Special Publication 800-61, Revision 2, August 2012), hereinafter Cichonski.
Specifically, Cichonski teaches the newly added limitations “obtaining a set of real-time system recovery indicators from the computing system, the set of real-time system recovery indicators comprising data representing access patterns to the computing system, traffic flow patterns through the computing system, and system vulnerability indicators, each of the real-time system recovery indicators being in a format of a numerical text or an image”; “the set of real-time system recovery indicator”; and “wherein the pre-emptive actions comprise: a) encrypting or deleting data from the computing system; and [[/or]] b) disabling one or more components in the computing system; and c) adding an additional firewall rule to the computing system to compensate for the recovery scenario”, and additional limitations, the rejections of which previously relied upon the teachings of Shemer. However, the teachings of Shemer are still relied upon herein.

Claim Objections

Claims 21, 36, and 39 are objected to because of the following informalities:

Independent Claims 21 and 36 include the similar limitation “the real-time system recovery indicators being in a format of a numerical text or an image”. Support from the instant specification recites: “the system recovery indicators obtained may be in any format, for example, numerical, text, image, etc.” (P. 9, Line 15-16). Further, the term “numerical text” is unclear because it is unclear whether “numerical” and “text” are to be interpreted as separate format concepts or are to be interpreted together to represent a numerical-to-text conversion/representation (e.g., Excel, ASCII, integer codes, etc.). The limitation could be re-written as “the real-time system recovery indicators being in a numerical, text, or an image format” to clarify that the formats are to be interpreted as numerical, text, or image.
In Lines 1-2 of Claim 39, the limitation “The security system according to claim 36, determining the risk …” should read: “The security system according to claim 36, wherein determining the risk …”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-28 and 34-40 are rejected under 35 U.S.C. 103 as being unpatentable over Cichonski et al. (Cichonski et al., “Computer Security Incident Handling Guide”, Recommendations of the National Institute of Standards and Technology, Special Publication 800-61, Revision 2, August 2012), hereinafter Cichonski, in view of Shemer et al. (US 10747606 B1), hereinafter Shemer.

Regarding Claim 21:

Cichonski teaches a method performed by a security system for use in monitoring a computing system with respect to an occurrence of a recovery scenario from which the computing system would require recovery (Cichonski – P. 1: this publication provides guidelines for incident handling, particularly for analyzing incident related data and determining the appropriate response to each incident. The guidelines can be followed independently of particular hardware platforms, operating systems, protocols, or applications.
Because performing incident response effectively is a complex undertaking, establishing a successful incident response capability requires substantial planning and resources. Continually monitoring for attacks is essential), the method comprising: obtaining a set of real-time system recovery indicators from the computing system (Cichonski – P. 26: Signs of an incident fall into one of two categories: precursors and indicators. A precursor is a sign that an incident may occur in the future. An indicator is a sign that an incident may have occurred or may be occurring now; and P. 27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people), the set of real-time system recovery indicators comprising data representing access patterns to the computing system (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. Too many types of indicators exist to exhaustively list them, but some examples are listed below: … An application logs multiple failed login attempts from an unfamiliar remote system), traffic flow patterns through the computing system (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. Too many types of indicators exist to exhaustively list them, but some examples are listed below: … A network administrator notices an unusual deviation from typical network traffic flows), and system vulnerability indicators (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. 
Too many types of indicators exist to exhaustively list them, but some examples are listed below: … A network intrusion detection sensor alerts when a buffer overflow attempt occurs against a database server; Antivirus software alerts when it detects that a host is infected with malware; An email administrator sees a large number of bounced emails with suspicious content), each of the real-time system recovery indicators being in a format of a numerical text or an image (Cichonski – P. 27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people. Table 3-2 lists common sources of precursors and indicators for each category; and Table 3-1: sources of precursors and indicators, including alerts; Examiner’s Comment: alerts are understood to be in numerical, text, and/or image formats); the set of real-time system recovery indicators (Cichonski – P. 26: Signs of an incident fall into one of two categories: precursors and indicators. A precursor is a sign that an incident may occur in the future. An indicator is a sign that an incident may have occurred or may be occurring now; and P. 
27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people); and performing one or more pre-emptive actions so as to manage the recovery scenario as the recovery scenario progresses, wherein the pre-emptive actions are performed as the recovery scenario is occurring and wherein the pre-emptive actions comprise: a) encrypting or deleting data from the computing system; and b) disabling one or more components in the computing system; and c) adding an additional firewall rule to the computing system to compensate for the recovery scenario (Cichonski – Figure 3-3: Highlights a “Containment, Eradication, and Recovery” phase of an incident response lifecycle; and P. 37: 3.3.4 Eradication and Recovery After an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited. During eradication, it is important to identify all affected hosts within the organization so that they can be remediated. For some incidents, eradication is either not necessary or is performed during recovery. In recovery, administrators restore systems to normal operation, confirm that the systems are functioning normally, and (if applicable) remediate vulnerabilities to prevent similar incidents. Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and tightening network perimeter security (e.g., firewall rulesets, boundary router access control lists). Higher levels of system logging or network monitoring are often part of the recovery process. 
Once a resource is successfully attacked, it is often attacked again, or other resources within the organization are attacked in a similar manner. Eradication and recovery should be done in a phased approach so that remediation steps are prioritized).

Cichonski does not expressly teach determining, from the set of real-time system recovery indicators, a risk associated with the recovery scenario that the computing system will undergo the recovery scenario; and responsive to the determined risk, automatically performing one or more pre-emptive actions.

However, Shemer teaches determining, from [the set of real-time system recovery] indicators, a risk associated with the recovery scenario that the computing system will undergo the recovery scenario (Shemer – Col. 15, Line 49-52: The calculation of risk of block 330, in one embodiment, can assess risks that correspond to a predetermined list of possible risks that are reasonably likely or historically possible for a given location; and Col. 16, Line 59-65: based on the received risk and calculations of risks in blocks 330/520 a set of probabilities is generated (blocks 530) for one or more of the adverse events (block 335/530, 540). In some embodiments, the set of probabilities also includes the expected time of arrival/occurrence of the adverse event, its potential scope, and/or its duration (block 540); and Col. 18, Line 3-6: the probabilities calculated in block 335 can then be consolidated to classify (block 340) the way such probabilities may affect the data center or any desired desires location; and Col. 18, Line 24-27: In some embodiments, each type of classification is associated with one or more possible responses, where the responses generally are designed to minimize data loss and/or computer system downtime for that type of event); and responsive to the determined risk, automatically performing one or more pre-emptive actions so as to manage the recovery scenario as the recovery scenario progresses (Shemer – Col. 18, Line 54-66: after risks are classified (block 340), one or more system behavior changes are triggered based at least in part on the classification (block 340) and the timelines associated with the set of probabilities (block 335). In some embodiments, the system behavior changes correspond to dynamic and/or automatic adjustments of system operation and/or dynamic and/or automatic implementation of preventative measures (block 350), to help minimize loss of data and/or service outages, and/or to mitigate or lessen, at least, the impact of the disaster or adverse event (e.g., to substantially minimize its impact). For example, in same embodiments, these can include pre-emptive adjustments or measures based on the classification and timeline; and Col. 19, Line 13-30: In some embodiments, the dynamic and/or automatic adjustments and/or preventative measures of blocks 350-356 can … cause certain system actions to occur in one or more systems (as described more fully herein), such as creating a clone (e.g., a copy of an image or images, drive or drive of a first location at a second location), creating an image or snapshot, replicating/duplicating, stopping or starting CDP or CRR, failover, backup, etc.; causing or generating one or more types of failover operations (block 356), as described further herein), wherein the pre-emptive actions are performed as the recovery scenario is occurring (Shemer – Col. 12, Line 37-43: In at least some embodiments, the method 300 of FIG. 3 is a computer-implemented method 300 that is performed at least partially by one or more computer systems that may be subject to the predicted, imminent, or current disaster or adverse event, and are performed as much as is possible prior to and/or during the disaster or adverse event).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Cichonski, further incorporating Shemer, to arrive at the claimed invention. One would be motivated to incorporate Shemer’s teaching to determine a risk of occurrence of a recovery scenario and to pre-emptively or proactively kickstart recovery operations based on the risk evaluation into Cichonski’s method for detecting and mitigating the effects of a recovery scenario. This combination provides a particular set of incident indicators contributing to an overall risk assessment, the result of which may initiate protective actions in a computing system that is determined to be in potential danger.

Regarding Claim 22:

The combination of Cichonski and Shemer teaches the method according to claim 21. Shemer further teaches wherein the pre-emptive actions comprise: d) creating an image of part of the computing system; and/or e) storing artifacts of the computing system in a storage space that is separate from the computing system (Shemer – Col.
19, Line 13-28: In some embodiments, the dynamic and/or automatic adjustments and/or preventative measures of blocks 350-356 can result, in some embodiments, in one or more of the following types of actions which can, in some embodiments, be accomplished at least in part using management and deployment tools used with a given production site, storage system, RPA, DPA, SAN, VASA, backup site, host, production site, data storage center, etc.: causing or generating one or more communications causing system controls (block 356) to cause certain system actions to occur in one or more systems (as described more fully herein), such as creating a clone (e.g., a copy of an image or images, drive or drive of a first location at a second location), creating an image or snapshot, replicating/duplicating, stopping or starting CDP or CRR, failover, backup, etc.).

Regarding Claim 23:

The combination of Cichonski and Shemer teaches the method according to claim 21. Shemer further teaches wherein the pre-emptive actions are to: secure the computing system against occurrence of the recovery scenario; and/or enable the computing system to be recovered if the recovery scenario occurs (Shemer – Col. 11, Line 54-67 and Col. 12, Line 1-2: at least some data protection products, such as those described herein, are employed to provide redundancy of computing, network and storage and help secure protection of data and/or continuous operation, even when such events occur. These products include, but are not limited to, products providing backup, replication, distributed storages, geo-located caches, active-active availability mechanisms, and redundancy in almost all components of a data system. However, at (least) some of these mechanisms need some management of their operations. For example, backup systems often create backups on a schedule (daily, weekly etc.), replication systems need to know when to failover or restore and so on.
In event of a significant disaster (a data center that is flooded, for example) the recovery operations can be many and diverse. After or during a disaster or other significant event, data at a new (or rehabilitated location) may need to be restored from backups or replication data).

Regarding Claim 24:

The combination of Cichonski and Shemer teaches the method according to claim 21. Shemer further teaches wherein the determining of the risk that the computing system will undergo the recovery scenario comprises: predicting a likelihood that the computing system will undergo the recovery scenario from system recovery indicators (Shemer – Col. 12, Line 47-67 and Col. 13, Line 1-35: Referring again to FIG. 3, at the start (block 310), risk related information is retrieved (e.g., by querying one or more data sources) and/or received (e.g., by receiving information that could be related to a risk, whether directly or indirectly) (block 320) from one or more data sources. This information and data sources (blocks 315 and 325) includes virtually any type of information from any source anywhere in the world, relating to any event whether natural or human caused, including but not limited to: … data center operational information (including but not limited to real-time data related to environmental conditions; operational history; maintenance, inspection, repair and test of any part of the data center, and sociological, and other conditions at or near any data center or computer system being protected and/or monitored); and wherein the risk is determined as a function of the predicted likelihood and an estimation of impact if the recovery scenario were to occur (Shemer – Col. 15, Line 36-42: In some embodiments, the risks calculated can include either or both of quantitative risks and qualitative risks. Determining quantitative risks, in at least one embodiment, can at least relate to numerically determining probabilities of one or more various unfavorable or negative events and determining a likely extent of losses if a given event or set of events takes place).

Regarding Claim 25:

The combination of Cichonski and Shemer teaches the method according to claim 21. Shemer further teaches wherein the pre-emptive actions are selected from the options a) and b) according to a type of the recovery scenario, so as to mitigate against said type of recovery scenario (Shemer – Col. 18, Line 54-64: Referring again to FIG. 3, after risks are classified (block 340), one or more system behavior changes are triggered based at least in part on the classification (block 340) and the timelines associated with the set of probabilities (block 335). In some embodiments, the system behavior changes correspond to dynamic and/or automatic adjustments of system operation and/or dynamic and/or automatic implementation of preventative measures (block 350), to help minimize loss of data and/or service outages, and/or to mitigate or lessen, at least, the impact of the disaster or adverse event (e.g., to substantially minimize its impact); and Col. 19, Line 49-58: For predicted or imminent destruction of premises, in some embodiments, the countermeasures are used to provide alerts and warnings as early as possible to preserve human life, generate control signals configured to instruct systems to back up all data, generate instructions to move resources away from the premises, if possible (e.g., failover), disconnect resources from power to avoid electrocution and/or shock, possible encryption or destruction of sensitive data to prevent its becoming accessible to inappropriate or criminal users).

Regarding Claim 26:

The combination of Cichonski and Shemer teaches the method according to claim 21.
Shemer further teaches wherein the pre-emptive actions are selected from the options a) and b) dependent on the risk (Shemer – Col. 18, Line 54-64: Referring again to FIG. 3, after risks are classified (block 340), one or more system behavior changes are triggered based at least in part on the classification (block 340) and the timelines associated with the set of probabilities (block 335). In some embodiments, the system behavior changes correspond to dynamic and/or automatic adjustments of system operation and/or dynamic and/or automatic implementation of preventative measures (block 350), to help minimize loss of data and/or service outages, and/or to mitigate or lessen, at least, the impact of the disaster or adverse event (e.g., to substantially minimize its impact); and Col. 19, Line 49-58: For predicted or imminent destruction of premises, in some embodiments, the countermeasures are used to provide alerts and warnings as early as possible to preserve human life, generate control signals configured to instruct systems to back up all data, generate instructions to move resources away from the premises, if possible (e.g., failover), disconnect resources from power to avoid electrocution and/or shock, possible encryption or destruction of sensitive data to prevent its becoming accessible to inappropriate or criminal users).

Regarding Claim 27:

The combination of Cichonski and Shemer teaches the method according to claim 22. Shemer further teaches wherein the pre-emptive actions are selected from the options c), d) and e) according to a type of the recovery scenario, so as to mitigate against said type of recovery scenario (Shemer – Col. 20, Line 66-67, and Col. 21, Line 1-19: The following are illustrative examples of adjustments (blocks 350-356) usable in certain exemplary hypothetical scenarios, in accordance with at least some embodiments, but these are not to be construed as limiting: First Example—Expected Immediate Disaster, Short Power Loss: Actions in Some Embodiments May Include, but are not Limited to a. Flush all caches. b. Take snapshots (e.g., of storage arrays) if fast enough (e.g., in seconds or sub-seconds). c. Live migrate to other sites (including failover sites) if possible (or use any other technique capable of allowing live migration of a running virtual machine's (VM) file system from one storage system to another, with no downtime for the VM or service disruption for end users. d. Shorten recovery point objective (RPO) (i.e., maximum targeted period in which data might be lost from an IT service due to a major incident), where possible, such as by buffering some data and sending it in bulk).

Regarding Claim 28:

The combination of Cichonski and Shemer teaches the method according to claim 22. Shemer further teaches wherein the pre-emptive actions are selected from the options c), d) and e) dependent on the risk (Shemer – Col. 18, Line 54-64: Referring again to FIG. 3, after risks are classified (block 340), one or more system behavior changes are triggered based at least in part on the classification (block 340) and the timelines associated with the set of probabilities (block 335). In some embodiments, the system behavior changes correspond to dynamic and/or automatic adjustments of system operation and/or dynamic and/or automatic implementation of preventative measures (block 350), to help minimize loss of data and/or service outages, and/or to mitigate or lessen, at least, the impact of the disaster or adverse event (e.g., to substantially minimize its impact); and Col. 20, Line 66-67, and Col. 21, Line 1-19: The following are illustrative examples of adjustments (blocks 350-356) usable in certain exemplary hypothetical scenarios, in accordance with at least some embodiments, but these are not to be construed as limiting: First Example—Expected Immediate Disaster, Short Power Loss: Actions in Some Embodiments May Include, but are not Limited to a. Flush all caches. b. Take snapshots (e.g., of storage arrays) if fast enough (e.g., in seconds or sub-seconds). c. Live migrate to other sites (including failover sites) if possible (or use any other technique capable of allowing live migration of a running virtual machine's (VM) file system from one storage system to another, with no downtime for the VM or service disruption for end users. d. Shorten recovery point objective (RPO) (i.e., maximum targeted period in which data might be lost from an IT service due to a major incident), where possible, such as by buffering some data and sending it in bulk).

Regarding Claim 34:

The combination of Cichonski and Shemer teaches the method according to claim 21. Shemer further teaches wherein the recovery scenario is caused by: an external attack on the computing system; a system failure of the computing system; an adverse environmental condition affecting the computing system; an uncontrolled system change in or related to the computing system; and/or a human error which has affected the computing system (Shemer – Col. 16, Line 8-43: Examples of adverse events which might not have been previously foreseen for a given location, but suddenly may become of more concern and risk, in at least some embodiments, include but are not limited to situations such as the following hypothetical examples: risk of deliberate destruction of or interference with a data center or computer system, due to reports of increased political or civil unrest near a data center in a location that previously had no such history—where data sources such as news and media and/or social media can provide data inputs showing increased tensions or social concerns near a data center located in a geographical area of interest; risk of a computer outage due to a fire or other damage resulting from a truck crashing into a building where computer systems are located—where data sources such as vehicle and/or building mounted cameras, new and media sources, emergency alerts, and even social media, can provide inputs relating to incidents near a host site; risk of damage to computer systems and/or data centers due to damage caused by possible rioting or looting that might occur after a closely-watched jury decision is read or after a sporting event final playoff game takes place—where data sources such as news media, social media, messaging services, etc. even social media might provide useful inputs that such actions are being planned or actually are taking place; and risk of certain types of hacking, malware, ransomware, and/or denial of service types of attacks on computer or data center installations—where data sources such as message boards, social media, and even records of types of searches done on search engines, may provide useful information that criminals are exchanging information about planning or how to carry out such attacks on specific sites).

Regarding Claim 35:

The combination of Cichonski and Shemer teaches the method according to claim 21.
Shemer further teaches comprising repeating, in an iterative manner: the determining the risk that the computing system will undergo the recovery scenario; and responsive to the determining the risk, performing the one or more pre-emptive actions so as to mitigate against the occurrence of the recovery scenario (Shemer – Figure 3: illustration of a method of preventative data protection; and Col. 21, Line 41-55: Referring again to FIG. 3, in some embodiments, after the dynamic and automatic adjustments are made (block 350), if the adverse event(s) or disaster(s) have occurred, in some embodiments, processing ends (i.e., the YES—V1 outcome at block 360, leading to block 380). In some embodiments (i.e., the YES—V2 outcome at block 360), even if the adverse event has occurred, if the risk of additional adverse events or disasters has not ended (i.e., the NO outcome at block 365), continual checks are made (e.g., by repeating blocks 320-356) to see if new and/or updated risk information is received, and, based on the new and/or updated information, risks are re-evaluated, as noted above, and new or modified adjustments and/or preventative measures are implemented, based on predicted and/or actual adverse event(s) and/or disaster(s)).

Regarding Claim 36:

Cichonski teaches monitoring a computing system with respect to an occurrence of a recovery scenario from which the computing system would require recovery (Cichonski – P. 1: this publication provides guidelines for incident handling, particularly for analyzing incident related data and determining the appropriate response to each incident. The guidelines can be followed independently of particular hardware platforms, operating systems, protocols, or applications. Because performing incident response effectively is a complex undertaking, establishing a successful incident response capability requires substantial planning and resources.
Continually monitoring for attacks is essential); obtain a set of real-time system recovery indicators from the computing system (Cichonski – P. 26: Signs of an incident fall into one of two categories: precursors and indicators. A precursor is a sign that an incident may occur in the future. An indicator is a sign that an incident may have occurred or may be occurring now; and P. 27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people), the set of real-time system recovery indicators comprising data representing access patterns to the computing system (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. Too many types of indicators exist to exhaustively list them, but some examples are listed below: … An application logs multiple failed login attempts from an unfamiliar remote system), traffic flow patterns through the computing system (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. Too many types of indicators exist to exhaustively list them, but some examples are listed below: … A network administrator notices an unusual deviation from typical network traffic flows), and system vulnerability indicators (Cichonski – P. 26-27: While precursors are relatively rare, indicators are all too common. Too many types of indicators exist to exhaustively list them, but some examples are listed below: … A network intrusion detection sensor alerts when a buffer overflow attempt occurs against a database server; Antivirus software alerts when it detects that a host is infected with malware; An email administrator sees a large number of bounced emails with suspicious content), each of the real-time system recovery indicators being in a format of a numerical text or an image (Cichonski – P. 
27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people. Table 3-2 lists common sources of precursors and indicators for each category; and Table 3-1: sources of precursors and indicators, including alerts; Examiner’s Comment: alerts are understood to be in numerical, text, and/or image formats); the set of real-time system recovery indicators (Cichonski – P. 26: Signs of an incident fall into one of two categories: precursors and indicators. A precursor is a sign that an incident may occur in the future. An indicator is a sign that an incident may have occurred or may be occurring now; and P. 27: Precursors and indicators are identified using many different sources, with the most common being computer security software alerts, logs, publicly available information, and people); and perform one or more pre-emptive actions so as to manage the recovery scenario as the recovery scenario progresses, wherein the pre-emptive actions are performed as the recovery scenario is occurring and wherein the pre-emptive actions comprise: a) encrypting or deleting data from the computing system; and b) disabling one or more components in the computing system; and c) adding an additional firewall rule to the computing system to compensate for the recovery scenario (Cichonski – Figure 3-3: Highlights a “Containment, Eradication, and Recovery” phase of an incident response lifecycle; and P. 37: 3.3.4 Eradication and Recovery After an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited. During eradication, it is important to identify all affected hosts within the organization so that they can be remediated. 
For some incidents, eradication is either not necessary or is performed during recovery. In recovery, administrators restore systems to normal operation, confirm that the systems are functioning normally, and (if applicable) remediate vulnerabilities to prevent similar incidents. Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and tightening network perimeter security (e.g., firewall rulesets, boundary router access control lists). Higher levels of system logging or network monitoring are often part of the recovery process. Once a resource is successfully attacked, it is often attacked again, or other resources within the organization are attacked in a similar manner. Eradication and recovery should be done in a phased approach so that remediation steps are prioritized). Cichonski does not expressly teach A security system … the security system comprising: a memory comprising instruction data representing a set of instructions; and a processor configured to communicate with the memory and to execute the set of instructions, wherein the set of instructions, when executed by the processor, cause the security system to: determine, from the set of real-time system recovery indicators, a risk associated with the recovery scenario that the computing system will undergo the recovery scenario; and responsive to the determined risk, automatically performing one or more pre-emptive actions. However, Shemer teaches a security system … the security system comprising: a memory comprising instruction data representing a set of instructions; and a processor configured to communicate with the memory and to execute the set of instructions, wherein the set of instructions, when executed by the processor, cause the security system to (Shemer – Col. 24, Line 63-67 and Col. 
25, Line 1: The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals); determine, from [the set of real-time system recovery] indicators, a risk associated with the recovery scenario that the computing system will undergo the recovery scenario (Shemer – Col. 15, Line 49-52: The calculation of risk of block 330, in one embodiment, can assess risks that correspond to a predetermined list of possible risks that are reasonably likely or historically possible for a given location; and Col. 16, Line 59-65: based on the received risk and calculations of risks in blocks 330/520 a set of probabilities is generated (blocks 530) for one or more of the adverse events (block 335/530, 540). In some embodiments, the set of probabilities also includes the expected time of arrival/occurrence of the adverse event, its potential scope, and/or its duration (block 540); and Col. 18, Line 3-6: the probabilities calculated in block 335 can then be consolidated to classify (block 340) the way such probabilities may affect the data center or any desired location; and Col. 18, Line 24-27: In some embodiments, each type of classification is associated with one or more possible responses, where the responses generally are designed to minimize data loss and/or computer system downtime for that type of event); and automatically performing one or more pre-emptive actions so as to manage the recovery scenario as the recovery scenario progresses (Shemer – Col. 18, Line 54-66: after risks are classified (block 340), one or more system behavior changes are triggered based at least in part on the classification (block 340) and the timelines associated with the set of probabilities (block 335). 
In some embodiments, the system behavior changes correspond to dynamic and/or automatic adjustments of system operation and/or dynamic and/or automatic implementation of preventative measures (block 350), to help minimize loss of data and/or service outages, and/or to mitigate or lessen, at least, the impact of the disaster or adverse event (e.g., to substantially minimize its impact). For example, in some embodiments, these can include pre-emptive adjustments or measures based on the classification and timeline; and Col. 19, Line 13-30: In some embodiments, the dynamic and/or automatic adjustments and/or preventative measures of blocks 350-356 can … cause certain system actions to occur in one or more systems (as described more fully herein), such as creating a clone (e.g., a copy of an image or images, drive or drives of a first location at a second location), creating an image or snapshot, replicating/duplicating, stopping or starting CDP or CRR, failover, backup, etc.; causing or generating one or more types of failover operations (block 356), as described further herein), wherein the pre-emptive actions are performed as the recovery scenario is occurring (Shemer – Col. 12, Line 37-43: In at least some embodiments, the method 300 of FIG. 3 is a computer-implemented method 300 that is performed at least partially by one or more computer systems that may be subject to the predicted, imminent, or current disaster or adverse event, and are performed as much as is possible prior to and/or during the disaster or adverse event). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Cichonski, further incorporating Shemer to arrive at the conclusion of the claimed invention. 
One would be motivated to incorporate Shemer’s teaching to determine a risk of occurrence of a recovery scenario and to pre-emptively or proactively kickstart recovery operations based on the risk evaluation into Cichonski’s method for detecting and mitigating the effects of a recovery scenario. This combination provides a particular set of incident indicators contributing to an overall risk assessment, the result of which may initiate protective actions in a computing system that is determined to be in potential danger.

Regarding Claim 37: The combination of Cichonski and Shemer teaches the security system according to claim 36. Shemer further teaches wherein the pre-emptive actions comprise: d) creating an image of part of the computing system; and/or e) storing artifacts of the computing system in a storage space that is separate from the computing system (Shemer – Col. 19, Line 13-28: In some embodiments, the dynamic and/or automatic adjustments and/or preventative measures of blocks 350-356 can result, in some embodiments, in one or more of the following types of actions which can, in some embodiments, be accomplished at least in part using management and deployment tools used with a given production site, storage system, RPA, DPA, SAN, VASA, backup site, host, production site, data storage center, etc.: causing or generating one or more communications causing system controls (block 356) to cause certain system actions to occur in one or more systems (as described more fully herein), such as creating a clone (e.g., a copy of an image or images, drive or drives of a first location at a second location), creating an image or snapshot, replicating/duplicating, stopping or starting CDP or CRR, failover, backup, etc.).

Regarding Claim 38: The combination of Cichonski and Shemer teaches the security system according to claim 36. 
Shemer further teaches wherein the pre-emptive actions are to: secure the computing system against occurrence of the recovery scenario; and/or enable the computing system to be recovered if the recovery scenario occurs (Shemer – Col. 11, Line 54-67 and Col. 12, Line 1-2: at least some data protection products, such as those described herein, are employed to provide redundancy of computing, network and storage and help secure protection of data and/or continuous operation, even when such events occur. These products include, but are not limited to, products providing backup, replication, distributed storages, geo-located caches, active-active availability mechanisms, and redundancy in almost all components of a data system. However, at (least) some of these mechanisms need some management of their operations. For example, backup systems often create backups on a schedule (daily, weekly etc.), replication systems need to know when to failover or restore and so on. In event of a significant disaster (a data center that is flooded, for example) the recovery operations can be many and diverse. After or during a disaster or other significant event, data at a new (or rehabilitated location) may need to be restored from backups or replication data).

Regarding Claim 39: The combination of Cichonski and Shemer teaches the security system according to claim 36. Shemer further teaches determining the risk that the computing system will undergo the recovery scenario comprises: predict a likelihood that the computing system will undergo the recovery scenario from system recovery indicators (Shemer – Col. 12, Line 47-67 and Col. 13, Line 1-35: Referring again to FIG. 3, at the start (block 310), risk related information is retrieved (e.g., by querying one or more data sources) and/or received (e.g., by receiving information that could be related to a risk, whether directly or indirectly) (block 320) from one or more data sources. 
This information and data sources (blocks 315 and 325) includes virtually any type of information from any source anywhere in the world, relating to any event whether natural or human caused, including but not limited to: … data center operational information (including but not limited to real-time data related to environmental conditions; operational history; maintenance, inspection, repair and test of any part of the data center, and sociological, and other conditions at or near any data center or computer system being protected and/or monitored); and wherein the risk is determined as a function of the predicted likelihood and an estimation of impact if the recovery scenario were to occur (Shemer – Col. 15, Line 36-42: In some embodiments, the risks calculated can include either or both of quantitative risks and qualitative risks. Determining quantitative risks, in at least one embodiment, can at least relate to numerically determining probabilities of one or more various unfavorable or negative events and determining a likely extent of losses if a given event or set of events takes place).

Regarding Claim 40: Claim 40 is a computer program product claim with limitations corresponding to those of method Claim 21 and system Claim 36. Therefore, claim 40 is rejected with the same rationale as that of the rejections of Claim 21 and Claim 36. In addition, Shemer teaches the computer program product comprising non-transitory computer readable media (Shemer – Col. 25, Line 26-34: A non-transitory machine-readable medium may include but is not limited to tangible media, such as magnetic recording media including hard drives, floppy diskettes, and magnetic tape media, optical recording media including compact discs (CDs) and digital versatile discs (DVDs), solid state memory such as flash memory, hybrid magnetic and solid state memory, non-volatile memory, volatile memory, and so forth, but does not include a transitory signal per se). 
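The quantitative risk determination mapped to Claim 39 above (a predicted likelihood combined with an estimation of impact, per Shemer at Col. 15, Line 36-42) can be sketched as follows. This is an illustrative reconstruction only, not code from any cited reference; the function name, units, and values are hypothetical.

```python
# Illustrative sketch only: risk as a function of a predicted likelihood
# (a probability in [0, 1]) and an estimated impact (likely extent of
# loss if the recovery scenario occurs). All names are hypothetical.

def risk_score(likelihood: float, impact: float) -> float:
    """Combine the predicted probability of a recovery scenario with an
    estimate of the loss incurred if it were to occur."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    return likelihood * impact

# Example: a 20% predicted chance of an outage whose impact is estimated
# at 500 (in arbitrary loss units) yields a risk score of 100.
print(risk_score(0.2, 500.0))  # 100.0
```

A multiplicative combination is only one possible reading of "a function of the predicted likelihood and an estimation of impact"; the claim language does not fix the functional form.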
Claim(s) 29-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cichonski in view of Shemer and Angeles et al. (US 20180107534 A1), hereinafter Angeles.

Regarding Claim 29: The combination of Cichonski and Shemer teaches the method according to claim 22. The combination of Cichonski and Shemer does not expressly teach wherein the performing of the one or more pre-emptive actions are performed responsive to the risk being above a first pre-determined threshold risk. However, Angeles teaches wherein the performing of one or more pre-emptive actions is performed responsive to the risk being above a first pre-determined threshold risk (Angeles – Paragraph [0056]: According to an embodiment of the present invention, prediction program 334 utilizes the social media data and the environmental data stored to database repository 332 to determine a weighted severity score (WSS) which is used to trigger actions to prevent downtime and loss of data of local server 330; and Paragraph [0062]: According to embodiments of the present invention, a low severity value indicates that a problem is unlikely while a high severity value indicates a problem is likely; and Figure 5: table illustrating various actions corresponding to weighted severity score thresholds being met/exceeded). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Cichonski and Shemer, further incorporating Angeles to arrive at the conclusion of the claimed invention. One would be motivated to incorporate Angeles’s teaching to associate different pre-emptive actions with thresholds of a risk measurement into Cichonski and Shemer’s method for detecting/predicting and mitigating the effects of a recovery scenario. This combined functionality would enhance the system by efficiently determining an appropriate responsive action corresponding to an evaluated level of risk to a system. 
Regarding Claim 30: The combination of Cichonski, Shemer, and Angeles teaches the method according to claim 29. Angeles further teaches wherein the pre-emptive actions comprise option c) when the risk is above the first pre-determined threshold risk (Angeles – Paragraph [0056]: According to an embodiment of the present invention, prediction program 334 utilizes the social media data and the environmental data stored to database repository 332 to determine a weighted severity score (WSS) which is used to trigger actions to prevent downtime and loss of data of local server 330; and Figure 5: table illustrating various actions corresponding to weighted severity score thresholds being met/exceeded). The motivation to combine the arts is the same as that of Claim 29.

Regarding Claim 31: The combination of Cichonski and Shemer teaches the method according to claim 22. The combination of Cichonski and Shemer does not expressly teach wherein the pre-emptive actions comprise option d) when the risk is above a second pre-determined threshold risk. However, Angeles teaches wherein the pre-emptive actions comprise option d) when the risk is above a second pre-determined threshold risk (Angeles – Paragraph [0056]: According to an embodiment of the present invention, prediction program 334 utilizes the social media data and the environmental data stored to database repository 332 to determine a weighted severity score (WSS) which is used to trigger actions to prevent downtime and loss of data of local server 330; and Paragraph [0062]: According to embodiments of the present invention, a low severity value indicates that a problem is unlikely while a high severity value indicates a problem is likely; and Figure 5: table illustrating various actions corresponding to weighted severity score thresholds being met/exceeded). 
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Cichonski and Shemer, further incorporating Angeles to arrive at the conclusion of the claimed invention. One would be motivated to incorporate Angeles’s teaching to associate different pre-emptive actions with thresholds of a risk measurement into Cichonski and Shemer’s method for detecting/predicting and mitigating the effects of a recovery scenario. This combined functionality would enhance the system by efficiently determining an appropriate responsive action corresponding to an evaluated level of risk to a system.

Regarding Claim 32: The combination of Cichonski, Shemer, and Angeles teaches the method according to claim 31. Angeles further teaches wherein the pre-emptive actions comprise options a) or b) when the risk is above a third pre-determined threshold risk (Angeles – Paragraph [0056]: According to an embodiment of the present invention, prediction program 334 utilizes the social media data and the environmental data stored to database repository 332 to determine a weighted severity score (WSS) which is used to trigger actions to prevent downtime and loss of data of local server 330; and Figure 5: table illustrating various actions corresponding to weighted severity score thresholds being met/exceeded). The motivation to combine the arts is the same as that of Claim 31.

Regarding Claim 33: The combination of Cichonski, Shemer, and Angeles teaches the method according to claim 32. 
Angeles further teaches wherein the one or more pre-emptive actions are performed responsive to the risk being above a first pre-determined threshold risk, and the third pre-determined threshold risk represents a higher risk than the second pre-determined threshold risk; and wherein the second pre-determined threshold risk represents a higher risk than the first pre-determined threshold risk (Angeles – Paragraph [0056]: According to an embodiment of the present invention, prediction program 334 utilizes the social media data and the environmental data stored to database repository 332 to determine a weighted severity score (WSS) which is used to trigger actions to prevent downtime and loss of data of local server 330; and Paragraph [0062]: According to embodiments of the present invention, a low severity value indicates that a problem is unlikely while a high severity value indicates a problem is likely; and Figure 5: table illustrating various actions corresponding to weighted severity score thresholds being met/exceeded). The motivation to combine the arts is the same as that of Claim 31.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ngo et al. (US 20200137103 A1) teaches a system for detecting a system vulnerability, determining a risk related to the vulnerability, and pre-emptively initiating backup/recovery operations based on the risk. Hicks et al. (US 20210406385 A1) teaches an analysis system for determining system risks regarding recovery scenarios. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOSEPH DILUZIO whose telephone number is (703)756-1229. The examiner can normally be reached Mon - Fri -- 7:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NICHOLAS JOSEPH DILUZIO/Examiner, Art Unit 2498 /YIN CHEN SHAW/Supervisory Patent Examiner, Art Unit 2498
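The tiered-threshold behavior the rejection maps to claims 29-33 (option c above a first threshold, option d above a higher second threshold, options a or b above a still higher third threshold) can be sketched as follows. This is an illustrative sketch only, not code from the cited Angeles reference; the threshold values and action strings are hypothetical.

```python
# Illustrative sketch only: compare a risk score against ascending
# pre-determined thresholds and select the pre-emptive action for the
# highest threshold exceeded, as described for claims 29-33. The
# numeric thresholds and labels are assumptions for illustration.
from typing import Optional

# (threshold, action), ordered from highest threshold to lowest
TIERS = [
    (0.9, "a/b: encrypt or delete data; disable components"),  # third threshold
    (0.6, "d: create an image of part of the system"),         # second threshold
    (0.3, "c: add an additional firewall rule"),               # first threshold
]

def select_action(risk: float) -> Optional[str]:
    """Return the action for the highest threshold the risk exceeds."""
    for threshold, action in TIERS:
        if risk > threshold:
            return action
    return None  # risk below the first threshold: no pre-emptive action

print(select_action(0.7))   # d: create an image of part of the system
print(select_action(0.95))  # a/b: encrypt or delete data; disable components
```

Ordering the tiers from highest to lowest means the first match is the most severe applicable tier, mirroring the claim 33 limitation that the third threshold represents a higher risk than the second, which in turn is higher than the first.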

Prosecution Timeline

Mar 21, 2024
Application Filed
Aug 20, 2025
Non-Final Rejection — §103
Dec 04, 2025
Response Filed
Mar 12, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596792
DATA ENCRYPTION DETECTION
2y 5m to grant Granted Apr 07, 2026
Patent 12490087
AUTHENTICATION SERVER FUNCTION SELECTION IN AN AUTHENTICATION AND KEY AGREEMENT
2y 5m to grant Granted Dec 02, 2025
Patent 12475218
METHOD AND SYSTEM FOR IDENTIFYING A COMPROMISED POINT-OF-SALE TERMINAL NETWORK
2y 5m to grant Granted Nov 18, 2025
Patent 12367440
ARTIFICIAL INTELLIGENCE-BASED SYSTEM AND METHOD FOR FACILITATING MANAGEMENT OF THREATS FOR AN ORGANIZATON
2y 5m to grant Granted Jul 22, 2025
Patent 11966466
UNIFIED WORKLOAD RUNTIME PROTECTION
2y 5m to grant Granted Apr 23, 2024
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+100.0%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
