Prosecution Insights
Last updated: April 19, 2026
Application No. 18/771,395

Context-Based Countermeasures for Cybersecurity Threats

Non-Final OA (§102, §103)
Filed: Jul 12, 2024
Examiner: BAZNA, JUDY
Art Unit: 2495
Tech Center: 2400 (Computer Networks)
Assignee: CSP Inc.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career allow rate: 67% (16 granted / 24 resolved), above average at +8.7% vs Tech Center average
Interview lift: strong, +22.9% across resolved cases with interview
Typical timeline: 3y 1m average prosecution; 19 applications currently pending
Career history: 43 total applications across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 77.2% (+37.2% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)
Tech Center average shown for comparison. Based on career data from 24 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted by applicant dated 12/31/2024 has been considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 6-8, 11, 13, 14, 17, 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kliger (US 20200045075 A1).

Regarding claim 1, Kliger teaches a method performed at a computing device having memory and one or more processors, the method comprising: identifying a process running on the computing device (Fig. 3; Para [0038]-[0039]: the identification of processes being performed by a computing system (act 121)); in response to identifying the process running on the computing device (Fig. 3; Para [0038]-[0039]): selecting one or more countermeasures from a plurality of countermeasures based at least in part on the determined process (Para [0048]: the process of determining which processes are related to the identified alert (act 122) is performed to identify the specific processes a client performs in remediation of a detected alert condition); and executing each of the selected countermeasures at the computing device (Para [0071]; Claim 1: providing a mitigation file to the client system that includes the predictive set of mitigation processes for responding to the particular threat scenario; the computing system 200 then does so without receiving a request for a composite remediation file from the target system, automatically in response to detecting the alert condition).

Regarding claim 2, Kliger teaches the method of claim 1, further comprising: determining an operating context for the identified process on the computing device where the process is running, wherein the one or more countermeasures are selected based at least in part on the determined operating context (Fig. 3; Para [0035]-[0037]: the correlation vector 300 that is generated and used to identify processes associated with an alert, and which can also be used to identify an alert from processes that are being performed. For each identified alert or alert type: identifying processes performed by a corresponding plurality of different client systems within a predetermined time and/or process proximity to and after the identified alert (act 121); determining which of the plurality of processes are related to the identified alert based on a correlation vector of the plurality of processes and the identified alert (act 122); and, for each client of the plurality of different client systems, creating a client remediation process set that includes the processes that are determined to be related to the identified alert and that were performed by the client within the predetermined period of time and/or process proximity to the identified alert (act 123)).

Regarding claim 6, Kliger teaches the method of claim 1, wherein at least one of the countermeasures of the selected countermeasures is executed in parallel to running the process (Para [0016]; Para [0071]; Claim 1: the mitigation file comprises an executable file having executable instructions corresponding to the remediation process sets for automatically performing remedial actions in response to running the mitigation file at the client system).

Regarding claim 7, Kliger teaches the method of claim 1, wherein a first countermeasure and a second countermeasure of the plurality of countermeasures are trained based on distinct types of malicious attacks (Para [0079]-[0080]: FIG. 8 illustrates an example of a threat vector 800, which includes one or more threats or threat types (comprising one or more processes or conditions) and various distance attributes/characteristics that are associated with the corresponding threats or threat types. The various distance attributes are defined characteristics associated with the identified threat scenarios. Nonlimiting examples of the types of distance attributes include such things as resources/components involved, detection components, severity, threat-specific information, mitigation steps taken, time stamps, and other attributes.).

Regarding claim 8, Kliger teaches the method of claim 1, wherein a first countermeasure and a second countermeasure of the plurality of countermeasures are configured to mitigate distinct types of malicious attacks (Para [0079]-[0080], as cited for claim 7 above).

Regarding claim 11, Kliger teaches the method of claim 1, wherein one or more of the selected countermeasures are reactive artificial intelligence machines (Para [0083]).

As per claims 13 and 14, which claim a computing device essentially corresponding to method claims 1 and 2 above, they are rejected for at least the same reasons. As per claims 17 and 18, which claim a non-transitory computer-readable storage medium essentially corresponding to method claims 1 and 2 above, they are rejected for at least the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 4, 15, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kliger (US 20200045075 A1) in view of Monastyrsky (US 20180365415 A1).

Regarding claim 3, Kliger teaches the method of claim 1, wherein the one or more countermeasures are received at the computing device prior to identifying the process (Para [0053]: the generation of the composite remediation files (130) is based on assembling a cluster of remediation process sets for a particular alert type that omits some of the total identified or stored remediation process sets that are associated with the alert type. This can enable a target system that is experiencing an alert condition to obtain a corresponding composite remediation file that is specifically tailored for and based on the remediation process sets of similarly configured client systems.). Kliger does not explicitly disclose (perform an action) via a trust agent.
Monastyrsky does disclose (perform an action) via a trust agent (Para [0033]: a software agent 110 (i.e., an "agent") is installed on the side of the client 100.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger, which executes countermeasures at the computing device, with the teachings of Monastyrsky to include the well-known technique of (performing an action) via a trust agent, because the results would have been predictable and would have enhanced the security protocols.

Regarding claim 4, Kliger teaches the method of claim 1, wherein the one or more countermeasures are received at the computing device in response to identifying the process running on the computing device (Claim 1). Kliger does not explicitly disclose (perform an action) via a trust agent. Monastyrsky does disclose (perform an action) via a trust agent (Para [0033]: a software agent 110 (i.e., an "agent") is installed on the side of the client 100.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger, which executes countermeasures at the computing device, with the teachings of Monastyrsky to include the well-known technique of (performing an action) via a trust agent, because the results would have been predictable and would have enhanced the security protocols.

As per claim 15, which claims a computing device essentially corresponding to method claim 4 above, it is rejected for at least the same reasons. As per claim 19, which claims a non-transitory computer-readable storage medium essentially corresponding to method claim 4 above, it is rejected for at least the same reasons.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kliger (US 20200045075 A1) in view of THOMAS (US 20170063920 A1).
Regarding claim 5, Kliger teaches the method of claim 1. Kliger does not explicitly teach wherein the one or more countermeasures are applied as a group. THOMAS does disclose wherein the one or more countermeasures are applied as a group (Claim 47: wherein the response includes a plurality of groups of countermeasures, each group countermeasure from the plurality of group of countermeasures to be applied). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger with the teachings of THOMAS to include wherein the one or more countermeasures are applied as a group, in order to provide defense in depth.

Claims 9, 16, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kliger (US 20200045075 A1) in view of Talati (US 20220311804 A1), further in view of Monastyrsky (US 20180365415 A1).

Regarding claim 9, Kliger teaches the method of claim 1. Kliger does not explicitly disclose further comprising, in response to identifying the process running on the computing device: performing one or more checks in accordance with one or more countermeasure policies; and, in response to detecting one or more suspicious agents in accordance with the one or more countermeasure policies, sending an alert to a trust center.

Talati does disclose performing one or more checks in accordance with one or more countermeasure policies (Para [0060]: when a threat or policy violation is detected by the threat management facility 400, the threat management facility 400 may perform or initiate remedial action through a remedial action facility 428. Remedial action may take a variety of forms, such as terminating or modifying an ongoing process or interaction, issuing an alert, or sending a warning to a client or administration facility 434 of an ongoing process or interaction.), and, in response to detecting one or more suspicious agents in accordance with the one or more countermeasure policies, sending an alert to a trust center (Para [0060], id.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger with the teachings of Talati to include these limitations in order to warn the client or administration facility of an ongoing process or interaction and prevent the threat (Talati Para [0060]). Kliger in view of Talati does not explicitly disclose (perform an action) via a trust agent. Monastyrsky does disclose (perform an action) via a trust agent (Para [0033]: a software agent 110 (i.e., an "agent") is installed on the side of the client 100.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger in view of Talati, which executes countermeasures at the computing device, with the teachings of Monastyrsky to include the well-known technique of (performing an action) via a trust agent, because the results would have been predictable and would have enhanced the security protocols.

As per claim 16, which claims a computing device essentially corresponding to method claim 9 above, it is rejected for at least the same reasons. As per claim 20, which claims a non-transitory computer-readable storage medium essentially corresponding to method claim 9 above, it is rejected for at least the same reasons.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kliger (US 20200045075 A1) in view of Talati (US 20220311804 A1).

Regarding claim 10, Kliger teaches the method of claim 1. Kliger does not explicitly disclose further comprising, in response to identifying the process running on the computing device: in response to detecting one or more suspicious agents in accordance with the one or more countermeasure policies, terminating the process. Talati does disclose terminating the process in response to detecting one or more suspicious agents in accordance with the one or more countermeasure policies (Para [0060]: when a threat or policy violation is detected by the threat management facility 400, the threat management facility 400 may perform or initiate remedial action through a remedial action facility 428. Remedial action may take a variety of forms, such as terminating or modifying an ongoing process or interaction, issuing an alert, or sending a warning to a client or administration facility 434 of an ongoing process or interaction.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger with the teachings of Talati to include terminating the process in response to detecting one or more suspicious agents in accordance with the one or more countermeasure policies, in order to warn the client or administration facility of an ongoing process or interaction and prevent the threat (Talati Para [0060]).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kliger (US 20200045075 A1) in view of Tumblin (US 11449602 B1).

Regarding claim 12, Kliger teaches the method of claim 1. Kliger does not explicitly disclose wherein the one or more countermeasures include one or more of: a no-trust countermeasure, a self-protection countermeasure, a reflective injection countermeasure, a heap spray countermeasure, a read buffer countermeasure, a write buffer countermeasure, an unauthorized function countermeasure, a malicious script countermeasure, a shell code countermeasure, a Javascript countermeasure, a privilege escalation countermeasure, a tamper countermeasure, a hollowing countermeasure, an immutable countermeasure, a registry key countermeasure, a malicious path countermeasure, an image load countermeasure, a malicious registry entry countermeasure, a DLL hooking countermeasure, a connection block countermeasure, and a digital certificate verification countermeasure.
Tumblin does disclose this limitation (Col 17, lines 21-32: the remedial action includes (752) applying one or more countermeasures. In some implementations, the countermeasures include detecting and preventing one or more of: heap spray, reflective injection, unknown read buffering, blocked address communication, unauthorized functions, malicious scripts, privilege escalation, function tampering, unknown shellcode, and trust tampering.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kliger with the teachings of Tumblin to include the recited countermeasures in order to enhance security and prevent the threat (Tumblin Col 17, lines 21-32).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUDY BAZNA, whose telephone number is (703) 756-1258. The examiner can normally be reached Monday - Friday, 08:30 AM - 05:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Farid Homayounmehr, can be reached at (571) 272-3739. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUDY BAZNA/
Examiner, Art Unit 2495

/FARID HOMAYOUNMEHR/
Supervisory Patent Examiner, Art Unit 2495
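For orientation only, the method recited in claims 1 and 2 (identify a running process, determine its operating context, select countermeasures based on that context, and execute each selected countermeasure) can be sketched in a few lines of Python. Every name below, including the countermeasure table and the toy context classifier, is invented for illustration and is not drawn from the application or the cited references.

```python
# Hypothetical sketch of the claimed flow: identify a process, determine its
# operating context, select context-appropriate countermeasures, execute them.
# The table and classifier are invented stand-ins, not the claimed embodiment.

COUNTERMEASURES = {
    # operating context -> countermeasures associated with that context
    "browser": ["heap_spray_check", "malicious_script_scan"],
    "service": ["privilege_escalation_check", "registry_key_audit"],
    "unknown": ["connection_block"],
}

def determine_context(process_name: str) -> str:
    """Toy classifier; a real system would inspect the live process."""
    if process_name.endswith(".exe") and "browser" in process_name:
        return "browser"
    if process_name.startswith("svc_"):
        return "service"
    return "unknown"

def select_countermeasures(context: str) -> list[str]:
    """Select from the plurality of countermeasures based on context."""
    return COUNTERMEASURES.get(context, COUNTERMEASURES["unknown"])

def respond_to_process(process_name: str) -> list[str]:
    """Identify -> determine context -> select -> execute each countermeasure."""
    context = determine_context(process_name)
    executed = []
    for cm in select_countermeasures(context):
        executed.append(cm)  # stand-in for actually running the countermeasure
    return executed
```

The separation between `determine_context` and `select_countermeasures` mirrors the claim-2 limitation that selection is based at least in part on the determined operating context.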

Prosecution Timeline

Jul 12, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585784
SYSTEM FOR COMPONENT-LEVEL THREAT ASSESSMENT IN A COMPUTING ENVIRONMENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579261
MANAGING INFERENCE MODELS IN VIEW OF RECONSTRUCTABILITY OF SENSITIVE INFORMATION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572643
CIRCUIT AND METHOD FOR DETECTING A FAULT INJECTION ATTACK IN AN INTEGRATED CIRCUIT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12549335
COORDINATING DATA ACCESS AMONG MULTIPLE SERVICES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536288
DETECTING BACKDOORS IN BINARY SOFTWARE CODE
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 90% (+22.9%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
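Assuming the interview lift is additive in percentage points (which matches the displayed figures), the 90% "With Interview" number can be reproduced directly from the examiner's career counts:

```python
# Reproduce the dashboard's headline numbers from the underlying counts:
# 16 granted out of 24 resolved, plus a +22.9-point lift with an interview.
granted, resolved = 16, 24
allow_rate = granted / resolved * 100          # career allow rate, in percent
interview_lift = 22.9                          # percentage-point lift with interview
with_interview = allow_rate + interview_lift   # projected rate with an interview

print(round(allow_rate))       # 67
print(round(with_interview))   # 90
```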
