Prosecution Insights
Last updated: April 19, 2026
Application No. 17/313,487

System for automatically discovering, enriching and remediating entities interacting in a computer network

Final Rejection — §103
Filed: May 06, 2021
Examiner: BROWN, CHRISTOPHER J
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Noetic Cyber Inc.
OA Round: 6 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 6m
Grant Probability with Interview: 88%

Examiner Intelligence

Grants 75% — above average
Career Allow Rate: 75% (533 granted / 707 resolved; +17.4% vs TC avg)
Interview Lift: +12.6% (moderate), measured across resolved cases with interview
Typical Timeline: 3y 6m avg prosecution; 36 currently pending
Career History: 743 total applications across all art units
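The headline percentages above follow from the career counts by simple arithmetic. A quick sanity check, using only the numbers shown on this page and assuming the interview lift is additive in percentage points (which matches the 88% figure shown):

```python
# Sanity-check the dashboard's examiner statistics from the raw counts shown.
granted, resolved = 533, 707

career_allow_rate = granted / resolved               # shown as "75% Career Allow Rate"
interview_lift = 0.126                               # shown as "+12.6% Interview Lift"
with_interview = career_allow_rate + interview_lift  # shown as "88% With Interview"

print(f"Career allow rate: {career_allow_rate:.1%}")  # 75.4%
print(f"With interview:    {with_interview:.1%}")     # 88.0%
```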

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Based on career data from 707 resolved cases; Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments have been fully considered but are not persuasive. Applicant argues that the independent claims have been amended to recite validating that mitigating controls are properly implemented on a relationship graph. Applicant argues that Coull US 11,201,890 fails to teach this as relied upon in the previous action because “Coull calculates threat scores for each object in the graph and used for threat score propagation, not for validating whether safeguards represented by mitigating controls are properly implemented”.

Examiner points out that Coull teaches modified threat scores in real time, in part “the subgraph can include user selectable features ….cause remediation… semantic graph is continuously updated”. Coull teaches a GUI based on user input to produce both reports, focus on graphs, and implement remedial measures chosen by the user. Upon implementation, and in real time, security graph scores are updated based on signals, which would show the remediation is “properly implemented”. Examiner has left Ganor in the rejection, which does not teach graphs but teaches a GUI that lets users explicitly test whether security measures are “properly implemented”. Examiner argues that, at minimum, if Coull is not sufficient to anticipate the claim as stated, then the combination of Coull and Ganor should be sufficient to anticipate the claim limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6, 8, 9, 12, 13, 15, 16, 18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Harry US 2020/0272972 in view of Roytman US 2015/0237062, in view of Ganor US 2018/0375892, and in view of Coull US 11,201,890.

As per claim 1. Harry teaches: A method for managing a computer environment, the method comprising: rendering a graphical user interface on a display of a user device, the graphical user interface generating risk information based on input from a user via an input mechanism of the user device, the input indicating definitions of risk objects, risk scenarios, and mitigating controls relevant to the computer environment; a database system receiving the risk information and generating a risk hierarchy indicating associations between the risk objects and the risk scenarios and associations between the risk scenarios and the mitigating controls based on the risk information; calculating risk scores for the computer environment based on the risk hierarchy indicating the associations between the risk objects and the risk scenarios and conditions of the computer environment; and managing the computer environment based on the calculated risk scores.
[0055]-[0058][0066][0067][0093]-[0096] (teaches risk scores calculated using risk events, risk scenarios and associated conditions of the computer systems, and impacts and additional information entered by the user; teaches graph topologies and a data store; allows users to prioritize remediation)

Harry teaches the risk objects representing risks affecting the computer environment, the risk scenarios representing conditions of the computer environment associated with the risks. [0055]-[0058][0066][0067][0093]-[0096] (teaches risk scores calculated using risk events, risk scenarios and associated conditions of the computer systems, and impacts and additional information entered by the user)

Roytman teaches calculating risk scores for the computer environment based on the risk hierarchy indicating the associations between the risk scenarios and the mitigating controls based on the risk information; calculating and mitigating controls relevant to the computer environment; and managing the computer environment based on the calculated risk scores. [0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

Roytman teaches mitigating controls representing existing safeguards within the computer environment for preventing the risk scenarios.
[0101]-[0104] (teaches calculating a risk score and including mitigation factors)

Roytman teaches mitigating controls including evaluation criteria for evaluating whether safeguards represented by the mitigating controls are properly implemented. [0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

It would have been obvious for one of ordinary skill in the art at the time the invention was filed to use the teaching of Roytman with Harry because it more accurately models the risk to the system.

Ganor teaches mitigating controls including evaluation criteria for evaluating whether safeguards represented by the mitigating controls are properly implemented in the computer environment based on the evaluation criteria. [0071][0092][0093] (teaches a mitigation that is implemented and auditing said mitigation to ensure compliance)

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the evaluation of Ganor with the prior art because it ensures security measures are functioning properly.

Ganor teaches GUI and mitigating controls include respective evaluation criteria indicating a validation type used for validating whether the safeguards are properly implemented. Ganor teaches receiving the input indicating the definition of the mitigating controls comprises receiving, for each of the at least some of the mitigating controls, input specifying the validation type/query indicated in the respective evaluation criteria.
[0109] (GUI used to accept input from a user for simulation control) [0036]-[0047] (teaches the variety of simulations, which are validation types) [0071] (audits in GUI) [0090]-[0095]

Coull teaches evaluation criteria including whether a validation query into an entity relationship graph that indicates entities and relationships between the entities that are relevant to security of the computer environment is to be used for validating whether one or more safeguards are properly implemented, and input specifying the validation query to be executed against the entity relationship graph to validate whether the one or more safeguards represented by the mitigating control are properly implemented in the computer environment. (Column 6 lines 44-66) (teaches generation of a graph and relation of nodes and objects for a cyber threat analyzer) (Column 7 lines 24-49) (teaches calculating threat scores for objects in the semantic graph and displaying on GUI; threat scores calculated include “past mitigations” or report “recommended mitigations”) (Column 9 lines 25-40; line 59 to Column 10 line 30) (teaches updating threat scores based on time periods or other factors, and thus teaches altering threat scores after mitigations are implemented)

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the teaching of Coull with the prior art because it improves user security.

As per claim 3. Roytman teaches: The method of claim 1, wherein calculating the risk scores based on the risk hierarchy comprises calculating the risk scores based on evaluation results of the evaluation criteria for each of the mitigating controls. [0103][0104] (teaches evaluating remediation measures and calculating updated risk scores based on evaluation)

As per claim 4.
Roytman teaches: The method of claim 3, further comprising each of the risk scenarios including an expected loss magnitude (ELM) and an expected loss frequency (ELF) and each of the mitigating controls including a mitigation control strength (MCS), wherein calculating the risk scores based on the risk hierarchy comprises calculating the risk scores based on the ELM and ELF values for each of the risk scenarios and the MCS values for each of the mitigating controls. [0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

Harry more explicitly teaches risk scenarios and including a loss magnitude and frequency to calculate a risk score. [0055]-[0058][0066][0067][0093]-[0096] (risk likelihood, impact and final score for a given risk event and scenario)

As per claim 6. Harry teaches: The method of claim 4, further comprising specifying the ELF, ELM, and MCS values in the risk hierarchy as equations that are based on attributes of and relationships between entities of the computer environment as indicated in an entity relationship graph indicating the conditions of the computer environment. [0061][0062][0072]-[0076][0082]-[0084] (teaches calculation of risk using graph relationships and ELF/ELM)

Harry teaches further comprising specifying the ELF, ELM, and MCS values in the risk hierarchy as having minimum, most likely, and maximum values, wherein calculating the risk scores comprises generating probability distributions for each of the ELF, ELM, and MCS values, running simulations based on the probability distributions, and generating the risk scores as probabilities based on the simulations.
[0074][0075][0079][0093]-[0096] (teaches minimum/maximum, generating probability distributions and generating risk scores based on the calculations)

Roytman teaches calculating a risk score and including mitigation factors (MCS). [0101]-[0104]

As per claim 8. Roytman teaches: The method of claim 3, wherein calculating the risk scores based on the evaluation criteria for each of the mitigating controls comprises validating whether each mitigating control is properly implemented in the computer environment by executing queries specified in the evaluation criteria against a database storing information about the conditions of the computer environment. [0038]-[0045][0103][0104] (teaches databases to provide vulnerability and contextual data; teaches evaluating remediation measures and calculating updated risk scores based on evaluation)

Ganor teaches receiving the input indicating the definition of the mitigating controls comprises receiving, for each of the at least some of the mitigating controls, input specifying the validation query indicated in the respective evaluation criteria.
[0109] (GUI used to accept input from a user for simulation control) [0036]-[0047] (teaches the variety of simulations, which are validation types) [0071] (audits in GUI) [0090]-[0095]

Coull teaches evaluation criteria including whether a validation query into an entity relationship graph that indicates entities and relationships between the entities that are relevant to security of the computer environment is to be used for validating whether one or more safeguards are properly implemented, and input specifying the validation query to be executed against the entity relationship graph to validate whether the one or more safeguards represented by the mitigating control are properly implemented in the computer environment. (Column 6 lines 44-66) (teaches generation of a graph and relation of nodes and objects for a cyber threat analyzer) (Column 7 lines 24-49) (teaches calculating threat scores for objects in the semantic graph and displaying on GUI; threat scores calculated include “past mitigations” or report “recommended mitigations”) (Column 9 lines 25-40; line 59 to Column 10 line 30) (teaches updating threat scores based on time periods or other factors, and thus teaches altering threat scores after mitigations are implemented)

Harry additionally teaches evaluation criteria and database storing risk data and contextual data.

As per claim 9. Harry teaches: The method of claim 3, wherein calculating the risk scores based on the evaluation criteria for each of the mitigating controls comprises validating whether each mitigating control is properly implemented in the computer environment by presenting a user interface via a user device associated with an individual or group identified in the evaluation criteria, wherein the user interface generates confirmation information concerning implementation of the mitigating control based on user input received via the user interface.
[0057] (teaches user interaction and remediation selection)

Roytman teaches evaluating remediation measures and calculating updated risk scores based on evaluation, and user GUI to interact with remediation measures. [0081]-[0085][0103][0104]

Ganor teaches each of the at least some of the mitigating controls include the respective evaluation criteria indicating whether a user attestation is to be used for validating whether one or more safeguards represented by the mitigation control are properly implemented in the computer environment based on the criteria. [0090]-[0095] (teaches security manager to manage security compliance and mitigation controls, including periodic audits and assessing whether mitigation has been performed as appropriate)

As per claim 12. Harry teaches: The method of claim 1, further comprising storing the risk hierarchy as a graph database with nodes representing the risk objects, the risk scenarios, and the mitigating controls, and edges representing the associations between the risk objects and the risk scenarios and the associations between the risk scenarios and the mitigating controls, wherein each node indicates properties for the risk object, risk scenario, or mitigating control represented by the node. [0055]-[0058][0066][0067][0061][0062][0072]-[0076][0082]-[0084][0093]-[0096] (teaches user setting up risk objects, scenarios and mitigation using node and graph calculations and mapping, and calculation of risk using graph relationships and ELF/ELM)

Roytman teaches calculating a risk score and including mitigation factors (MCS). [0101]-[0104]

As per claim 13.
Harry teaches: A system for managing a computer environment, the system comprising: a user device for executing a graph query and display app for rendering a graphical user interface on a display of the user device, wherein the graph query and display app generates risk information based on input from a user via an input mechanism of the user device, the input indicating definitions of risk objects, risk scenarios, and mitigating controls relevant to the computer environment; and a server system for executing a database system, which receives the risk information and generates a risk hierarchy indicating associations between the risk objects and the risk scenarios and associations between the risk scenarios. [0055]-[0058][0066][0067][0093]-[0096] (teaches risk scores calculated using risk events, risk scenarios and associated conditions of the computer systems, and impacts and additional information entered by the user; teaches graph topologies and a data store; allows users to prioritize remediation)

Harry teaches the risk objects representing risks affecting the computer environment, the risk scenarios representing conditions of the computer environment associated with the risks. [0055]-[0058][0066][0067][0093]-[0096] (teaches risk scores calculated using risk events, risk scenarios and associated conditions of the computer systems, and impacts and additional information entered by the user)

Roytman teaches associations between the risk objects and the risk scenarios and associations between the risk scenarios and the mitigating controls based on the risk information; calculating and mitigating controls relevant to the computer environment; and managing the computer environment based on the calculated risk scores.
[0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

Roytman teaches mitigating controls representing existing safeguards within the computer environment for preventing the risk scenarios. [0101]-[0104] (teaches calculating a risk score and including mitigation factors)

Roytman teaches mitigating controls including evaluation criteria for evaluating whether safeguards represented by the mitigating controls are properly implemented. [0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

It would have been obvious for one of ordinary skill in the art at the time the invention was filed to use the teaching of Roytman with Harry because it more accurately models the risk to the system.

Ganor teaches mitigating controls including evaluation criteria for evaluating whether safeguards represented by the mitigating controls are properly implemented in the computer environment based on the evaluation criteria. [0071][0092][0093] (teaches a mitigation that is implemented and auditing said mitigation to ensure compliance)

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the evaluation of Ganor with the prior art because it ensures security measures are functioning properly.

Ganor teaches GUI and mitigating controls include respective evaluation criteria indicating a validation type used for validating whether the safeguards are properly implemented.
Ganor teaches receiving the input indicating the definition of the mitigating controls comprises receiving, for each of the at least some of the mitigating controls, input specifying the validation type indicated in the respective evaluation criteria. [0109] (GUI used to accept input from a user for simulation control) [0036]-[0047] (teaches the variety of simulations, which are validation types) [0071] (audits in GUI) [0090]-[0095]

Coull teaches evaluation criteria including whether a validation query into an entity relationship graph that indicates entities and relationships between the entities that are relevant to security of the computer environment is to be used for validating whether one or more safeguards are properly implemented, and input specifying the validation query to be executed against the entity relationship graph to validate whether the one or more safeguards represented by the mitigating control are properly implemented in the computer environment. (Column 6 lines 44-66) (teaches generation of a graph and relation of nodes and objects for a cyber threat analyzer) (Column 7 lines 24-49) (teaches calculating threat scores for objects in the semantic graph and displaying on GUI; threat scores calculated include “past mitigations” or report “recommended mitigations”) (Column 9 lines 25-40; line 59 to Column 10 line 30) (teaches updating threat scores based on time periods or other factors, and thus teaches altering threat scores after mitigations are implemented)

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the teaching of Coull with the prior art because it improves user security.

As per claim 15. Roytman teaches the risk scores are calculated based on evaluation results of the evaluation criteria for each of the mitigating controls. [0103][0104] (teaches evaluating remediation measures and calculating updated risk scores based on evaluation)

As per claim 16.
Roytman teaches: The system of claim 15, wherein each of the risk scenarios includes an expected loss magnitude (ELM) and an expected loss frequency (ELF) and each of the mitigating controls including a mitigation control strength (MCS), and wherein the risk scores are calculated based on the ELM and ELF values for each of the risk scenarios and the MCS values for each of the mitigating controls. [0037][0058][0066][0072]-[0076][0097][0100]-[0105] (teaches calculating risk scores and modifying risk scores based on mitigating controls and managing the computer based on the combination of scores and vulnerabilities)

Harry more explicitly teaches risk scenarios and including a loss magnitude and frequency to calculate a risk score. [0055]-[0058][0066][0067][0093]-[0096] (risk likelihood, impact and final score for a given risk event and scenario)

As per claim 18. Harry teaches: The system of claim 16, wherein the ELF, ELM, and MCS values are specified in the risk hierarchy as equations that are based on attributes of and relationships between entities of the computer environment as indicated in an entity relationship graph indicating the conditions of the computer environment. [0061][0062][0072]-[0076][0082]-[0084] (teaches calculation of risk using graph relationships and ELF/ELM)

Harry teaches further comprising specifying the ELF, ELM, and MCS values in the risk hierarchy as having minimum, most likely, and maximum values, wherein calculating the risk scores comprises generating probability distributions for each of the ELF, ELM, and MCS values, running simulations based on the probability distributions, and generating the risk scores as probabilities based on the simulations. [0074][0075][0079][0093]-[0096] (teaches minimum/maximum, generating probability distributions and generating risk scores based on the calculations)

Roytman teaches calculating a risk score and including mitigation factors (MCS). [0101]-[0104]

As per claim 20.
Roytman teaches: The system of claim 15, wherein the server system validates whether each mitigating control is properly implemented in the computer environment by executing queries specified in the evaluation criteria against a database storing information about the conditions of the computer environment. [0038]-[0045][0103][0104] (teaches databases to provide vulnerability and contextual data; teaches evaluating remediation measures and calculating updated risk scores based on evaluation)

Harry additionally teaches evaluation criteria and database storing risk data and contextual data.

Ganor teaches mitigating controls including evaluation criteria for evaluating whether safeguards represented by the mitigating controls are properly implemented in the computer environment based on the evaluation criteria. [0071][0092][0093] (teaches a mitigation that is implemented and auditing said mitigation to ensure compliance)

Ganor teaches receiving the input indicating the definition of the mitigating controls comprises receiving, for each of the at least some of the mitigating controls, input specifying the validation type/query indicated in the respective evaluation criteria. [0109] (GUI used to accept input from a user for simulation control) [0036]-[0047] (teaches the variety of simulations, which are validation types) [0071] (audits in GUI) [0090]-[0095]

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the evaluation of Ganor with the prior art because it ensures security measures are functioning properly.

Coull teaches evaluation criteria including whether a validation query into an entity relationship graph that indicates entities and relationships between the entities that are relevant to security of the computer environment is to be used for validating whether one or more safeguards are properly implemented.
(Column 6 lines 44-66) (teaches generation of a graph and relation of nodes and objects for a cyber threat analyzer) (Column 7 lines 24-49) (teaches calculating threat scores for objects in the semantic graph and displaying on GUI; threat scores calculated include “past mitigations” or report “recommended mitigations”) (Column 9 lines 25-40; line 59 to Column 10 line 30) (teaches updating threat scores based on time periods or other factors, and thus teaches altering threat scores after mitigations are implemented)

Claim(s) 10, 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Harry US 2020/0272972 in view of Roytman US 2015/0237062, in view of Ganor US 2018/0375892, in view of Coull US 11,201,890, and in view of Rush US 2016/0224911.

As per claim 10. Roytman teaches: The method of claim 1, further comprising recurring the calculating of the risk score for each risk object based on an evaluation frequency associated with each risk object and storing the resulting scores for future reference and/or time series or trending analyses. [0066][0077]-[0080] (teaches evaluation frequency and storing said data)

Rush teaches historical data for trending analyses. [0100] It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the calculation of Rush with the prior art because it provides an accurate assessment of risk.

As per claim 11. Harry teaches: The method of claim 10, further comprising storing each of the resulting scores for each calculation of the risk scores for each risk object as a node in the risk hierarchy with an edge connecting the risk score node to the risk object node. [0055]-[0058][0066][0067][0082]-[0084][0093]-[0096] (teaches risk scores calculated using risk events, risk scenarios and associated conditions of the computer systems, and impacts and additional information entered by the user; teaches graph topologies and a data store)

Claim(s) 21-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Harry US 2020/0272972 in view of Roytman US 2015/0237062, in view of Ganor US 2018/0375892, in view of Coull US 11,201,890, and in view of Burle US 2021/0234889.

As per claim 21. (New) Burle teaches: The method of claim 1, wherein managing the computer environment comprises: configuring a rules engine with rules for detecting specified conditions of entities, properties of entities, and relationships between entities indicated in an entity relationship graph indicating the conditions of the computer environment; and performing one or more specified actions in response to detecting the specified conditions. [0030][0031][0035][0116][0117] (teaches automated scanning of an entity based on rules or a policy to determine and maintain the safety of the network, and teaches exploring relationships between entities to determine the proper remediation actions)

As per claim 22. (New) Burle teaches: The method of claim 21, wherein performing the one or more specified actions comprises: invoking a vulnerability scan to determine known software vulnerabilities that one or more entities indicated in the entity relationship graph are susceptible to. [0030][0031][0035][0036][0079][0116][0117] (teaches automated scanning of an entity based on rules or a policy to determine and maintain the safety of the network, and teaches exploring relationships between entities to determine the proper remediation actions; teaches vulnerability scanning for software vulnerabilities)

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the teachings of Burle with the prior art because it increases security without degradation in performance.

As per claim 23. (New) Burle teaches: The method of claim 21, wherein performing the one or more specified actions comprises: triggering a scan to identify IP ports one or more entities indicated in the entity relationship graph are listening on.
[0030][0031][0035][0047][0093][0116][0117] (teaches automated scanning of an entity based on rules or a policy to determine and maintain the safety of the network, and teaches exploring relationships between entities to determine the proper remediation actions; teaches scanning ports and taking remediation steps including configuration of ports)

As per claim 24. (New) Burle teaches: The method of claim 21, wherein performing the one or more specified actions comprises: triggering one or more automated activities to bring one or more entities indicated in the entity relationship graph or the computer environment into compliance with a desired state. [0030][0031][0035][0116][0117] (teaches automated scanning of an entity based on rules or a policy to determine and maintain the safety of the network, and teaches exploring relationships between entities to determine the proper remediation actions; teaches taking remediation action to bring the entities into compliance)

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
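The simulation limitation recited in claims 6 and 18 above (minimum, most likely, and maximum values; a probability distribution per factor; simulations; risk scores expressed as probabilities) amounts to a Monte Carlo computation over triangular distributions. A minimal sketch, assuming an illustrative loss model of ELF * ELM * (1 - MCS); the record does not give the actual equations, so both the model and the input values are invented for illustration:

```python
import random

def simulate_risk(elf, elm, mcs, loss_threshold, n=50_000, seed=1):
    """Monte Carlo risk score: the probability that annualized loss
    exceeds loss_threshold. Each factor is a (minimum, most likely,
    maximum) triple sampled from a triangular distribution, per the
    claim language. The ELF * ELM * (1 - MCS) loss model is
    illustrative only."""
    rng = random.Random(seed)

    def tri(lo, mode, hi):
        # note: the stdlib argument order is (low, high, mode)
        return rng.triangular(lo, hi, mode)

    exceed = 0
    for _ in range(n):
        f = tri(*elf)   # expected loss frequency (events per year)
        m = tri(*elm)   # expected loss magnitude (loss per event)
        s = tri(*mcs)   # mitigating control strength, 0..1
        if f * m * (1 - s) > loss_threshold:
            exceed += 1
    return exceed / n

# Illustrative inputs as (min, most likely, max) triples.
score = simulate_risk(elf=(0.5, 2.0, 6.0),
                      elm=(10_000, 50_000, 250_000),
                      mcs=(0.2, 0.5, 0.8),
                      loss_threshold=100_000)
```

The returned score is itself a probability, matching the claim's "risk scores as probabilities based on the simulations."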
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER BROWN whose telephone number is (571) 272-3833. The examiner can normally be reached M-F 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached on (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER J BROWN/
Primary Examiner, Art Unit 2439
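The limitation at the center of the Response to Arguments, a validation query executed against an entity relationship graph to check that a mitigating control's safeguard is actually in place, can be illustrated with a toy sketch. The graph triples, the control format, and the subset check below are all invented for illustration; neither the application nor the cited references disclose this particular representation:

```python
# Toy entity relationship graph as (subject, relation, object) triples.
graph = {
    ("web-server-1", "runs", "nginx"),
    ("web-server-1", "protected_by", "firewall-A"),
    ("db-server-1",  "runs", "postgres"),
    ("db-server-1",  "protected_by", "firewall-A"),
    ("laptop-7",     "runs", "nginx"),   # no protected_by edge
}

def validate_control(graph, control):
    """Run a mitigating control's validation query: every entity matching
    the precondition must also carry the required safeguard relation."""
    matching = {s for (s, r, o) in graph if (r, o) == control["precondition"]}
    safeguarded = {s for (s, r, _) in graph if r == control["required_relation"]}
    return matching <= safeguarded  # True -> control properly implemented

# Hypothetical control: every host running nginx must sit behind a firewall.
control = {"precondition": ("runs", "nginx"),
           "required_relation": "protected_by"}
implemented = validate_control(graph, control)  # False: laptop-7 is exposed
```

Under this sketch, the query result (not a propagated threat score) is what reports whether the safeguard is "properly implemented," which is the distinction applicant draws over Coull.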

Prosecution Timeline

May 06, 2021: Application Filed
Jun 10, 2024: Non-Final Rejection — §103
Sep 04, 2024: Response Filed
Dec 18, 2024: Final Rejection — §103
Feb 26, 2025: Request for Continued Examination
Feb 28, 2025: Response after Non-Final Action
Mar 04, 2025: Non-Final Rejection — §103
May 14, 2025: Response Filed
Aug 25, 2025: Final Rejection — §103
Oct 06, 2025: Request for Continued Examination
Oct 11, 2025: Response after Non-Final Action
Oct 15, 2025: Non-Final Rejection — §103
Dec 15, 2025: Response Filed
Mar 12, 2026: Final Rejection — §103
Apr 14, 2026: Examiner Interview Summary
Apr 14, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603822
SOFTWARE AS A SERVICE (SaaS) USER INTERFACE (UI) FOR DISPLAYING USER ACTIVITIES IN AN ARTIFICIAL INTELLIGENCE (AI)-BASED CYBER THREAT DEFENSE SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12574725
METHODS, APPARATUSES, COMPUTER PROGRAMS AND CARRIERS FOR SECURITY MANAGEMENT BEFORE HANDOVER FROM 5G TO 4G SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12563390
AUTHENTICATING A DEVICE IN A COMMUNICATION NETWORK OF AN AUTOMATION INSTALLATION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12563056
SYSTEM AND METHOD FOR MONITORING AND MANAGING COMPUTING ENVIRONMENT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12537828
ON-DEMAND SOFTWARE-DEFINED SECURITY SERVICE ORCHESTRATION FOR A 5G WIRELESS NETWORK
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 75%
With Interview: 88% (+12.6%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 707 resolved cases by this examiner. Grant probability derived from career allow rate.
