Prosecution Insights
Last updated: April 19, 2026
Application No. 18/778,626

GRAPH-BASED ANALYSIS OF SECURITY INCIDENTS

Non-Final Office Action with rejections under §101, §103, and nonstatutory double patenting

Filed: Jul 19, 2024
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 545 granted / 705 resolved; +19.3% vs TC avg)
Interview Lift: +35.4% (strong; allowance rate on resolved cases with an interview vs. without)
Typical Timeline: 3y 4m avg prosecution; 25 applications currently pending
Career History: 730 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 705 resolved cases
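The headline examiner figures above are simple ratios of the stated counts; a minimal sanity check in Python (note the Tech Center average is back-derived here from the quoted +19.3% delta as an assumption, not taken from USPTO data):

```python
# Career allowance rate from the stated counts: 545 granted of 705 resolved.
granted, resolved = 545, 705
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 77.3%, the 77% shown above

# The "+19.3% vs TC avg" delta implies this Tech Center 2400 average
# (an assumption back-derived from the delta, not an official figure).
tc_avg = allow_rate - 19.3
print(f"Implied TC average: {tc_avg:.1f}%")
```

The same arithmetic applies to the statute-specific deltas, e.g. the §103 rate of 56.2% at +16.2% implies a TC average near 40%.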

Office Action

Rejections: §101, §103, and nonstatutory double patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is in response to the instant Application 18/778,626 filed on 7/19/2024. Claims 1-20 are pending. This Office Action is Non-Final.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9-17 and 19-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter, namely an abstract idea that is not integrated into a practical application and does not amount to significantly more.

Regarding claims 1, 11 and 20, each claim is directed to an abstract idea in reciting the limitations “accessing data…,” “extracting, …, node data …,” “generating, …, a multipartite graph …,” “identifying, …, subgraphs …,” “ranking the subgraphs …,” and “generating an output ….” The aforementioned steps are a “mental process/mathematical calculation” because, as broadly interpreted, said steps could be performed in the human mind. Therefore, the claim recites an abstract idea. Said abstract idea and/or judicial exception is not integrated into a practical application, as the claim does not recite any other active steps that apply the determination result in a practical application. It is noted that the claims recite additional elements (i.e., processor/memory, computing system). However, said additional elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of accessing, extracting, generating, or identifying, etc.), such that they amount to no more than mere instructions to apply the exception or abstract idea using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception: the additional elements, considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea, because they perform generic computer functions routinely used in the information technology field. See US Applications 2013/0254535, 2015/0156194 and 2011/0154027. As discussed above, the additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Therefore, the claim is directed to non-statutory subject matter.

Regarding claims 2-7, 9, 10, 12-17 and 19: the dependent claims are also rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter for the same reasons addressed above, as the claims recite an abstract idea without being integrated into a practical application or amounting to significantly more.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 11 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 11 and 20 of U.S. Patent No. 12,081,569. Although the claims at issue are not identical, they are not patentably distinct from each other because all the limitations in the instant application are anticipated by U.S. Patent No. 12,081,569. See table below.

Instant Application:

1.
A computer-implemented method for analyzing a security incident detected in a computer network, the method comprising executing, by one or more computer processors, instructions for performing operations comprising: accessing data records of at least one of network activity or security alerts associated with the security incident; extracting, from the data records, node data identifying entities of multiple different types within the computer network and edge data identifying relations between the entities; generating, based on the node and edge data, a multipartite graph representing the entities of the multiple different types as different respective types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on a metric quantifying an associated severity of threat; and generating an output from at least a subset of the nodes corresponding to nodes within one or more highest-ranking subgraphs. 11. 
A system comprising: hardware processing circuitry; and one or more hardware memories storing instructions that, when executed, configure the hardware processing circuitry to perform operations for analyzing a security incident detected in a computer network, the operations comprising: accessing data records of at least one of network activity or security alerts associated with the security incident; extracting, from the data records, node data identifying entities of multiple different types within the computer network and edge data identifying relations between the entities; generating, based on the node and edge data, a multipartite graph representing the entities of the multiple different types as different respective types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on a metric quantifying an associated severity of threat; and generating an output from at least a subset of the nodes corresponding to nodes within one or more highest-ranking subgraphs. 20. 
A non-transitory computer-readable medium comprising instructions that, when executed, configure hardware processing circuitry to perform operations for analyzing a security incident detected in a computer network, the operations comprising: accessing data records of at least one of network activity or security alerts associated with the security incident; extracting, from the data records, node data identifying entities of multiple different types within the computer network and edge data identifying relations between the entities; generating, based on the node and edge data, a multipartite graph representing the entities of the multiple different types as different respective types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on a metric quantifying an associated severity of threat; and generating an output from at least a subset of the nodes corresponding to nodes within one or more highest-ranking subgraphs.

U.S. Patent No. 12,081,569:

1.
A computer-implemented method for analyzing a security incident detected in a computer network, the method comprising executing, by one or more computer processors, instructions for performing operations comprising: accessing data records of at least one of network activity or security alerts having time stamps within a time period associated with the security incident and pertaining to an organization associated with the security incident; extracting, from the data records, node data identifying machines within the computer network, processes spawned on the machines, and network destinations external to the computer network connected to by the processes, and edge data identifying relations between the machines and the processes they have spawned and between the processes and the network destinations they have accessed; generating, based on the node and edge data, a multipartite graph representing the machines, processes, and network destinations as different types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on at least one of numbers of security alerts or numbers of known indicators of compromise (IoCs) associated with the subgraphs; and providing an output listing at least a subset of the nodes within one or more highest-ranking subgraphs. 11. 
A system comprising: hardware processing circuitry; and one or more hardware memories storing instructions that, when executed, configure the hardware processing circuitry to perform operations for analyzing a security incident detected in a computer network, the operations comprising: accessing data records of at least one of network activity or security alerts having time stamps within a time period associated with the security incident and pertaining to an organization associated with the security incident; extracting, from the data records, node data identifying machines within the computer network, processes spawned by the machines, and network destinations external to the computer network connected to by the processes, and edge data identifying relations between the machines and the processes they have spawned and between the processes and the network destinations they have accessed; generating, based on the node and edge data, a multipartite graph representing the machines, processes, and network destinations as different types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on at least one of numbers of security alerts or numbers of known indicators of compromise (IoCs) associated with the subgraphs; and providing an output listing at least a subset of the nodes within one or more highest-ranking subgraphs. 20. 
A non-transitory computer-readable medium comprising instructions that, when executed, configure hardware processing circuitry to perform operations for analyzing a security incident detected in a computer network, the operations comprising: accessing data records of at least one of network activity or security alerts having time stamps within a time period associated with the security incident and pertaining to an organization associated with the security incident; extracting, from the data records, node data identifying machines within the computer network, processes spawned by the machines, and network destinations external to the computer network connected to by the processes, and edge data identifying relations between the machines and the processes they have spawned and between the processes and the network destinations they have accessed; generating, based on the node and edge data, a multipartite graph representing the machines, processes, and network destinations as different types of nodes and the relations as edges between the nodes; identifying, with a graph-based clustering technique, subgraphs within the multipartite graph; ranking the subgraphs based on at least one of numbers of security alerts or numbers of known indicators of compromise (IoCs) associated with the subgraphs; and providing an output listing at least a subset of the nodes within one or more highest-ranking subgraphs.

Regarding claims 2-10 and 12-19: claims 2-10 and 12-19 are also rejected on the ground of nonstatutory double patenting for similar reasons, as they depend from claims 1, 11 and 20 and therefore inherit the rejections of the independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 6, 7, 9, 11, 16, 17, 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2018/0159876) in view of Fellows (US 2022/0225101).
As per claim 1, Park teaches a computer-implemented method for analyzing a security incident detected in a computer network, the method comprising executing, by one or more computer processors, instructions for performing operations comprising:

accessing data records of at least one of network activity or security alerts associated with the security incident (Park, Paragraph 0049 recites “The cybersecurity knowledge graph is derived one or more data sources and includes a set of nodes, and a set of edges. In one embodiment, processing proceeds as follows using a method. Preferably, the method is automated and begins upon receipt of information from a security system (e.g., a SIEM) representing an offense.”);

extracting, from the data records, node data identifying entities of multiple different types within the computer network and edge data identifying relations between the entities; generating, based on the node and edge data, a multipartite graph representing the entities of the multiple different types as different respective types of nodes and the relations as edges between the nodes (Park, Paragraph 0049 recites “Based on the offense type, context data about the offense is extracted, and an initial offense context graph is built. The initial offense context graph typically comprises a set of nodes, and a set of edges, with an edge representing a relationship between a pair of nodes in the set. At least one of the set of nodes in the offense context graph is a root node representing an offending entity that is determined as a cause of the offense. The initial offense context graph also includes one or more activity nodes connected to the root node either directly or through one or more other nodes of the set, wherein at least one activity node has associated therewith data representing an observable. The root node and its one or more activity nodes associated therewith (and the observables) represent a context for the offense. According to the method, the knowledge graph and potentially other data sources are then examined to further refine the initial offense context graph.”);

identifying, with a graph-based clustering technique, subgraphs within the multipartite graph (Park, Paragraphs 0050-0051 recite “In particular, preferably the knowledge graph is explored by locating the observables (identified in the initial offense graph) in the knowledge graph. Based on the located observables and their connections being associated with one or more known malicious entities as represented in the knowledge graph, one or more subgraphs of the knowledge graph are then generated. A subgraph typically has a hypothesis (about the offense) associated therewith. Using a hypothesis, the security system (or other data source) is then queried to attempt to obtain one or more additional observables (i.e. evidence) supporting the hypothesis. Then, a refined offense context graph is created, preferably by merging the initial offense context graph, the one or more sub-graphs derived from the knowledge graph exploration, and the additional observables mined from the one or more hypotheses. The resulting refined offense context graph is then provided (e.g., to a SOC analyst) for further analysis. An offense context graph that has been refined in this manner, namely, by incorporating one or more subgraphs derived from the knowledge graph as well as additional observables mined from examining the subgraph hypotheses, provides for a refined graph that reveals potential causal relationships more readily, or otherwise provides information that reveals which parts of the graph might best be prioritized for further analysis. The approach thus greatly simplifies the further analysis and corrective tasks that must then be undertaken to address the root cause of the offense.”).
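For orientation, the claim limitations mapped above (building a multipartite graph of typed entities from extracted relations, then identifying subgraphs with a graph-based clustering technique) can be sketched in a few lines of Python. This is purely illustrative: the record format and entity names are invented, and connected-component labeling stands in for the unspecified "graph-based clustering technique"; it is not the applicant's or Park's implementation.

```python
from collections import defaultdict

# Hypothetical extracted relations: (entity_a, type_a, entity_b, type_b),
# as might be pulled from network-activity or alert records (format assumed).
records = [
    ("host-1", "machine", "powershell.exe", "process"),
    ("powershell.exe", "process", "evil.example.com", "destination"),
    ("host-2", "machine", "svchost.exe", "process"),
]

# Build a multipartite graph: each node carries its entity type, each
# relation becomes an undirected edge between nodes of different types.
nodes = {}               # node -> entity type
adj = defaultdict(set)   # adjacency list
for a, ta, b, tb in records:
    nodes[a], nodes[b] = ta, tb
    adj[a].add(b)
    adj[b].add(a)

def subgraphs(nodes, adj):
    """Identify subgraphs; here, simple connected components via DFS."""
    seen, out = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return out

for sg in subgraphs(nodes, adj):
    print(sorted(sg))
```

On this toy data the machine, its spawned process, and the contacted external destination fall into one subgraph, while the unrelated second machine and process form another, which is the behavior the claimed clustering step relies on.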
Park, however, fails to teach ranking the subgraphs based on a metric quantifying an associated severity of threat, and generating an output from at least a subset of the nodes corresponding to nodes within one or more highest-ranking subgraphs.

However, in an analogous art, Fellows teaches ranking the subgraphs based on a metric quantifying an associated severity of threat, and generating an output from at least a subset of the nodes corresponding to nodes within one or more highest-ranking subgraphs (Fellows, Paragraph 0174 recites “The formatting module can populate a given template with relevant data, graphs, or other information as appropriate in various specified fields, along with a ranking of a likelihood of whether that hypothesis cyber threat is supported and its threat severity level for each of the supported cyber threat hypotheses, and then output the formatted threat incident report with the ranking of each supported cyber threat hypothesis, which is presented digitally on the user interface and/or printed as the printable report.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Fellows’ AI cybersecurity system monitoring wireless data transmissions with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of a detailed threat-analysis system for determining a threat’s attributes.

As per claim 6, Park in combination with Fellows teaches the method of claim 1, and Park further teaches wherein the metric quantifying the severity of the threat associated with the subgraphs comprises numbers of security alerts or numbers of known indicators of compromise (IoCs) associated with the subgraphs (Park, Paragraph 0057 recites “FIG. 5 depicts a modeling diagram showing the various entities involved in the technique and their interactions.
As depicted, these entities include the SOC user 500, the SIEM system 502, the (offense) context graph 504, a knowledge graph 506, and a maintenance entity 508. Viewing the interactions from top to bottom, the knowledge graph 506 may be updated with new data/records 510 periodically; this operation is shown as an off-line operation (above the dotted line). The remainder of the figure depicts the process flow referenced above. Thus, the new offense 505 is identified by the SIEM system 502 and used together with the offense details 510 and data mining 512 to generate the context graph 504 via the offense extraction and analysis 514 and context graph building 516 operations. Once built, the knowledge graph 506 is explored 518 to identify one or more subgraphs. The evidence-based threat hypothesis scoring uses the subgraphs at operation 520, and the process may iterate (operation 522) as previously described. After evidence validation and IOC mining 524, the offense investigation 526 is then carried out, typically by the SOC user 500.”).

As per claim 7, Park in combination with Fellows teaches the method of claim 1, and Park further teaches wherein the operations further comprise identifying one or more new IoCs among the subset of the nodes (Park, Paragraph 0057 recites “FIG. 5 depicts a modeling diagram showing the various entities involved in the technique and their interactions. As depicted, these entities include the SOC user 500, the SIEM system 502, the (offense) context graph 504, a knowledge graph 506, and a maintenance entity 508. Viewing the interactions from top to bottom, the knowledge graph 506 may be updated with new data/records 510 periodically; this operation is shown as an off-line operation (above the dotted line). The remainder of the figure depicts the process flow referenced above.
Thus, the new offense 505 is identified by the SIEM system 502 and used together with the offense details 510 and data mining 512 to generate the context graph 504 via the offense extraction and analysis 514 and context graph building 516 operations. Once built, the knowledge graph 506 is explored 518 to identify one or more subgraphs. The evidence-based threat hypothesis scoring uses the subgraphs at operation 520, and the process may iterate (operation 522) as previously described. After evidence validation and IOC mining 524, the offense investigation 526 is then carried out, typically by the SOC user 500.”).

As per claim 9, Park in combination with Fellows teaches the method of claim 1, and Park further teaches the operations further comprising extracting, from the data records, feature data associated with at least one of the nodes or the edges, and assigning feature vectors to the nodes or edges based on the feature data, wherein the subgraphs are identified based in part on the feature vectors (Park, Paragraph 0049 recites “Based on the offense type, context data about the offense is extracted, and an initial offense context graph is built. The initial offense context graph typically comprises a set of nodes, and a set of edges, with an edge representing a relationship between a pair of nodes in the set. At least one of the set of nodes in the offense context graph is a root node representing an offending entity that is determined as a cause of the offense. The initial offense context graph also includes one or more activity nodes connected to the root node either directly or through one or more other nodes of the set, wherein at least one activity node has associated therewith data representing an observable. The root node and its one or more activity nodes associated therewith (and the observables) represent a context for the offense.
According to the method, the knowledge graph and potentially other data sources are then examined to further refine the initial offense context graph.”).

Regarding claims 11 and 20: claims 11 and 20 are directed to a system and a non-transitory computer-readable medium associated with the method of claim 1. Claims 11 and 20 are of similar scope to claim 1 and are therefore rejected under similar rationale.

Regarding claim 16: claim 16 is directed to a similar system associated with the method of claim 6. Claim 16 is similar in scope to claim 6 and is therefore rejected under similar rationale.

Regarding claim 17: claim 17 is directed to a similar system associated with the method of claim 7. Claim 17 is similar in scope to claim 7 and is therefore rejected under similar rationale.

Regarding claim 19: claim 19 is directed to a similar system associated with the method of claim 9. Claim 19 is similar in scope to claim 9 and is therefore rejected under similar rationale.

Claim(s) 2-4 and 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2018/0159876) and Fellows (US 2022/0225101), and in further view of Couterier et al. (US 2017/0214718).

As per claim 2, Park in combination with Fellows teaches the method of claim 1, but fails to teach wherein accessing the data records comprises ingesting data from a data repository, using a specified timeframe as an input parameter to access only data records stored in the data repository that fall within the specified timeframe, the specified timeframe including start and end times associated with the security incident.
However, in an analogous art, Couterier teaches wherein accessing the data records comprises ingesting data from a data repository, using a specified timeframe as an input parameter to access only data records stored in the data repository that fall within the specified timeframe, the specified timeframe including start and end times associated with the security incident (Couterier, Paragraph 0058 recites “FIG. 6 is a diagram illustrating a time dependency for storing forensically interesting data so that it can be analyzed. In this example, suppose that the SIEM application identifies significant egress traffic from an IP address that has a non-desirable geo-location. The SIEM immediately rates this as a suspicious flow. The SIEM then correlates both a domain name and all IP addresses previously associated with the domain from its historical repository. All flows previous retained by the capture appliance (past history) to the event and all flows (future) following the event, containing either this IP address or domain name will be tagged for longer retention. The SIEM creates a filter policy and sends it to the archiver so the network packets are retained in the second repository.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Couterier’s intelligent security context aware elastic storage with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to navigate a repository efficiently.

As per claim 3, Park in combination with Fellows teaches the method of claim 1, but fails to teach wherein accessing the data records comprises ingesting data from a data repository that stores data across multiple organizations, using an identifier of an organization associated with the security incident as an input parameter to filter the data and access only data records pertaining to the organization.
However, in an analogous art, Couterier teaches wherein accessing the data records comprises ingesting data from a data repository that stores data across multiple organizations, using an identifier of an organization associated with the security incident as an input parameter to filter the data and access only data records pertaining to the organization (Couterier, Paragraph 0058 recites “FIG. 6 is a diagram illustrating a time dependency for storing forensically interesting data so that it can be analyzed. In this example, suppose that the SIEM application identifies significant egress traffic from an IP address that has a non-desirable geo-location. The SIEM immediately rates this as a suspicious flow. The SIEM then correlates both a domain name and all IP addresses previously associated with the domain from its historical repository. All flows previous retained by the capture appliance (past history) to the event and all flows (future) following the event, containing either this IP address or domain name will be tagged for longer retention. The SIEM creates a filter policy and sends it to the archiver so the network packets are retained in the second repository.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Couterier’s intelligent security context aware elastic storage with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to navigate a repository efficiently.

As per claim 4, Park in combination with Fellows teaches the method of claim 1, but fails to teach wherein the multiple different types of entities within the computer network comprise types selected among: machines within the computer network, processes executed within the computer network, network destinations external to and connected to the computer network, and users of the computer network.
However, in an analogous art, Couterier teaches wherein the multiple different types of entities within the computer network comprise types selected among: machines within the computer network, processes executed within the computer network, network destinations external to and connected to the computer network, and users of the computer network (Couterier, Paragraph 0058 recites “FIG. 6 is a diagram illustrating a time dependency for storing forensically interesting data so that it can be analyzed. In this example, suppose that the SIEM application identifies significant egress traffic from an IP address that has a non-desirable geo-location. The SIEM immediately rates this as a suspicious flow. The SIEM then correlates both a domain name and all IP addresses previously associated with the domain from its historical repository. All flows previous retained by the capture appliance (past history) to the event and all flows (future) following the event, containing either this IP address or domain name will be tagged for longer retention. The SIEM creates a filter policy and sends it to the archiver so the network packets are retained in the second repository.” And Paragraph 0046 recites “… how long packets should be retained in the second repository, in addition to the packet characteristics of interesting packets. For example, the filtering policies may have retention parameters like “packets like this are interesting for a deterministic period”, “packets like this are interesting forever”, or “packets like this are interesting for until termination is expressly indicated”. In addition, the characteristics for identifying an interesting packet can be expressed in packet characteristics such as IP address, source, destination, etc., or in terms of “flows” and “patterns” indicative of an intrusion. Packets stored in secondary storage 417 can be “tagged” according to the particular filtering policy which caused them to be retained as well as their retention parameter.
In the absence of a requested retention parameter, default retention parameters can be used, both for how long a given filtering policy will be in effect as well as how long packets should be retained in the second repository.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Couterier’s intelligent security context aware elastic storage with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to navigate a repository efficiently.

Regarding claim 12, claim 12 is directed to a similar system associated with the method of claim 2. Claim 12 is similar in scope to claim 2 and is therefore rejected under a similar rationale.

Regarding claim 13, claim 13 is directed to a similar system associated with the method of claim 3. Claim 13 is similar in scope to claim 3 and is therefore rejected under a similar rationale.

Regarding claim 14, claim 14 is directed to a similar system associated with the method of claim 4. Claim 14 is similar in scope to claim 4 and is therefore rejected under a similar rationale.

Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2018/0159876), Fellows (US 2022/0225101) and Couterier et al. (US 2017/0214718), and in further view of Pearcy et al. (US 2017/0116416).

As per claim 5, Park in combination with Fellows and Couterier teaches the method of claim 4, but fails to teach wherein the multiple different types of entities within the computer network comprise processes executed within the computer network and further comprises at least one of child processes of the processes, file hashes associated with the processes, and file signers associated with the file hashes.
However, in an analogous art, Pearcy teaches wherein the multiple different types of entities within the computer network comprise processes executed within the computer network and further comprises at least one of child processes of the processes, file hashes associated with the processes, and file signers associated with the file hashes (Pearcy, Paragraph 0029 recites “SIEM 210 or other suitable portions of system 100 may be configured to aggregate and determine, for given file hashes or other unique identification of a given file, and for the time from when file 202 was first determined to exist on the client or was introduced to the client (i.e., age), determine network activity that is suspicious. Such network activity may include, for example, evidence of URLs accessed by clients with the given file wherein the URLs are not low-risk, or are otherwise IoCs. Furthermore, SIEM 210 or other suitable portions of system 100 may identify total IoC events and their associated risk over the time period. In addition, SIEM 210 or other suitable portions of system 100 may find all transmissions of the given file to or from affected clients. The time of such transmissions and identities of the respective parties may be determined. An event vector may be created for the given file.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Pearcy’s Advanced Threat Protection Cross-Product Security Controller with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to determine suspicious activities in a network and perform mitigating actions.

Regarding claim 15, claim 15 is directed to a similar system associated with the method of claim 5. Claim 15 is similar in scope to claim 5 and is therefore rejected under a similar rationale.

Claim(s) 8 and 18 is/are rejected under 35 U.S.C.
103 as being unpatentable over Park et al. (US 2018/0159876) and Fellows (US 2022/0225101), and in further view of Pearcy et al. (US 2017/0116416).

As per claim 8, Park in combination with Fellows teaches the method of claim 1, but fails to teach the operations further comprising causing a risk-mitigating action to be taken based on the output.

However, in an analogous art, Pearcy teaches causing a risk-mitigating action to be taken based on the output (Pearcy, Paragraph 0031 recites “FIG. 3 is a further illustration of example operation of system 100, according to embodiments of the present disclosure. In particular, FIG. 3 illustrates an instance of event visualization 116 generated by operation of controller 114. FIG. 3 also illustrates options to take remedial action for affected clients. The generation of event visualization 116 and remedial action may be taken as a result of the analysis shown in FIG. 2.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Pearcy’s Advanced Threat Protection Cross-Product Security Controller with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to determine suspicious activities in a network and perform mitigating actions.

Regarding claim 18, claim 18 is directed to a similar system associated with the method of claim 8. Claim 18 is similar in scope to claim 8 and is therefore rejected under a similar rationale.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2018/0159876) and Fellows (US 2022/0225101), and in further view of Wen et al. (US 2022/0201026).
As per claim 10, Park in combination with Fellows teaches the method of claim 1, but fails to teach wherein the graph-based clustering technique comprises at least one of spectral clustering, Louvain clustering, k-means clustering based on Node2Vec embeddings, and k-means clustering based on unsupervised GraphSAGE embeddings.

However, in an analogous art, Wen teaches wherein the graph-based clustering technique comprises at least one of spectral clustering, Louvain clustering, k-means clustering based on Node2Vec embeddings, and k-means clustering based on unsupervised GraphSAGE embeddings (Wen, Paragraph 0034 recites “In some embodiments, spectral clustering may be used to determine the clusters (e.g., clusters 401, 402, and 403) in the node graph 400. A graph cut (e.g., a bisection of a graph) may be calculated by defining a graph Laplacian, finding the significant eigenvector of the Laplacian, and thresholding the eigenvector. Nodes corresponding to elements of the eigenvector above the threshold may belong to a first partition of the graph, and nodes below the threshold belong to a second partition of the graph. Spectral clustering may be used to create any appropriate number partitions, or clusters, of nodes in the graph. The k most significant eigenvectors of the graph Laplacian may be found, the data points in the space spanned by these eigenvectors may be embedded, and the final clusters may be determined via k-means in some embodiments.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Wen’s Industrial process system threat detection with Park’s Consolidating Structured And Unstructured Security And Threat Intelligence With Knowledge Graphs because it offers the advantage of being able to help analyze data in clusters.
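The spectral bisection procedure quoted from Wen (define a graph Laplacian, find its significant eigenvector, and threshold that eigenvector to split the nodes into two partitions) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the toy graph, the `spectral_bisect` helper name, and the zero threshold are assumptions for demonstration, not taken from Wen, Park, or the claims.

```python
import numpy as np

def spectral_bisect(adj):
    """Bisect a graph by thresholding the Fiedler vector of its Laplacian."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]           # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)  # threshold at zero -> two partitions

# Toy graph: nodes 0-2 form one dense triangle, nodes 3-5 form another,
# with a single weak bridge edge between node 2 and node 3.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

labels = spectral_bisect(adj)
# Nodes 0-2 land in one partition and nodes 3-5 in the other.
print(labels)
```

Because an eigenvector's sign is arbitrary, which partition is labeled 0 or 1 can flip between runs; only the grouping of nodes is meaningful. Wen's k-cluster variant generalizes this by embedding nodes in the span of the k most significant eigenvectors and running k-means on those coordinates.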
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO whose telephone number is (571)272-2661. The examiner can normally be reached Mon-Fri 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

RODERICK TOLENTINO
Examiner, Art Unit 2439

/RODERICK TOLENTINO/
Primary Examiner, Art Unit 2439

Prosecution Timeline

Jul 19, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907
SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL
2y 5m to grant Granted Apr 14, 2026
Patent 12592915
INFERENCE-BASED SELECTIVE FLOW INSPECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12580946
SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS
2y 5m to grant Granted Mar 17, 2026
Patent 12580948
CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12572632
SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+35.4%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
