Prosecution Insights
Last updated: April 19, 2026
Application No. 18/953,010

METHODS AND SYSTEMS FOR DETECTING ATTACK CAMPAIGNS ON ELECTRONIC NETWORKS

Non-Final OA (§101, §103)
Filed
Nov 19, 2024
Examiner
CRIBBS, MALCOLM
Art Unit
2497
Tech Center
2400 — Computer Networks
Assignee
Mixmode Inc.
OA Round
1 (Non-Final)
89%
Grant Probability
Favorable
1-2
OA Rounds
2y 7m
To Grant
99%
With Interview

Examiner Intelligence

Grants 89% — above average
89%
Career Allow Rate
679 granted / 765 resolved
+30.8% vs TC avg
Moderate +15% lift
Without
With
+14.6%
Interview Lift
resolved cases with interview
Typical timeline
2y 7m
Avg Prosecution
17 currently pending
Career history
782
Total Applications
across all art units

Statute-Specific Performance

§101
12.5%
-27.5% vs TC avg
§103
42.2%
+2.2% vs TC avg
§102
10.9%
-29.1% vs TC avg
§112
21.5%
-18.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 765 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This action is in response to the correspondence filed 11/19/2024. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

As to claim 1, the claim recites receiving sensor data from one or more sensors coupled to one or more electronic networks; detecting, from the sensor data, a plurality of anomalous relationships between two or more entities of the plurality of entities, or between at least one entity of the plurality of entities and an external entity that is not a part of the one or more electronic networks, or between the one or more electronic networks as a whole and the external entity; and determining, based on the plurality of anomalous relationships, that the one or more electronic networks are being subjected to one or more attack campaigns.

The limitation of “receiving sensor data from one or more sensors,” as drafted, is a process that, under its broadest reasonable interpretation, covers mere data gathering. The “sensors” performing conventional operations of collecting data are well-known, routine, and conventional, and do not amount to significantly more. The limitation of “detecting, from the sensor data, a plurality of anomalous relationships,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “sensors,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “sensors” language, “detecting” in the context of this claim encompasses the user mentally and/or visually examining data. Similarly, the limitation of “determining, based on the plurality of anomalous relationships, that the one or more electronic networks are being subjected to one or more attack campaigns,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of “sensors,” as “determining” in the context of this claim encompasses the user mentally inferring based on the observed data. If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components, then they fall in the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim does not recite any additional elements beyond mere instructions to apply the exception using a generic computer component. Further, the “sensors” perform operations of collecting data, which is a well-known, routine, and conventional operation that does not amount to significantly more. Accordingly, the abstract idea is not integrated into a practical application, as the elements do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are not patent eligible.

As to claims 2-20, the claims do not cure the deficiency of claim 1 and are rejected under 35 USC § 101 for their dependency upon claim 1, as they neither integrate the abstract idea into a practical application nor include elements that amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220394082 A1 to Keren et al. (hereinafter Keren) in view of US 20230275912 A1 to Shahul Hameed et al. (hereinafter Shahul Hameed).

As to claim 1, Keren teaches a method comprising: a.
receiving sensor data from one or more sensors coupled to one or more electronic networks (paragraphs 33, 39 and 40, collection of network object data corresponding to one or more network objects), the one or more electronic networks comprising a plurality of entities (paragraphs 26 and 39, one or more network objects, the objects being virtual entities or instances of systems, devices, or components, including virtual systems, devices, or components, or any combination thereof); b. detecting, from the sensor data, a plurality of anomalous relationships between two or more entities of the plurality of entities, or between at least one entity of the plurality of entities and an external entity that is not a part of the one or more electronic networks, or between the one or more electronic networks as a whole and the external entity (paragraphs 42-45, relationships between network objects are determined, wherein determination of network object relationships at S230 may include analysis of such determined relationships to identify impermissible relationships).

Keren does not explicitly teach: c. determining, based on the plurality of anomalous relationships, that the one or more electronic networks are being subjected to one or more attack campaigns. However, Shahul Hameed teaches determining, based on the plurality of anomalous relationships, that the one or more electronic networks are being subjected to one or more attack campaigns (paragraphs 8, 26 and 29, leveraging useful network relations to detect more subtle IoCs, identifying relationships between entities, identifying one or more IoCs among the entities, discovering new modes of attack).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Keren with the graph-based analysis of security incidents taught by Shahul Hameed in order to enable the efficient detection of attacks associated with additional modes of attack (“attack vectors”) with high precision and recall, which in turn helps plug vulnerabilities otherwise left open and informs risk-mitigating actions, therefore optimizing the overall security and efficiency of the system (paragraphs 8 and 29).

As to claim 2, Shahul Hameed teaches further comprising reporting an alert that one or more electronic networks are being subjected to an attack campaign or storing the alert in an analytics module or a data aggregation module (paragraph 12, issue security alerts triggered in course of a security incident, [attack]).

As to claim 3, Shahul Hameed teaches wherein (c) comprises: i. constructing an indicator network from the plurality of anomalous relationships (paragraph 27, generated graph from extracted node and edge data); ii. evaluating or aggregating indicator network graph features from the indicator network (paragraph 28, performing clustering and ranking of the clusters); and iii. determining, by comparing the indicator network graph features with one or more baseline graph features of a baseline graph of the one or more electronic networks, that one or more of the indicator network graph features are not indicative of normal electronic network behavior (paragraph 12, benign stored for subsequent analysis in the data repositories which are used for comparison to detect IoCs).
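Purely as an editor's illustration of the technique claim 3 recites (not code from the application or the cited references), steps (i)-(iii) could be sketched as building an edge list from anomalous relationships, aggregating simple graph features, and flagging departures from a baseline. The function names, the degree-based features, and the tolerance threshold are all assumptions of this sketch:

```python
from collections import Counter

def build_indicator_network(anomalous_relationships):
    """Step (i): build an edge list from (source, destination) anomalous-relationship pairs."""
    return [tuple(rel) for rel in anomalous_relationships]

def graph_features(edges):
    """Step (ii): aggregate simple per-graph features (edge count, maximum node degree)."""
    degrees = Counter()
    for src, dst in edges:
        degrees[src] += 1
        degrees[dst] += 1
    return {"edge_count": len(edges), "max_degree": max(degrees.values(), default=0)}

def is_anomalous(indicator_features, baseline_features, tolerance=2.0):
    """Step (iii): flag the indicator network when any feature exceeds the baseline by a factor."""
    return any(
        indicator_features[k] > tolerance * max(baseline_features.get(k, 0), 1)
        for k in indicator_features
    )
```

For example, many hosts suddenly connecting to one external IP produces a high-degree node that a low-degree baseline would flag; a real system would of course use richer features than these two.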
As to claim 4, Shahul Hameed teaches further comprising reporting an alert that the one or more of the indicator network graph features are not indicative of normal electronic network behavior or storing the alert in an analytics module or a data aggregation module (paragraph 12, benign stored for subsequent analysis in the data repositories).

As to claim 5, Shahul Hameed teaches wherein the alert comprises a time-series plot showing the occurrence of one or more of the indicator network graph features over time, an indicator showing departures of the one or more indicator network graph features from the one or more baseline graph features, or annotated information about how to interpret the occurrence of the one or more indicator network graph features (paragraph 12, time-stamped data records of the security alerts 132 and/or of other monitored network activity).

As to claim 6, Shahul Hameed teaches wherein the indicator network comprises a plurality of indicator network subgraphs, each indicator network subgraph associated with an anomalous relationship between two or more entities of the plurality of entities, or between at least one entity of the plurality of entities and an external entity that is not a part of the one or more electronic networks, or between the one or more electronic networks as a whole and the external entity (paragraph 28, clustering into groups read as subgraphs, wherein clustering is based on the feature vectors assigned to the nodes and/or edges).

As to claim 7, Shahul Hameed teaches wherein the indicator network graph features comprise paths, motifs, topological structures, cliques, graph embeddings, or connected components associated with the indicator network, or wherein the indicator network graph features are identified using one or more graph neural networks (Fig. 2 and paragraph 18, nodes representing machines, processes, IP addresses, and domains, and four edge types reflecting the relationships between machines and the processes they spawn, processes and the domains to which they connect, domains and the IP addresses to which they resolve, and optionally directly the processes and the IP addresses to which they connect).

As to claim 8, Shahul Hameed teaches wherein (ii) or (iii) comprises counting motifs or paths on the indicator network or the baseline graph (paragraph 19, the number of security alerts associated with each cluster, such as the number of nodes within each cluster that represent security alerts in cases where alerts constitute one type of node, or otherwise the number of security alerts that show up as attributes of nodes (e.g., of machines or processes triggering them) within the cluster).

As to claim 9, Keren teaches wherein (c) further comprises: iv. combining anomalous relationship graph features from the indicator network to thereby form an attack campaign graph (paragraph 44, updating the graph or graphs constructed at S220 to include the determined relationships).

As to claim 10, Shahul Hameed teaches wherein (iv) comprises combining the anomalous relationship graph features to reflect a temporal ordering of the anomalous relationship graph features (paragraph 28, ranking the clusters according to some metric of maliciousness, e.g., based on the number of security alerts associated with each cluster, the number of IoCs associated with each cluster, or a combination of both. An output generated from one or more highest-ranking clusters (i.e., clusters having the greatest associated number of security alerts or IoCs), e.g., an output listing of the nodes, or a subset of the nodes (e.g., the nodes with highest connectivity), within the highest-ranking cluster(s) is provided at 312).
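As an editor's aside, the motif/path counting recited in claim 8 admits a minimal sketch: counting two-hop directed paths (a -> b -> c), one of the simplest graph motifs. The node naming below loosely mirrors the machine/process/domain/IP node types of Shahul Hameed's Fig. 2, but the function itself is an illustrative assumption, not code from either reference:

```python
from collections import defaultdict

def count_two_hop_paths(edges):
    """Count two-hop directed paths (a -> b -> c) over an edge list, a minimal motif count."""
    out_edges = defaultdict(set)
    for src, dst in edges:
        out_edges[src].add(dst)
    return sum(
        1
        for src, mids in out_edges.items()
        for mid in mids
        for dst in out_edges.get(mid, ())
        if dst != src  # exclude trivial back-and-forth a -> b -> a cycles
    )
```

On a chain such as machine -> process -> domain -> IP, this counts two such paths; an unusually high count relative to a baseline graph is the kind of feature the claim contemplates comparing.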
As to claim 11, Shahul Hameed teaches further comprising reporting the attack campaign graph (paragraph 25, a post-mortem report of an incident, as may be generated by security analysts using their threat-hunting experience and the security tools 130, or in some cases automatically by the tools).

As to claim 12, Shahul Hameed teaches further comprising: d. ranking the one or more indicator network graph features or the one or more attack campaigns (paragraph 28, ranking the clusters according to some metric of maliciousness, e.g., based on the number of security alerts associated with each cluster, the number of IoCs associated with each cluster, or a combination of both. An output generated from one or more highest-ranking clusters (i.e., clusters having the greatest associated number of security alerts or IoCs), e.g., an output listing of the nodes, or a subset of the nodes (e.g., the nodes with highest connectivity), within the highest-ranking cluster(s) is provided at 312).

As to claim 13, Shahul Hameed teaches further comprising displaying the ranking of the one or more indicator network graph features or the one or more attack campaigns (paragraph 7, a listing of the nodes within the highest-ranking cluster or clusters may be provided as output to a security analyst, or to a software tool, to facilitate further analysis and/or inform the selection and/or execution of risk-mitigating actions).

As to claim 14, Shahul Hameed teaches wherein (d) comprises ranking the one or more indicator network graph features or the one or more attack campaigns based on an age of each attack campaign, a stage of progression of each attack campaign, or a risk score associated with each attack campaign (paragraph 28, ranking based on the number of security alerts associated with each cluster, the number of IoCs associated with each cluster, or a combination of both).

As to claim 16, Shahul Hameed teaches further comprising reporting the ranking of the one or more indicator network graph features or the one or more attack campaigns (paragraph 28, an output listing of the nodes, or a subset of the nodes (e.g., the nodes with highest connectivity), within the highest-ranking cluster(s) is provided at 312).

As to claim 17, Shahul Hameed teaches further comprising labeling the one or more attack campaigns as one or more known attack campaigns or one or more novel attack campaigns based on a comparison between the one or more attack campaigns and a database of known attack campaigns (paragraphs 7 and 29, previously unknown IoCs may be discovered among the listed nodes, and detected IoCs may include already known IoCs as well as new, previously undiscovered IoCs).

As to claim 18, Keren teaches further comprising permitting a user or analyst of the one or more electronic networks to label the one or more attack campaigns (paragraph 44, updated at S230 by associating one or more data labels, tags, or other, like, features with a graph entry for a network object determined to have a relationship with another object. The association of data labels and tags may further include the association of labels or tags describing various aspects of the determined relationship or connection including, as examples and without limitation, connection source and destination, connection type, connection direction, connection status, connection protocol, and the like, as well as any combination thereof).

As to claim 19, Keren teaches wherein the one or more electronic networks comprise a single electronic network and wherein the single electronic network comprises the two or more entities (FIG. 1, environment including cloud platform including objects and apps).
As to claim 20, Keren teaches wherein the one or more electronic networks comprise at least a first electronic network and a second electronic network, wherein the first electronic network comprises a first entity of the at least two entities, and wherein the second electronic network comprises a second entity of the at least two entities (FIG. 1, environment including cloud platforms including a platform of both objects and apps and a platform of apps).

Allowable Subject Matter

Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: Dependent claim 15 is allowable over the prior art of record, including Keren and Shahul Hameed cited by the Examiner, taken individually or in combination, because the prior art of record fails to particularly disclose, fairly suggest, or render obvious wherein (d) comprises, for each attack campaign, determining a weighted mixture of the age of each attack campaign, the stage of progression of each attack campaign, and the risk score associated with each attack campaign, and ranking each attack campaign based on each weighted mixture, in view of the other limitations of independent claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MALCOLM CRIBBS, whose telephone number is (571) 270-1566. The examiner can normally be reached Monday-Friday, 9:30am-3:30pm and 4:30pm-6:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MALCOLM CRIBBS
Examiner, Art Unit 2497

/MALCOLM CRIBBS/
Primary Examiner, Art Unit 2497
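Editor's note on the allowable subject matter: the weighted mixture recited in claim 15 (a ranking over a weighted combination of age, stage of progression, and risk score) can be sketched in a few lines. The weights, the assumption that the three inputs are pre-normalized to [0, 1], and the function names are all illustrative assumptions, not disclosures from the application:

```python
def campaign_score(age, stage, risk, weights=(0.2, 0.3, 0.5)):
    """Weighted mixture of age, stage of progression, and risk score (inputs in [0, 1])."""
    w_age, w_stage, w_risk = weights
    return w_age * age + w_stage * stage + w_risk * risk

def rank_campaigns(campaigns, weights=(0.2, 0.3, 0.5)):
    """Rank attack campaigns, highest weighted-mixture score first."""
    return sorted(
        campaigns,
        key=lambda c: campaign_score(c["age"], c["stage"], c["risk"], weights),
        reverse=True,
    )
```

Under these example weights, a young but advanced, high-risk campaign outranks an old, stalled, low-risk one, which matches the intuition behind weighting all three factors rather than ranking on any single one.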

Prosecution Timeline

Nov 19, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603755
REMOTE ATTESTATION WITH REMEDIATION
2y 5m to grant Granted Apr 14, 2026
Patent 12593215
MOBILE DEVICE MANAGEMENT AND CONTROL METHOD AND APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12585813
COMBINING ALLOWLIST AND BLOCKLIST SUPPORT IN DATA QUERIES
2y 5m to grant Granted Mar 24, 2026
Patent 12580781
DEVICE MANAGEMENT METHOD, SYSTEM, AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12579306
TECHNIQUES FOR MANAGING ACTIVITY LOGS IN A MANNER THAT PROMOTES USER PRIVACY
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+14.6%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 765 resolved cases by this examiner. Grant probability derived from career allow rate.
