DETAILED ACTION
This action is responsive to the application filed on 11/30/2023. Claims 1, 10 and 18 are independent. Claims 1-20 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/17/2025 have been fully considered but they are not persuasive.
Applicant argues:
Santos fails to disclose, teach, or suggest at least "determine a constraint solution for entity to urgency classification based on the collection of rules, wherein the collection of rules comprises inequality statements mapping one or more features to an entity prioritization". However, Santos fails to disclose, teach or suggest, any collection of rules that comprises inequality statements - let alone that the inequality statements map features to an entity prioritization or that a constraint solution is determined based on the collection of rules comprising said inequality statements that map features to an entity prioritization.
The Examiner disagrees. The portions cited in the previous Office action (paras. [0096], [00120]-[00121]) do not explicitly disclose "inequality statements"; however, Santos in para. [0057] does teach the claimed limitation "wherein the collection of rules comprises inequality statements mapping one or more features to an entity prioritization."
The Examiner interprets para. [0057] as describing a high-risk event determination of the form: number of vulnerabilities > threshold = high risk event.
The comparison "number of vulnerabilities > threshold" disclosed by Santos constitutes an inequality statement because it compares a numerical feature value to a threshold using a relational operator. The outcome of this comparison determines the risk classification (high risk event).
Under the broadest reasonable interpretation, an "inequality statement" refers to a rule that compares a numerical value associated with one or more features to a threshold value using a relational operator (e.g., greater than, less than, greater than or equal to) to determine a classification or prioritization.
This definition aligns with the common mathematical meaning of an inequality.
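The Examiner's interpretation above can be illustrated with a minimal sketch. The feature name, threshold value, and classification labels below are hypothetical and are offered only to show the form of a rule that compares a numerical feature to a threshold with a relational operator; they are not taken from Santos or from the claims.

```python
# Hypothetical illustration of an "inequality statement" rule under the
# broadest reasonable interpretation discussed above: a numerical feature
# is compared to a threshold using a relational operator, and the outcome
# determines a classification.

VULNERABILITY_THRESHOLD = 10  # hypothetical customized/predefined threshold


def classify_event(num_vulnerabilities: int) -> str:
    """Map a feature (number of vulnerabilities) to a risk classification."""
    # Inequality statement: number of vulnerabilities > threshold => high risk
    if num_vulnerabilities > VULNERABILITY_THRESHOLD:
        return "high risk"
    return "normal"
```

Under this sketch, a count exceeding the threshold yields the "high risk" classification, mirroring the mapping of a feature to a prioritization via a relational comparison.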
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Applying the subject matter eligibility test, as outlined in MPEP 2106:
Step 1: Statutory Category
The claims fall within a statutory category. Claims 1-9 are directed to a method (process), claims 10-17 to a non-transitory computer-readable medium (manufacture), and claims 18-20 to a computing system (machine). Thus, the claims are members of the statutory categories, and the analysis moves to Step 2A, Prong One, of the subject matter eligibility test.
Step 2A, Prong One: Judicial Exception
The claims recite a judicial exception, specifically an abstract idea. For example, claims 1, 10 and 18 recite "translating entity prioritization rules to a continuous numerical space"; "maintaining a plurality of alerts stored in an alert history…"; "receiving a collection of rules…"; "determine a constraint solution for entity to urgency classification…"; and "applying the constraint solution to an entity prioritization task…". Such processes are akin to mathematical calculations, which have been recognized as abstract ideas. Thus, the analysis moves to Step 2A, Prong Two.
Accordingly, under Step 2A, Prong One, of the 2019 Guidance, independent claims 1, 10 and 18 each recite an abstract idea in the form of mental processes, mathematical concepts, and methods of organizing human activity, even when generic references to electronic or computer implementation are disregarded.
Step 2A, Prong Two: Integration into a Practical Application
The claims do not integrate the abstract idea into a practical application. The additional elements, such as the recited entities (claims 1, 6, 11), a non-transitory computer-readable medium (claim 10), a processor (claims 10 and 18), a computing system (claim 18), and a memory (claim 18), appear to be generic computer components that do not constitute meaningful limitations amounting to significantly more than the abstract idea. The combination of these additional elements is no more than generic computer functions.
Thus, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. In Recentive Analytics, Inc. v. Fox Corp., 2023-2437 (Fed. Cir. Apr. 18, 2025), the Federal Circuit held that applying generic machine learning techniques to a specific field without improving the underlying technology does not constitute a practical application. The court emphasized that claims must delineate how the machine learning technology achieves a technological improvement. Thus, the analysis moves towards step 2B.
Independent claims 1, 10 and 18 therefore do not integrate the abstract idea into a practical application under Step 2A, Prong Two.
Step 2B: Inventive concept
The claims are additionally analyzed under Step 2B to evaluate whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. When evaluated under Step 2B, the claims recite no more than what is well-understood, routine, and conventional activity in the field. The specification does not provide any indication that the claimed functions are performed by anything other than generic computer components. The mere "translating entity prioritization rules to a continuous numerical space"; "maintaining a plurality of alerts stored in an alert history…"; "receiving a collection of rules…"; "determine a constraint solution for entity to urgency classification…"; and "applying the constraint solution to an entity prioritization task…" are well-understood, routine, and conventional functions when claimed in a merely generic manner, as they are here.
Independent claims 1, 10 and 18 therefore do not recite significantly more than the abstract idea under Step 2B.
Accordingly, independent claims 1, 10 and 18, and dependent claims 2-9, 11-17 and 19-20 that stand with them, do not recite an inventive concept sufficient to transform the abstract idea into a patent-eligible application. The claims are therefore directed to an abstract idea and fail to amount to significantly more than the judicial exception under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dos Santos et al. (WO 2021133479 A1), hereinafter Dos Santos.
Regarding claims 1, 10 and 18, Dos Santos teaches [a] method (Abstract, methods) for translating entity prioritization rules to a continuous numerical space, comprising:
maintaining a plurality of alerts stored in an alert history (para. 0031, [i]n the case of potential incident, relevant contextual information is stored and available for investigation; para. 00103, [a]ggregation device 106 may further provide log information of activity and properties of network coupled devices 122a-b to network monitor device 102. It is appreciated
that log information may be particularly reliable for stable network environments (e.g., where the types of devices on the network do not change often). The log information may include information of updates of software of network coupled devices 122a-b);
receiving a collection of rules for determining urgency of entities (para. 0031-0032, risk score of an entity may be defined as the likelihood of an event happening multiplied by the impact of the event for each type of risk considered. The types of risk, used in computing the risk score, can include cyber-security or cyber-attack risk and operational failure risk; para. 056, priority of an issue can be based on a variety of factors, including the severity of an event (e.g., high security risk or high operational risk of the event source entity), the number of vulnerabilities, IOCs and other related information associated with the one or more entities associated with the issue, the risk associated with each entity that is associated with the issue, and the types of events associated with the issue; para. 0057, [h]igh risk events (e.g., events whose source or destination are associated with a high risk score) and a high number of the vulnerabilities; para. 0067, weighted sum of severity of alerts in a time window, use of a threshold for the sum, and correlation of the sums may be used to determine if a particular severity-based grouping of alerts is relevant or important; para. 00197, logging or storing the compliance level);
determine a constraint solution for entity to urgency classification based on the collection of rules (paras. 0030-0034); and
wherein the collection of rules comprises inequality statements mapping one or more features to an entity prioritization (para [0057] In some embodiments, if each of the events composing the issue have a high severity, then the issue will be marked as high priority. High risk events (e.g., events whose source or destination are associated with a high risk score) and a high number of the vulnerabilities (e.g., above a threshold, for instance, a customized or predefined threshold) can contribute to an issue being high priority. Examiner Note: Number of vulnerabilities > threshold = high risk event)
applying the constraint solution to an entity prioritization task to determine an entity prioritization, wherein the entity prioritization task processes one or more alerts of the plurality of alerts corresponding to the entity (para. 0033, The cyber-attack impact can be based on entity criticality (e.g., how critical an entity is), network criticality (e.g., how an entity is networked with one or more critical entities or whether it is located in a mission-critical area of the network), and proximity to critical devices (e.g. how many “hops” it would require an attacker to move laterally from this device to a critical one). The operational failure impact can be based on entity criticality and network criticality. The resulting risk score is a more powerful metric that allows users to prioritize a response based on the probability of something happening (e.g., vulnerabilities, connectivity to public entities, proximity to infected entities), current evidence of something happening (e.g., alerts), and the impact of the problem or threat; para. 0034, Embodiments may be part of a system that has detection functionality, triage functionality, investigation functionality, and response functionality. The detection functionality may include a threat library (e.g., an OT specific threat library), anomaly detection functionality, and functionality to detect vulnerabilities and indicators of compromise. The triage functionality, may include various embodiments, may include entity risk score functionality (e.g., automatic risk score determination), functionality to map a threat to a security taxonomy (e.g., common security taxonomy, for instance MITRE ATT&CK), and alert aggregation and correlation functionality, as described herein; para.0056. Embodiments are further able to associate a priority with issue. 
For example, the correlation engine may use information from the data model including the edges of the graph relationship and information of the relationships between various entities on the network to prioritize different entities within the network. The priority allows a user (e.g., analyst) to focus on the highest priority issues first. In some embodiments, the priority is based on a scale from information to critical, thereby allowing a user to focus on the most critical issues first. The priority of an issue can be based on a variety of factors, including the severity of an event (e.g., high security risk or high operational risk of the event source entity), the number of vulnerabilities, IOCs and other related information associated with the one or more entities associated with the issue, the risk associated with each entity that is associated with the issue, and the types of events associated with the issue. The factors can be used to determine a prioritization score. For example, certain types of events, for instance events related to the operations of the controllers will have a higher priority than other events that based on the data model are less important for plant operation).
Regarding claim 2, Dos Santos teaches the limitations of claim 1 as described above. Dos Santos further teaches wherein the entity prioritization corresponds to the urgency classification (para. 0096, [t]he vulnerability assessment (VA) system may be configured to identify, quantify, and prioritize (e.g., rank) the vulnerabilities of an entity. The VA system may be able to catalog assets and capabilities or resources of an entity, assign a quantifiable value (or at least rank order) and importance to the resources, and identify the vulnerabilities or potential threats of each resource. The VA system may provide the aforementioned information for use by network monitor device 102; paras. 00120-00121).
Regarding claim 3, Dos Santos teaches the limitations of claim 2 as described above. Dos Santos further teaches wherein at least one of the inequality statements is generated using latent semantic analysis based on a statement from a domain expert (para. 00121, The computing of the similarity function can be the basis on which clustering is performed. This can include pairwise distance clustering for < hosts[j]>, where i and j are individual features of events (e.g., alerts) and entities (e.g., hosts) respectively. The similarity function may be able to extend such that layer by layer further clustering is based on different features in a pairwise distance method. The running of the clustering algorithm creates clusters based on the similarity function. The clusters generated may be evaluated based on measuring the clustering efficiency and accuracy using the quantitative evaluation metrics, for instance, the sum of squared errors (SSE) of each cluster observations or qualitative evaluation with several experts (e.g., expert network engineers), or a combination thereof).
Regarding claims 4, 12 and 20, Dos Santos teaches the limitations of claims 1, 10 and 18, respectively, as described above. Dos Santos further teaches wherein the constraint solution is generated by a constraint solver that solves for the collection of rules, and the collection of rules associates one or more of an actor type, breadth, velocity, or importance to an urgency classification using one or more inequality statements (para. 0058-0060, Aggregation of events can be based on events having the same or similar event types, sources, destinations, protocols, or a combination thereof. Events with similar event types, sources, destinations, protocols, or combination thereof can be considered similar. In various embodiments, machine learning (e.g., unsupervised machine learning) may be used to determine whether events are similar and whether events should be aggregated; para. 00150, Embodiments may support pattern matching for frequency-related patterns).
Regarding claims 5 and 13, Dos Santos teaches the limitations of claims 4 and 12, respectively, as described above. Dos Santos further teaches wherein actor type comprises a characterization of a behavioral intent of an actor and is determined using one or more MITRE cyber-attack technique identifiers (T-numbers) (para. 0034, The triage functionality, may include various embodiments, may include entity risk score functionality (e.g., automatic risk score determination), functionality to map a threat to a security taxonomy (e.g., common security taxonomy, for instance MITRE ATT&CK), and alert aggregation and correlation functionality, as described herein; para. 00150, Embodiments may support pattern matching for complex multi-event patterns including, but not limited, killchain matching and MITRE ATT&CK).
Regarding claims 6 and 14, Dos Santos teaches the limitations of claims 4 and 12, respectively, as described above. Dos Santos further teaches wherein breadth comprises a low, medium, or high classification of a diversity of behaviors of an entity and is determined using one or more equations that map a number of categories of alerts for a corresponding entity to a breadth classification (para. 0048, In some embodiments, alerts, network logs, change logs and other information sources may be accessed, the data normalized, anonymized, filtered, aggregated, correlated, ranged and prioritized, or a combination thereof. Alert correlation algorithms can be roughly divided into categories including similarity based algorithms (e.g., simple or hierarchical rules, machine learning based approaches), knowledge based algorithms, and statistical based algorithms (e.g., analysis of event repetition patterns to correlate with occurred incidents); paras. 00158-00159, An issue is determined based correlation of multiple events. An issue can further be determined based on a correlation of one or more events, alerts, along with context, risk posture, or vulnerabilities, as described herein. Embodiments allow configuration, customization, or a combination thereof of correlation criteria used for correlation of events and other information into issues, as described herein; The category associated with an issue can include security (e.g., malware), operational (e.g., reconfiguration of a controller, malfunction, etc.), attack type or similar attack (e.g., Mirai like attack), etc).
Regarding claims 7 and 15, Dos Santos teaches the limitations of claims 4 and 12, respectively, as described above. Dos Santos further teaches wherein velocity comprises a low, medium, or high classification of how quickly an entity is triggering alerts and is determined using one or more equations that map a number of categories triggered in a given time frame by an entity to a velocity classification (paras. 0065, The alerts may then be filtered based on alerts that are from entities that are down (e.g., offline) along with checking the periodicity of alerts; para. 0067, In various embodiments, a weighted sum of severity of alerts in a time window, use of a threshold for the sum, and correlation of the sums may be used to determine if a particular severity-based grouping of alerts is relevant or important. [0068] In various embodiments, events may be grouped based on bucketing. The bucketing approach may include temporal splitting of events into a bucket of periodic events or periodic group and a bucket of sporadic events or sporadic group. Periodic events and non-periodic (e.g., sporadic) events may be categorized separately. Events associated with blacklisted credentials, weak security protocols, failed connections, and compliance issues may be bucketed or grouped together. Buckets can be used for simple correlation between threat intelligence data, e.g., blacklisted IP addresses and periodic events. This allows increasing the severity associated with an event when a destination IP address of a periodic event is blacklisted).
Regarding claims 8 and 16, Dos Santos teaches the limitations of claims 4 and 12, respectively, as described above. Dos Santos further teaches wherein importance comprises a low, medium, or high classification of importance of a resource on which an entity is operating, and importance is determined based one or more rules that map a device function to an importance classification (para. 0077, embodiments are configured for reducing or preventing event flooding by correlating events into issues, which reflect high level occurrences (e.g., attacks) on a network. The issues may further be categorized (e.g., as security or operational) and prioritized (e.g., critical, high, medium, low, informational) to allow ranking of issues. Embodiments thus enable more effective response to events; para. 00161, a priority associated with the issue is determined. The priority may be critical, high, medium, low, or informational. The priority may be based on criticality of an entity, risk, the severity associated with an issue or an event associated with the issue, etc., as described herein).
Regarding claims 9 and 17, Dos Santos teaches the limitations of claims 1 and 10, respectively, as described above. Dos Santos further teaches further comprising applying the constraint solution to a group prioritization task to determine a group prioritization for a group of entities (para. 0067, In some embodiments, events may be grouped based on severity. The severity based grouping of events may be based on static filtering of alerts including filtering out alerts with low severity and high occurrence, while keeping the filtered out alerts in a database for forensic analysis. While the individual severity of each event might not be enough to warrant specific analysis, the grouping of alerts may reveal trends and associations that clarify the intentions of an attacker. In various embodiments, a weighted sum of severity of alerts in a time window, use of a threshold for the sum, and correlation of the sums may be used to determine if a particular severity-based grouping of alerts is relevant or important).
Regarding claims 11 and 19, these claims recite limitations similar to those of claims 2 and 3, respectively, and are rejected for the same reasons as set forth above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUPALI DHAKAD whose telephone number is (571)270-3743. The examiner can normally be reached M-F 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexander Lagor, can be reached at 571-270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.D./Examiner, Art Unit 2437
/ALEXANDER LAGOR/Supervisory Patent Examiner, Art Unit 2437