Prosecution Insights
Last updated: April 19, 2026
Application No. 18/161,652

TECHNIQUES FOR PRIORITIZING RISK AND MITIGATION IN CLOUD BASED COMPUTING ENVIRONMENTS

Final Rejection (§101, §103, §112)
Filed: Jan 30, 2023
Examiner: SRIRAM, ADITYA
Art Unit: 2491
Tech Center: 2400 (Computer Networks)
Assignee: Wiz Inc.
OA Round: 4 (Final)
Grant Probability: 68% (Favorable)
Projected OA Rounds: 5-6
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68%, above average (26 granted / 38 resolved; +10.4% vs TC avg)
Interview Lift: +31.8%, a strong effect, across resolved cases with interview
Avg Prosecution: 3y 1m typical timeline (12 applications currently pending)
Total Applications: 50 across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 40.5% (+0.5% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 38 resolved cases.
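The headline figures above are simple ratios over resolved cases. A minimal sketch of the arithmetic, where the 26 granted / 38 resolved counts come from this report and the helper names and the assumed TC-average figure (58.0%, back-derived from the +10.4% delta shown above) are purely illustrative:

```python
# Illustrative helpers reproducing the dashboard arithmetic.
# Counts (26 granted of 38 resolved) are from the report; function
# names and the assumed TC average are hypothetical.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    if resolved == 0:
        raise ValueError("no resolved cases")
    return round(100 * granted / resolved, 1)

def delta_vs_average(rate: float, tc_average: float) -> float:
    """Signed difference between an examiner's rate and the TC average."""
    return round(rate - tc_average, 1)

rate = allowance_rate(26, 38)          # 68.4, reported as 68%
delta = delta_vs_average(rate, 58.0)   # 10.4, reported as +10.4% vs TC avg
```

The statute-specific rates above would be computed the same way, restricted to cases in which the given rejection was raised.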

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendment, filed 11/04/2025, has been entered and fully considered.

Response to Arguments

Applicant’s arguments, see pages 1-3, regarding the rejection of claims 1-19 under 35 U.S.C. 101 have been fully considered but are not persuasive. Applicant argues that the claim limitation reciting “initiating a mitigation action” cannot be performed solely in a person’s mind because it is “clearly a real-world action against a cyber threat in the cloud computing environment”. However, the claim only recites the initiation of a mitigation action, in general. Under a broadest reasonable interpretation (BRI), the words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The claimed limitation may be satisfied by a mental decision to ‘initiate’ (i.e., begin the process of) the generation of a recommendation, thereby initiating a mitigation action, in general. By contrast, paragraph [0063] of the specification does disclose embodiments of a mitigation action, such as “revoking access to a network resource”, that appear to initiate a mitigation against a cyber threat. The examiner recommends that Applicant amend the claim to specify what is intended by the term “initiate” in order to overcome the above interpretation.

Further, Applicant argues that an improvement may be found so long as an improvement is at least sometimes achieved, using the example of a cache. However, the claim must reflect the improvement itself.
While the claim recites a ‘compact representation’ of a vulnerability node for a particular cybersecurity threat being coupled to each of at least two nodes in which the cybersecurity threat is detected, the claim does not purport to improve computer capabilities, because the severity index is generated based on the identifier in the received alert and the severity indicator. The severity index is not generated based on the security graph or the ‘compact’ vulnerability node, and the mitigation action is subsequently initiated based on the severity index. Therefore, the mitigation action is not recited to be based on the ‘compact representation’ of the vulnerability node or the security graph. Hence, the functionality of the mitigation action is not, at least, ‘sometimes’ improved by the security graph. Further, as noted above, the mitigation action is merely “initiated”, and it is the initiation, not the mitigation action itself, that is “based on the severity index”. In fact, the claimed mitigation action is never positively recited as actually being performed, so further clarification of the claims would be necessary to support Applicant’s instant argument.

Further, Applicant argues that the amended claim implements an improvement in computer technology because the claim is implemented on at least one cloud computing infrastructure. However, another consideration when determining whether a claim integrates the judicial exception into a practical application in Step 2A Prong Two, or recites significantly more than a judicial exception in Step 2B, is whether the additional elements amount to more than generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP 2106.05(h).
Therefore, the claim limitation reciting “implemented on at least one cloud computing infrastructure” merely indicates a technological environment in which to apply a judicial exception; it does not amount to significantly more than the exception itself and cannot integrate the judicial exception into a practical application.

Applicant’s arguments, see pages 3-5, with respect to the rejection of claims 1-19 under 35 U.S.C. 103 with regard to reference Riccetti have been considered but are moot, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Nevertheless, Applicant argues, see pages 5-6, with regard to reference Canzanese, that the user nodes in Canzanese (Canzanese: FIG. 3, user node 1) do not teach the claimed principal nodes because the user nodes in Canzanese are ‘actually user devices’. However, Canzanese teaches “a graph of enterprise network, with nodes representing entities in the network. Examples of entities include user endpoints 121, servers 161 a-m, file names, usernames, hostnames, IP addresses, mac addresses, email addresses, physical locations, instance identifiers, and autonomous system numbers (ASNs) etc.” (Canzanese: paragraph [0037]). Canzanese does not teach that nodes or entities are limited to user devices, distinct from user accounts; a username or an email address corresponds to a user account.

Further, Applicant argues that the user nodes of Canzanese are external to ‘any cloud computing environment’. However, Canzanese teaches “a graph of enterprise network, with nodes representing entities in the network.” Therefore, the user nodes, representing user entities, are in an enterprise network.

Further, Applicant argues that the security graph is not implemented on at least one cloud computing infrastructure.
However, Canzanese teaches that Graph Generator 225 is in Alert Prioritization Engine 158, and Alert Prioritization Engine 158 is coupled to network 155 of Enterprise Network 111 (Canzanese: FIG. 1 and 2). Therefore, since the Alert Prioritization Engine 158 is in System 100, it is an infrastructure component of a cloud computing environment, and hence the security graph of Canzanese is implemented on at least one cloud computing infrastructure.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10-11 recite the limitation "the cybersecurity threat" in lines 12, 14, and 15, respectively. There is insufficient antecedent basis for this limitation in the claim. The Examiner will assume this is a typographical error and that the intended limitation recites “the particular cybersecurity threat”. Claims 2-9 and 12-19 are rejected under a similar rationale. The dependent claims included in the statement of rejection but not specifically addressed in the body of the rejection inherit the deficiencies of their parent claims and have not resolved those deficiencies. Therefore, they are rejected based on the same rationale as applied to their parent claims above.
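The data flow the rejection attributes to the claim — a severity index computed from the alert's identifier and severity indicator alone (not from the security graph), with the mitigation action merely initiated based on that index — can be sketched as follows. This is an illustration of the examiner's characterization only; all names, thresholds, and the scoring rule are hypothetical and do not come from the application.

```python
# Illustrative sketch of the claimed data flow as characterized in the
# rejection: the security graph is not an input to the severity index,
# and the mitigation action is only "initiated", not performed.
# All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    entity_id: str          # identifier of the cloud entity
    severity_indicator: int # severity indicator carried in the alert

def generate_severity_index(alert: Alert) -> int:
    # Per the rejection, only the identifier and severity indicator feed
    # this computation; the security graph is not consulted.
    base = 10 if alert.entity_id.startswith("prod-") else 1
    return base * alert.severity_indicator

def initiate_mitigation(index: int, threshold: int = 50) -> bool:
    # "Initiating" here only begins the process (e.g., queuing a
    # recommendation); the action itself is never positively performed.
    return index >= threshold

alert = Alert(entity_id="prod-db-01", severity_indicator=7)
index = generate_severity_index(alert)  # 70
initiated = initiate_mitigation(index)  # True: mitigation is queued
```

Note that nothing in this flow reads the graph, which is the examiner's point: the ‘compact’ vulnerability node never influences the index or the initiation.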
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (35 U.S.C. 101 judicial exception) without significantly more. The claims recite prioritizing alerts based on a security graph, comprising: “receiving an alert…”, “generating a severity index…”, “initiating a mitigation action…”, which are directed to the abstract idea of mental processes. This judicial exception is not integrated into a practical application because the generically recited computer elements do not add a meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered separately and in combination, do not add significantly more to the abstract idea, as they are well-understood, routine, conventional computer functions as recognized by the courts. Based upon consideration of all the relevant factors with respect to the claimed invention as a whole, the claims are determined to be directed to an abstract idea without significantly more. The rationale for this determination is explained infra.

The following are Principles of Law:

A patent may be obtained for “any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof”; 35 U.S.C. § 101. The Supreme Court has consistently held that this provision contains an important implicit exception: laws of nature, natural phenomena, and abstract ideas are not patentable; see Alice Corp. v. CLS Bank Int’l, 134 S. Ct. 2347, 2354 (2014); Gottschalk v. Benson, 409 U.S. 63, 67 (1972) (“Phenomena of nature, though just discovered, mental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work.”). Notwithstanding that a law of nature or an abstract idea, by itself, is not patentable, an application of these concepts may be deserving of patent protection; see Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1293–94 (2012). In Mayo, the Court stated that “to transform an unpatentable law of nature into a patent-eligible application of such a law, one must do more than simply state the law of nature while adding the words ‘apply it.’” Mayo, 132 S. Ct. at 1294 (citation omitted). In Alice, the Court reaffirmed the framework set forth previously in Mayo “for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of these concepts.” Alice, 134 S. Ct. at 2355.

The test for determining subject matter eligibility requires a first step of determining whether the claims are directed to a process, machine, manufacture, or composition of matter. If the claims are directed to one of the four patent-eligible subject matter categories, then the Examiner must perform a two-part analysis to determine whether a claim that is directed to a judicial exception recites additional elements that amount to significantly more than the exception. The first part of the second step in the analysis is to “determine whether the claims at issue are directed to one of those patent-ineligible concepts.” Id. If the claims are directed to a patent-ineligible concept, then the second part of the second step in the analysis is to consider the elements of the claims “individually and ‘as an ordered combination’” to determine whether there are additional elements that “‘transform the nature of the claim’ into a patent-eligible application.” Id. (quoting Mayo, 132 S. Ct. at 1298, 1297). In other words, the second step in the analysis is to “search for an ‘inventive concept’ ‒ i.e., an element or combination of elements that is ‘sufficient to ensure that the patent in practice amounts to significantly more than a patent on the [ineligible concept] itself.’” Id. (brackets in original) (quoting Mayo, 132 S. Ct. at 1294). The prohibition against patenting an abstract idea “cannot be circumvented by attempting to limit the use of the formula to a particular technological environment or adding insignificant post-solution activity.” Bilski v. Kappos, 561 U.S. 593, 610–11 (2010) (citation and internal quotation marks omitted). The Court in Alice noted that “[s]imply appending conventional steps, specified at a high level of generality,” was not “enough” [in Mayo] to supply an “‘inventive concept.’” Alice, 134 S. Ct. at 2357 (quoting Mayo, 132 S. Ct. at 1300, 1297, 1294).

In the “2019 Revised Patent Subject Matter Eligibility Guidance” (2019 PEG), the USPTO has prepared revised guidance for use by USPTO personnel in evaluating subject matter eligibility based upon rulings by the courts. The Examiner is bound by and applies the framework set forth by the Court in Mayo and reaffirmed by the Court in Alice, and follows the 2019 PEG for determining whether the claims are directed to patent-eligible subject matter.

Step 1: Are the claims at issue directed to a process, machine, manufacture, or composition of matter?

The Examiner finds that the claims are directed to one of the four statutory categories.

Step 2A – Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?

The Examiner finds that the claims are directed to the abstract idea of prioritizing alerts based on a security graph, comprising: “receiving an alert…”, “generating a severity index…”, “initiating a mitigation action…”, which are directed to the abstract idea of mental processes.
Step 2A – Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?

The abstract idea is not integrated into a practical application because the generically recited computer elements do not add a meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer. In determining whether the abstract idea is integrated into a practical application, the Examiner has considered whether there are any limitations indicative of integration into a practical application, such as:

(1) Improvements to the functioning of a computer, or to any other technology or technical field; see MPEP § 2106.05(a)
(2) Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; see Vanda Memo (Recent Subject Matter Eligibility Decision: Vanda Pharmaceuticals Inc. v. West-Ward Pharmaceuticals)
(3) Applying the judicial exception with, or by use of, a particular machine; see MPEP § 2106.05(b)
(4) Effecting a transformation or reduction of a particular article to a different state or thing; see MPEP § 2106.05(c)
(5) Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception; see MPEP § 2106.05(e) and Vanda Memo

The Examiner notes that the claim features of “detecting an alert…”, “generating a severity index…”, and “initiating a mitigation action…” do not improve the functioning of a computer or a technical field, do not effect a particular treatment or prophylaxis for a disease or medical condition, do not apply or use a particular machine, do not effect a transformation or reduction of a particular article to a different state or thing, and do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Instead, the claim features of prioritizing alerts based on a security graph merely use a general-purpose computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)) and merely generally link the use of the abstract idea to a field of use (see MPEP § 2106.05(h)). Thus, the Examiner finds that the claimed invention does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Is there something else in the claims that ensures that they are directed to significantly more than a patent-ineligible concept?

The claims, as a whole, require nothing significantly more than generic computer implementation, or can be performed entirely by a human. The additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than recitation of generic computer structure (e.g., cloud entity, cloud computing environment, cloud computing infrastructure) that serves to perform generic computer functions (e.g., receiving, generating, indicating) that are well-understood, routine, and conventional activities previously known to the pertinent industry. The claimed alert, identifier, severity indicator, security graph, severity index, and mitigation action are all numbers, data structures, or data. Each of these elements is individually dispositive of patent eligibility because of the following legal holdings: “Data in its ethereal, non-physical form is simply information that does not fall under any of the categories of eligible subject matter under section 101.” Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350 (Fed. Cir. 2014). The Supreme Court has also explained that “[a]bstract software code is an idea without physical embodiment,” i.e., an abstraction. Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449 (2007). A claim that recites no more than software, logic, or a data structure (i.e., an abstract idea), with no structural tie or functional interrelationship to an article of manufacture, machine, process, or composition of matter, does not fall within any statutory category and is not patentable subject matter; data structures in ethereal, non-physical form are non-statutory subject matter. In re Warmerdam, 33 F.3d 1354, 1361 (Fed. Cir. 1994); see Nuijten, 500 F.3d at 1357.

Furthermore, the claimed invention does not have a specific asserted improvement in computer capabilities, nor is it a specific implementation of a solution to a problem in the software arts; see Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016). Rather, the claims are merely directed towards the abstract idea of prioritizing alerts based on a security graph, which is similar to ideas that the courts have found to be abstract, as noted supra, and the claims are without a “practical application” or anything “significantly more”. Considering each of the claim elements in turn, the function performed by the computer system at each step of the process does no more than require a generic computer to perform a well-understood, routine, and conventional activity at a high level of generality. For example, “receiving an alert…”, “generating a severity index…”, and “initiating a mitigation action…” have been found by the courts to be well-understood, routine, conventional activities in computers; see, e.g., Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Further note that the abstract idea of prioritizing alerts based on a security graph, to which the claimed invention is directed, has a prior art basis outside of a computing/technological environment, e.g., an investigation of the security of a facility and an alert generated for an unlocked entry door. A security guard obtains a map of the facility and labels the unlocked entry door with a sticker particularly indicating ‘unlocked’. Based on the importance of the door, a mitigation action to close the door may be prioritized. The prohibition against patenting an abstract idea “cannot be circumvented by attempting to limit the use of the formula to a particular technological environment or adding insignificant post-solution activity.” Bilski v. Kappos, 561 U.S. 593, 610–11 (2010) (citation and internal quotation marks omitted). The Court in Alice noted that “[s]imply appending conventional steps, specified at a high level of generality,” was not “enough” [in Mayo] to supply an “‘inventive concept.’” Alice, 134 S. Ct. at 2357 (quoting Mayo, 132 S. Ct. at 1300, 1297, 1294).

Viewed as a whole, the claims simply recite steps performed using generic computer components. The claims do not purport, for example, to improve the functioning of the computer system itself, nor do they effect an improvement in any other technology or technical field. Instead, the claims amount to nothing significantly more than an instruction to implement the abstract idea using generic computer components. This is insufficient to transform an abstract idea into a patent-eligible invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pang et al. (US Pat. App. Pub. 2016/0359891; hereinafter Pang) in view of Moolchandani et al. (US Pat. App. Pub. 2023/0179623; hereinafter Moolchandani), in further view of Canzanese, Jr. et al. (US Pat. App. Pub. 2019/0379700; hereinafter Canzanese).
Regarding claim 1, Pang teaches: A method for prioritizing alerts (Pang: paragraph [0012], “priority ranking for endpoints”) and mitigation actions (Pang: paragraph [0012], “mitigate damage”) against cyber threats in a cloud computing environment (Pang: paragraph [0040], “cloud components”; paragraph [0046], “Endpoints 210 can include any communication device or component, such as a computer, server, hypervisor, virtual machine, container, process (e.g., running on a virtual machine), switch, router, gateway, host, device, external network, etc.”) implemented on at least one cloud computing infrastructure (Pang: paragraph [0048], “the subject technology can be implemented based on any network topology, including any data center or cloud network fabric”), comprising:

receiving an alert (Pang: paragraph [0050], “compromised endpoint 302a can represent an endpoint 302 that has been compromised or misconfigured… Network monitoring system 100 can identify compromised endpoint 302”) based on a cloud entity deployed in the cloud computing environment (Pang: paragraph [0049], “network configurations of network environment 200. Various endpoints 302a-302m (collectively or individually, “endpoint 302”) can run services within the network”), wherein the alert includes an identifier of the cloud entity (Pang: paragraph [0051], “Network monitoring system 100 can identify compromised endpoint 302”, i.e., network monitoring system 100 knows which endpoint was compromised, thereby identifying it) and a severity indicator (Pang: paragraph [0055], “variety of criteria such as distance, critically (e.g., business criticality), network connectivity, redundancy, vulnerability”, i.e., distance, criticality, connectivity, redundancy, vulnerability, etc. may all indicate the severity of a compromised endpoint);…

generating a severity index (Pang: paragraph [0051], “The priority ranking”) for the received alert based on the identifier of the cloud entity (Pang: FIG. 4, Application in table 400 with an associated priority ranking; paragraph [0059], “applications (e.g., endpoints 302)”) and the severity indicator (Pang: paragraph [0051], “The priority ranking can be established using a variety of criteria such as distance, critically (e.g., business criticality), network connectivity, redundancy, vulnerability”); and

initiating a mitigation action (Pang: paragraph [0072], “Triage can mean creating…limiting the traffic to the endpoint 302”) based on the severity index (Pang: paragraph [0071], “If the first endpoint has a higher priority, the system can perform triage on the first endpoint (step 516). If the second endpoint has a higher priority, the system can perform triage on the second endpoint (step 518)”, i.e., triage actions are taken in order of priority ranking).

Pang does not teach: …and wherein the cloud computing environment is represented by a security graph, wherein the security graph includes a plurality of nodes, at least one of the nodes being a resource node, at least one of the nodes being a principal node, and when a particular cybersecurity threat is detected in at least two of the plurality of nodes, at least one of the nodes being a vulnerability node for the particular cybersecurity threat that is coupled to each of the at least two nodes in which the cybersecurity threat is detected;…

However, in the same field of endeavor, Moolchandani does teach: …and wherein the cloud computing environment (Moolchandani: paragraph [0048], “a GUI 500 displaying for a user sub-networks 502, 503, and 504 of a cloud network 501 scanned by the network scanner 111”) is represented by a security graph (Moolchandani: FIG. 5, GUI 500 displays a graph of vulnerabilities in the network; claim 10, “generating a network topology graph, wherein applying the set of vulnerability states to the network topology description comprises applying the set of vulnerability definitions to each network node of the network topology graph while traversing the network topology graph”), wherein the security graph (Moolchandani: FIG. 5, GUI 500 displays a graph of vulnerabilities in the network) includes a plurality of nodes (Moolchandani: paragraph [0048], “a GUI 500 displaying for a user sub-networks 502, 503, and 504 of a cloud network 501”; paragraph [0056], “sub-networks of connected resources”; paragraph [0026], “the system generates a network topology graph representing the network resources as nodes in the graph”), at least one of the nodes being a resource node (Moolchandani: paragraph [0056], “sub-networks of connected resources”; FIG. 5, sub-network 502 is a sub-network of connected resources represented by a node), …and when a particular cybersecurity threat (Moolchandani: paragraph [0024], “The different functions performed by the devices in the computer network are associated with different types of vulnerabilities that may be exploited by unauthorized entities … Overly permissive security protocols throughout the network may allow an entity to change configurations of devices in the network.”, i.e., permissive security is a particular cybersecurity threat) is detected in at least two of the plurality of nodes (Moolchandani: paragraph [0028], “the detected vulnerabilities to identify patterns of detected vulnerabilities that correspond to potential network attacks. The patterns may include one or more nodes”), at least one of the nodes being a vulnerability node for the particular cybersecurity threat (Moolchandani: FIG. 5, Vulnerability 3: Permissive Security is a vulnerability node represented in GUI 500 that corresponds to the particular cybersecurity threat of permissive security) that is coupled to each of the nodes in which the cybersecurity threat is detected (Moolchandani: FIG. 5, Vulnerability 3: Permissive Security is coupled to Sub-network 502 and Sub-network 504, and is not coupled to Sub-network 503);…

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the prioritization and mitigation system of Pang to incorporate the teachings of Moolchandani, representing the components of the network in a graph and using a compact representation of a single vulnerability for a particular cybersecurity threat. The motivation for doing so is to “[identify] which patterns among the vulnerabilities 505 correspond to breach paths 506 which would enable an unauthorized entity to exploit vulnerabilities in the network” (Moolchandani: paragraph [0048]).

Pang and Moolchandani do not teach: …at least one of the nodes being a principal node…

However, in the same field of endeavor, Canzanese does teach: …at least one of the nodes being a principal node (Canzanese: paragraph [0076], “FIG. 3 presents a graph 301 of a computer network in which a host A is connected to a hundred users (user 1 to user 100). Users are connected to the host through an action type connection which is represented by broken lines connecting user nodes with the host A node.”; Applicant’s specification: paragraph [0039], “a principal node 246 represents a user account”)…

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the network graph model of Pang and Moolchandani to incorporate the teachings of Canzanese, additionally using user nodes as components in the graph, in addition to resource nodes (Canzanese: paragraph [0076], “two database nodes”; FIG. 3). The motivation for doing so is to better prioritize vulnerable nodes by incorporating information about the node type (Canzanese: paragraph [0036], “a malware execution on a user endpoint 121 may not have the same priority level as compared to a malware execution on a system used as a jump box to access other user endpoints in the network”; paragraph [0037], “Graphs of enterprise networks can help security analysts visualize entities in the computer network and their alert status. The technology disclosed builds on a graph of enterprise network, with nodes representing entities in the network. Examples of entities include user endpoints 121, servers 161 a-m, file names, usernames”).

Regarding claim 2, Pang, Moolchandani and Canzanese teach the method of claim 1, further comprising: generating the mitigation action based on the received alert (Pang: paragraph [0072], “if the endpoint 302 is a virtual machine or container, migrating it to another machine”, i.e., based on the type of endpoint or cloud component, the system chooses an appropriate mitigation action of moving the endpoint to another machine).

Regarding claim 3, Pang, Moolchandani and Canzanese teach the method of claim 1, further comprising: generating the severity index (Pang: paragraph [0051], “The priority ranking”) based on a policy (Pang: paragraph [0051], “The priority ranking can be established using a variety of criteria such …critically (e.g., business criticality)”; paragraph [0065], “values (or rankings) associated with various criteria (e.g., distance, redundancy, vulnerability, etc.) and then combining multiple values (if there are multiple). Combining can include creating an average, a weighted average, a summation, etc.”) of the cloud computing environment (Pang: paragraph [0049], “network environment 200”).
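The graph structure at issue in claim 1 — resource and principal nodes, plus a single ‘compact’ vulnerability node coupled to each node where a particular threat is detected — can be sketched as follows. This is a minimal illustration of the recited structure only; the class, method, and node names are hypothetical and are not drawn from the application or the cited references.

```python
# Minimal sketch of the claimed security graph: one "compact" vulnerability
# node per detected threat, coupled to every node in which that threat is
# detected. All names are hypothetical.
from collections import defaultdict

class SecurityGraph:
    def __init__(self):
        self.nodes = {}                # node_id -> kind
        self.edges = defaultdict(set)  # node_id -> set of coupled node_ids

    def add_node(self, node_id: str, kind: str) -> None:
        # kind: "resource" (e.g., a VM), "principal" (e.g., a user
        # account), or "vulnerability"
        self.nodes[node_id] = kind

    def couple(self, a: str, b: str) -> None:
        self.edges[a].add(b)
        self.edges[b].add(a)

    def add_threat(self, threat_id: str, affected: list[str]) -> str:
        """When a particular threat is detected in at least two nodes,
        add one compact vulnerability node coupled to each of them."""
        if len(affected) < 2:
            raise ValueError("compact representation requires >= 2 affected nodes")
        vuln_id = f"vuln:{threat_id}"
        self.add_node(vuln_id, "vulnerability")
        for node_id in affected:
            self.couple(vuln_id, node_id)
        return vuln_id

g = SecurityGraph()
g.add_node("vm-1", "resource")
g.add_node("vm-2", "resource")
g.add_node("alice", "principal")
v = g.add_threat("permissive-security", ["vm-1", "vm-2"])
# v is coupled to both affected nodes, but not to "alice"
```

This mirrors the Moolchandani mapping above, where Vulnerability 3 (Permissive Security) is coupled to sub-networks 502 and 504 but not to 503.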
Regarding claim 4, Pang, Moolchandani and Canzanese teach the method of claim 3, wherein the policy (Pang: paragraph [0051], “The priority ranking can be established using a variety of criteria such as distance, critically (e.g., business criticality), network connectivity, redundancy, vulnerability, similarity to compromised endpoint 302a, etc.”) includes a plurality of attributes (Pang: paragraph [0051], “distance, critically (e.g., business criticality), network connectivity, redundancy, vulnerability, similarity to compromised endpoint 302a, etc.”), each attribute (Canzanese: paragraph [0045], “native alert scores”) corresponding to an attribute of a node (Canzanese: paragraph [0045], “The graph generator further assigns native alert scores to nodes that capture alert scores generated by security systems… an association type relationship is stronger than an action type relationship …A higher score is given to edges of a connection type representing a stronger relationship.”; paragraph [0039], “association connection between a user and an IP address is stronger than an authentication action connection between a user and a host, because the IP address is associated with the user for longer than the authenticated session of the user on the host” i.e., a longer network connection is an attribute related to network connectivity) representing the cloud entity (Canzanese: paragraph [0037], “nodes representing entities in the network”) in the security graph (Canzanese: paragraph [0038], “The nodes in graphs of enterprise computer network are connected to each other with different types of edges representing different types of relationships between the nodes”). 
Regarding claim 5, Pang, Moolchandani and Canzanese teach the method of claim 4, wherein at least a portion of the plurality of attributes each corresponds to a vulnerability indicator (Pang: paragraph [0051], “The priority ranking can be established using a variety of criteria such as…vulnerability”; paragraph [0065], “determining values (or rankings) associated with various criteria (e.g., …vulnerability, etc.)”).

Regarding claim 6, Pang, Moolchandani and Canzanese teach the method of claim 1, further comprising: generating the alert in response (Pang: paragraph [0050], “compromised endpoint 302a can represent an endpoint 302 that has been compromised or misconfigured… Network monitoring system 100 can identify compromised endpoint 302”) to any one of: detecting a malware object on the cloud entity, determining an exposure path to the cloud entity, detecting a lateral movement associated with the cloud entity, detecting a misconfiguration (Pang: paragraph [0050], “misconfigured”), and detecting a policy violation in a corresponding infrastructure as code environment.

Regarding claim 7, Pang, Moolchandani and Canzanese teach the method of claim 1, further comprising: querying the security graph (Canzanese: paragraph [0027], “For each starting node with a native alert score, we traverse the graph following edges from the starting node to propagate the starting node's native alert score to neighboring nodes”) based on the identifier of the cloud entity (Pang: paragraph [0052], “compromised endpoint 320a”; Canzanese: paragraph [0078], “the node representing IP 1.1.1.1 is selected as the starting node”; paragraph [0034], “Some security systems apply scores (such as on a scale of 1 to 100) indicating the risk associated with an individual event.
An alert with a score of 100 likely poses a higher threat to the organization's network”) to generate an identifier of another node (Pang: paragraph [0052], “endpoint 320b”; Canzanese: paragraph [0098], “The traversing extends for at least a predetermined span from the starting nodes, through and to neighboring nodes connected by the edges”; paragraph [0078], “from starting node 1.1.1.1 to user 1 node”; FIG. 4A the 1st iteration comprises a traversal from starting node 1.1.1.1 to user 1 node. User 1 node is identified by the traversal due to its association to IP 1.1.1.1 by the edge connecting the nodes), wherein the another node (Canzanese: FIG. 4A, user 1 node) is connected by an edge to a node (Canzanese: FIG. 4A, user 1 node is connected by an edge to Host A node) representing the cloud entity (Canzanese: paragraph [0037], “nodes representing entities in the network”); and generating the severity index (Pang: paragraph [0051], “The priority ranking”; Canzanese: paragraph [0098], “the system ranks and prioritizes clusters for analysis according to the aggregate scores”) based on the identifier (Canzanese: FIG. 4A, User 1 node is identified by the traversal due to its association to IP 1.1.1.1 by the edge connecting the nodes) of the another node (Canzanese: paragraph [0078], “the first iteration, the propagated score from starting node 1.1.1.1 to user 1 node is 34.482”; paragraph [0098], “The system normalizes and accumulates propagated scores at visited nodes, summed with the native score assigned to the visited nodes to generate aggregate scores for the visited nodes”). 
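The Canzanese traversal mapped above (start from a node with a native alert score, propagate along edges for a predetermined span, normalize across neighbors, and accumulate propagated scores with native scores into aggregates used for ranking) can be sketched as follows. The damping factor and equal split among neighbors are assumptions for illustration; the reference's exact propagation formula is not reproduced here.

```python
# Illustrative sketch of alert-score propagation over a security graph:
# breadth-first traversal from a starting node, dividing a damped share
# of the score among neighbors, and accumulating at visited nodes.
from collections import deque

def propagate(edges, native, start, span=2, damping=0.5):
    aggregate = dict(native)  # begin with each node's native alert score
    frontier = deque([(start, native.get(start, 0.0), 0)])
    while frontier:
        node, score, depth = frontier.popleft()
        if depth == span:          # traversal extends only a fixed span
            continue
        neighbors = edges.get(node, [])
        if not neighbors:
            continue
        share = damping * score / len(neighbors)  # normalize across neighbors
        for nbr in neighbors:
            aggregate[nbr] = aggregate.get(nbr, 0.0) + share
            frontier.append((nbr, share, depth + 1))
    return aggregate

edges = {"ip-1.1.1.1": ["user-1", "user-2"], "user-1": ["host-A"], "user-2": []}
native = {"ip-1.1.1.1": 100.0, "host-A": 10.0}
scores = propagate(edges, native, "ip-1.1.1.1")
# Rank visited nodes by aggregate score, highest threat first
print(sorted(scores, key=scores.get, reverse=True))
```

Here user-1 is reached via its edge from the starting IP node (as in Canzanese FIG. 4A), and host-A's aggregate combines its native score with the score propagated through user-1.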
Regarding claim 9, Pang, Moolchandani and Canzanese teach the method of claim 1, further comprising: inspecting the cloud entity to detect a cybersecurity threat (Pang: paragraph [0050], “Compromised endpoint 302a might be running a virus, worm…Network monitoring system 100 can identify compromised endpoint 302” i.e., a virus or a worm may be a cybersecurity threat); and generating the severity indicator (Pang: paragraph [0055], “variety of criteria such as distance, critically (e.g., business criticality), network connectivity, redundancy, vulnerability”) based on the detected cybersecurity threat (Pang: paragraph [0052], “endpoint 302b is a distance of 2 away from compromised endpoint 302a” i.e., the identification of a compromised endpoint 302a is a result of an identification of the detected cybersecurity threat. A severity indicator of distance may only be computed once a compromised endpoint 302a is detected).

The motivation to combine references for the above-listed claims is the same as the motivation stated in the rejection of claim 1.

Re. claims 10-11, they recite analogous limitations as claim 1 and therefore are rejected for the same reasons. Re. claims 12-17, they recite analogous limitations as claims 2-7, respectively, and therefore are rejected for the same reasons. Re. claim 19, it recites analogous limitations as claim 9 and therefore is rejected for the same reason.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pang in view of Moolchandani, further in view of Canzanese, and further in view of Shakarian et al. (U.S. Patent Application Publication No. 2020/0327237; hereinafter Shakarian).
Regarding claim 8, Pang, Moolchandani and Canzanese teach the method of claim 1, wherein the severity indicator (Pang: paragraph [0051], “criteria such as…vulnerability” i.e., vulnerability may indicate the severity of a compromised endpoint) is received (Pang: paragraph [0065], “determining values (or rankings) associated with various criteria (e.g.,…vulnerability, etc.)”) …

Pang, Moolchandani and Canzanese do not teach …from a common vulnerabilities and exposures database. However, in the same field of endeavor, Shakarian teaches …from a common vulnerabilities and exposures database (Shakarian: paragraph [0025], “Common Vulnerabilities and Exposures (CVE) is a unique identifier assigned to each software vulnerability reported in the National Vulnerability Database (NVD)”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the prioritization system of Pang, Moolchandani and Canzanese to incorporate the teachings of Shakarian to obtain vulnerability data from a Common Vulnerabilities and Exposures (CVE) database. The motivation for doing so is to directly map components in the system to known vulnerabilities that might have already been identified (Shakarian: paragraph [0048], “to align the inventory of a known software stack 154 with data 156 from the NIST's CPE numbering system, which then, in-turn, aligns components of the inventory of the software stack 154 with possible vulnerabilities 158 (numbered by CVE number)”; paragraph [0165], “the functionality can be leveraged to identify a plurality or set of CVEs/attack vectors that may be ranked, aggregated, and/or minimized”; paragraph [0164], “the processor 102 identifies possible components of the IT system 130 that are affiliated with at least one CPE (and possible CVE)”).

Re. claim 18, it recites analogous limitations as claim 8 and therefore is rejected for the same reason.
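The Shakarian-style alignment cited above (inventory components matched to CPE-like entries, which in turn map to CVE identifiers) can be sketched as below. The lookup table is a hypothetical stand-in for the National Vulnerability Database, not a real NVD client, and the identifiers are invented for illustration.

```python
# Hedged sketch: align a software-stack inventory with known
# vulnerabilities via a CPE-style key -> CVE list mapping.
CPE_TO_CVES = {
    "vendor:webserver:2.4": ["CVE-2023-0001", "CVE-2023-0002"],
    "vendor:database:5.7": ["CVE-2022-9999"],
}

def vulnerabilities_for_stack(inventory):
    """Map each inventoried component to the CVEs recorded for its CPE key."""
    findings = {}
    for component in inventory:
        # Components with no recorded CPE match yield an empty CVE list
        findings[component] = CPE_TO_CVES.get(component, [])
    return findings

stack = ["vendor:webserver:2.4", "vendor:cache:1.0"]
print(vulnerabilities_for_stack(stack))
# {'vendor:webserver:2.4': ['CVE-2023-0001', 'CVE-2023-0002'], 'vendor:cache:1.0': []}
```

In a real system the mapping would come from the NVD's published CVE records rather than a static dictionary; the resulting per-component CVE lists are what a severity indicator could be "received from."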
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADITYA SRIRAM whose telephone number is (703)756-1715. The examiner can normally be reached Su-Sa: 9:00 AM - 11:59 AM PST and 1:00 PM - 8 PM PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Korzuch, can be reached at (571) 272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.S./
Examiner, Art Unit 2491

/WILLIAM R KORZUCH/
Supervisory Patent Examiner, Art Unit 2491

Prosecution Timeline

Jan 30, 2023
Application Filed
Oct 09, 2024
Non-Final Rejection — §101, §103, §112
Jan 15, 2025
Response Filed
Jan 27, 2025
Final Rejection — §101, §103, §112
Apr 30, 2025
Request for Continued Examination
May 09, 2025
Response after Non-Final Action
Aug 03, 2025
Non-Final Rejection — §101, §103, §112
Nov 04, 2025
Response Filed
Dec 29, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602454
METHOD FOR COLLABORATIVE MANAGEMENT OF LICENSES ACROSS INDUSTRIAL SECTORS
2y 5m to grant Granted Apr 14, 2026
Patent 12603781
METHOD OF CONTRACTING RESERVES USING PEDERSEN COMMITMENT AND METHOD OF PROVING RESERVES USING ZERO-KNOWLEDGE PROOF ALGORITHM BASED ON PEDERSEN COMMITMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12598172
IDENTITY SHARDED CACHE FOR THE DATA PLANE DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12585738
IMAGE FORMING APPARATUS CAPABLE OF CONTROLLING DISPLAY UNIT AND IMAGE FORMING UNIT BASED ON LICENSE STATE, AND CONTROL METHOD FOR THE IMAGE FORMING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12572705
COLLABORATIVE DIGITAL BOARD
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+31.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
