Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1, 3, 4, 15, 16, and 18-20 are amended.
Claims 2 and 17 have been cancelled.
The 35 USC 112(b) rejections noted in the non-final Office action have been overcome.
Claims 1, 3-16, and 18-20 are pending.
Priority
This application claims priority to Russian Patent Application No. RU2023116032, filed on June 19, 2023. All priority documents have been received. Therefore, the effective filing date of this application is June 19, 2023.
Response to Arguments
Applicant’s arguments filed on 12/01/2025 have been fully considered.
With respect to the objection to the specification, the objection has not been overcome because the abstract still recites the legal phraseology “comprising” in the phrase “at least one subgraph comprising homogeneous objects”. Examiner suggests removing all legal phraseology.
With respect to the 35 USC 112(b) rejections of claims 1-20 in the non-final Office action mailed on 08/08/2025, the rejections have been overcome by Applicant’s amendments.
With respect to the USC 103 arguments, Applicant has argued with respect to independent claim 1 that Nabeel fails to teach collecting information about unclassified objects in a network that have a generic information with classified malicious objects. Examiner respectfully disagrees. Nabeel teaches ([NABEEL, para. 0045] “building associations between domains from DNS data to reflect their meaningful connections. For example, one association may be that the domains are deployed and/or controlled by the same entity. Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains”) and ([NABEEL, para. 0048] “two domains may be associated if they are co-hosted and belong to the same dedicated apex domain or share more than one public IP address from different hosting providers. Once these associations have been identified and graphed, an inference-based algorithm may be deployed”). As can be seen from these citations, Nabeel teaches building associations between domains, and the maliciousness of a domain can be assessed based on its association with known malicious domains. This is analogous to collecting information about objects in a network that have a generic information with classified malicious objects. Therefore, Nabeel teaches this limitation. The arguments state that Nabeel does not teach recursive discovery of object relations. However, the claim language does not recite any recursive discovery; the claim simply recites, broadly, “collecting information”. As for the limitation “or with unclassified objects that in turn have a generic information with classified malicious objects”, this limitation is recited in the alternative, and therefore the mapping of the initial limitation is sufficient to meet one of the alternatives.
Furthermore, RAY also teaches this limitation: as can be seen in Figure 5 of RAY, various objects are connected to other objects, which are in turn connected to the root cause USB device 512.
Applicant has further argued that Nabeel fails to teach “generating a graph of associations of classified malicious objects and unclassified objects in a form of vertices connected with edges, wherein vertices represent classified malicious objects and unclassified objects, and edges represent associations between said objects”. Examiner respectfully disagrees. Nabeel teaches ([NABEEL, para. 0068] “once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108, the assessment module 136 may employ a belief propagation algorithm to determine a likelihood that a specific domain may be malicious”) ([NABEEL, para. 0045] “Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains.”) ([NABEEL, para. 0093] “Given a dedicated apex domain or a subdomain belonging to this apex domain, if it was known or inferred to be malicious, it was highly likely all subdomains under this apex domain were malicious.”) ([NABEEL, para. 0069, Fig. 2] “ Each domain in the graph 200 belongs to an apex domain. ”) ([NABEEL, para. 0072, Fig. 5] “FIG. 5 illustrates an example graph 500 built by the method of the present disclosure based on the example IP address and domain graph 200. … For example, the association module 132 has determined more domain associations 144 based on a dedicated apex domain relationship. For example, the association module 132 deems two domains associated if they are co-hosted domains and belong to the same dedicated apex domain. This association method is referred to as G-Domain within this disclosure. Of the eight domains in the example IP address and domain graph 200, only domains D2 206, D3 210, and D4 216 qualify for this association. 
This association method combined with the association rules used to build graph 300, combine to produce the example graph 500, containing five of the eight domains found in the example IP address and domain graph 200, providing an expanded graph for maliciousness analysis.”) As can be seen from these citations, Nabeel teaches building a graph of associations between classified malicious objects and unclassified objects. Furthermore, Figures 2-6 clearly show that the graphs consist of vertices and edges. For example, in Figure 5, domains D2-D4 are shown as vertices, and the lines connecting them are edges showing their associations.
Applicant has further argued that Nabeel fails to teach “graph … comprising homogeneous objects … classifying each unclassified object in each subgraph at least as malicious using classification rules”. Examiner respectfully disagrees. Nabeel teaches homogeneous objects, as can be seen in ([NABEEL, para. 0068] “configured to assess the maliciousness of a domain based on a weighted domain graph. For example, once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108”). The term “homogeneous objects” broadly recites objects that are the same or similar. The graphs of NABEEL consist of associations of domains; therefore, all the domains are homogeneous objects. Furthermore, Nabeel teaches classifying each unclassified object in each subgraph at least as malicious using classification rules, as can be seen in ([NABEEL, para. 0068] “The assessment module 136 may be configured to assess the maliciousness of a domain based on a weighted domain graph. For example, once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108 … the assessment module 136 may employ a path-based inference algorithm to determine the likelihood that a specific domain may be malicious”), ([NABEEL, para. 0045] “Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains.”), and ([NABEEL, para. 0075] “the maliciousness of a domain is assessed based on the weighted domain graph. For example, an assessment module 136 may employ a random forest classification algorithm on a weighted domain graph 146 to assess the likelihood of maliciousness.”). Nabeel teaches a classification rule being an association of an unclassified domain with a known malicious domain.
Applicant has further argued that Nabeel fails to teach “extracting from the generated graph of associations at least one subgraph … and containing at least one unclassified object based on at least one of the following: an analysis of a group association between objects; and an analysis of sequential association between objects”. Examiner is not relying on Nabeel to teach this limitation. RAY teaches this limitation, as can be seen in ([RAY, para. 0111] “The event graph 500 may similarly be traversed going forward from one or more of the root cause 504 or the security event 502 to identify one or more other computing objects affected by the root cause 504 or the security event 502. For example, the first file 516 and the second 518 potentially may be corrupted because the USB device 512 included malicious content.”), ([RAY, para. 0112] “The event graph 500 may include one or more computing objects or events that are not located on a path between the security event 502 and the root cause 504. These computing objects or events may be filtered or ‘pruned’ from the event graph 500 when performing a root cause analysis or an analysis to identify other computing objects affected by the root cause 504 or the security event 502.”), and ([RAY, para. 0114] “The event graph 500 may be created or analyzed using rules that define one or more relationships between events and computing objects”). As can be seen, Ray teaches generating an event graph 500 which is “pruned” to identify other computing objects affected by the root cause. The pruning of the event graph is similar to extracting a subgraph as recited in the claim language. Therefore, RAY teaches this limitation.
Applicant has further argued that Nabeel fails to teach restricting access to an object that is classified as malicious. Examiner is relying on RAY to teach this limitation. RAY teaches ([RAY, para. 0103] “if a particular process executing on the endpoint is compromised, or potentially compromised or otherwise under suspicion, keys to that process may be revoked in order to prevent, e.g., data leakage or other malicious activity”) and ([RAY, para. 0065] “Based on reputation, potential threat sources may be blocked, quarantined, restricted, monitored, or some combination of these, before an exchange of data can be made.”). The process of RAY is restricted by revoking keys to prevent other malicious activity.
The combination of Nabeel and Ray under USC 103 is proper because both references relate to determining maliciousness by building a graph of associations. Ray recognizes the need for improvements to correlation, analysis, and visualization ([RAY, abstract]).
With respect to the arguments of claim 5, Applicant has argued that NABEEL-RAY fails to teach wherein the graph of associations contains only associations between objects of different types. Examiner respectfully disagrees. RAY teaches ([RAY, para. 0106] “As part of a root cause analysis, one or more cause identification rules may be applied to one or more of the preceding computing objects having a causal relationship with the detected security event 502, or to each computing object having a causal relationship to another computing object in the sequence of events preceding the detected security event 502.”). Furthermore, Figure 5 of RAY shows a graph of different types of objects and related actions that are associated together. Therefore, RAY teaches this limitation.
With respect to the arguments of claim 10, Applicant has argued that NABEEL-RAY fails to teach wherein each of the analyses is performed by at least one machine learning model. Examiner respectfully disagrees. NABEEL teaches ([NABEEL, para. 0052] “The machine learning module 128 may be configured to include a classification module 130, an association module 132, a graph-building module 134, and an assessment module 136.”) and ([NABEEL, para. 0075] “providing data to a machine learning module, wherein the machine learning module was previously trained on a plurality of IP address attributes and a plurality of domain attributes”). The machine learning module of Nabeel is the one that includes the classification and graph-building modules. Therefore, the analysis is performed by the machine learning module.
With respect to the arguments of claim 13, Applicant has argued that NABEEL-RAY fails to teach wherein the analysis of a sequential association between objects uses information about at least three objects having an association. Examiner respectfully disagrees. NABEEL teaches ([NABEEL, para. 0071] “For example, the association module 132 deems two domains associated if they share at least on dedicated IP address. This association method is referred to as G-IP within this disclosure. In example graph 200, domains D4 216, D5 220, and D6 224 share the dedicated IP6 222. This association method combined with the association rules used to build graph 300, combine to produce the example graph 400”). Applicant has argued that the claim requires “triplets spanning different object types”. However, this language is not recited in the claim. The claim broadly recites “at least three objects having an association”. NABEEL teaches this limitation.
With respect to the arguments of claim 14, Applicant has argued that NABEEL-RAY fails to teach wherein the analysis of a group association between objects uses information about at least four objects, three of which have an association to a fourth. Examiner respectfully disagrees. NABEEL teaches ([NABEEL, para. 0073] “FIG. 6 illustrates an example graph 600 built by the method of the present disclosure based on the example IP address and domain graph 200. The graph 600 is expanded beyond the prior example graphs 300, 400, and 500 by implementing all association rules used to create these prior graphs. This association method is referred to as G-IP-Domain within this disclosure. This combination of association rules produces the example graph 600, containing six of the eight domains found in the example IP address and domain graph 200.”). As can be seen in Figure 6 of Nabeel, domains D2, D3, D5, and D6 all share a link to D4. This teaches the limitation that the group association between objects uses information about at least four objects, three of which have an association to a fourth; the fourth object is D4, which shares a link to the other objects. Applicant has argued that the group association trains on a hub object of a second type and at least three objects of a first type. However, this is not recited in the claim language. NABEEL teaches this limitation.
With respect to the arguments of claim 15, Applicant has argued that NABEEL-RAY fails to teach wherein the access to an object that is classified as malicious is restricted to prevent the spread of malicious activity by one of the following: blocking access to the website to which the object is associated; opening the website to which the object is associated in a browser that runs in protected mode; and pausing a transition to the website, and informing a user that the website is associated with a malicious object. Examiner respectfully disagrees. RAY teaches ([RAY, para. 0062] “security management facility 122 … help control web browsing, and the like, which may provide comprehensive web access control enabling safe, productive web browsing. Web security and control may provide Internet use policies”), ([RAY, para. 0134] “As shown in step 810, the method 800 may include displaying the intermediate threat(s) and supplemental information in a user interface for user disposition, or otherwise augmenting a description of the new threat sample in a user interface with the supplemental information”), ([RAY, para. 0140] “the user interface 900 may display a window 906 with more granular information about features contributing to suspiciousness. For example, an analysis of a threat sample may return a 90% suspicion of malicious code, while a file path analysis may return a 57% suspicion, and a URL analysis may return a 77% suspicion. … Furthermore, for any particular feature (e.g., the URL analysis in FIG. 9), a number of most similar events or threat samples for that feature may be displayed”), and ([RAY, para. 0085] “When a threat or other policy violation is detected by the security management facility 122, the remedial action facility 128 may be used to remediate the threat. … block requests to a particular network location or locations”). As can be seen in these citations, RAY teaches enabling safe web browsing. The blocking of requests by RAY stops a website from being loaded.
RAY further teaches displaying to a user a window that shows that a URL is suspicious. The claim recites that access to an object that is classified as malicious is restricted to prevent the spread of malicious activity by one of the following, and then lists different restriction techniques. The mapping of blocking, or of informing a user that the website is associated with a malicious object, alone is sufficient to teach this claim. Examiner suggests omitting the language “one of the following” if Applicant intends to require preventing the spread of malicious activity by performing all the different features of claim 15.
Specification
The abstract of the disclosure is objected to because it contains legal phraseology of “comprising”. Examiner suggests amending the abstract to remove the legal phraseology. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b). Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 16, and 20 recite the limitation “classifying each unclassified object in each subgraph at least as malicious using classification rules”. It is unclear how each of the unclassified objects can be classified at least as malicious, as this would entail that all the unclassified objects are malicious. For the purpose of examination, Examiner is interpreting this limitation as classifying, using classification rules, unclassified objects as malicious. Appropriate correction is required.
Claims 3-15 and 18-19 depend on claims 1, 16, and 20. Therefore, they also inherit the rejection.
Claims 1, 16, and 20 recite the limitation “… associations of classified malicious objects … vertices represent classified malicious objects”. However, the claim earlier recites “generic information with classified malicious objects” in a previous limitation. It is unclear whether the additional instances of classified malicious objects refer to the same classified malicious objects that were initially recited. For the purpose of examination, Examiner is interpreting these limitations as “the classified malicious objects”, referring to the same classified malicious objects. Appropriate correction is required.
Claims 3-15 and 18-19 depend on claims 1, 16, and 20. Therefore, they also inherit the rejection.
Claims 1, 16, and 20 recite the limitation “and unclassified objects in a form of vertices … wherein vertices represent classified malicious objects and unclassified objects”. However, the claim earlier recites “collecting information about unclassified objects”. It is unclear whether the additional instances of unclassified objects refer to the same unclassified objects that were initially recited. For the purpose of examination, Examiner is interpreting these limitations as “the unclassified objects”, referring to the same unclassified objects. Appropriate correction is required.
Claims 3-15 and 18-19 depend on claims 1, 16, and 20. Therefore, they also inherit the rejection.
Claims 1, 16, and 20 recite the limitation “said objects”. It is unclear what objects are being referred to in this limitation. For the purpose of examination, Examiner is interpreting this limitation as “the classified malicious objects and the unclassified objects”. Appropriate correction is required.
Claims 3-15 and 18-19 depend on claims 1, 16, and 20. Therefore, they also inherit the rejection.
Claims 1, 16, and 20 recite the limitation “or with unclassified objects that in turn have a generic information with classified malicious objects”. However, the claim earlier recites “classified malicious objects”. It is unclear whether the additional instance of classified malicious objects refers to the same classified malicious objects that were initially recited. For the purpose of examination, Examiner is interpreting this limitation as “or with unclassified objects that in turn have a generic information with the classified malicious objects”, referring to the same classified malicious objects. Appropriate correction is required.
Claims 3-15 and 18-19 depend on claims 1, 16, and 20. Therefore, they also inherit the rejection.
Claims 3 and 18 recite the limitation "the received input objects". There is insufficient antecedent basis for this limitation in the claims. For the purpose of examination, Examiner is interpreting this limitation as “determining the similarity of the unclassified objects”. Appropriate correction is required.
Claims 4 and 19 depend on claims 3 and 18. Therefore, they also inherit the rejection.
Claim 13 recites the limitation “analysis of a sequential association”. However, claim 1 now recites “an analysis of a sequential association”. Examiner suggests amending claim 13 to recite “analysis of the sequential association”. Appropriate correction is required.
Claim 14 recites the limitation “analysis of a group association”. However, claim 1 now recites “an analysis of a group association”. Examiner suggests amending claim 14 to recite “analysis of the group association”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5-7, 9, 10, 12-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over NABEEL (US-20200382533-A1) in view of RAY (US-20210400071-A1), hereinafter NABEEL-RAY.
Regarding claim 1, NABEEL teaches “A method for classifying objects to prevent the spread of malicious activity, the method comprising: ([NABEEL, abstract] “The presently disclosed method and system exploits information and traces contained in DNS data to determine the maliciousness of a domain based on the relationship it has with other domains.”) ([NABEEL, para. 0043] “detecting these malicious domains in a timely manner is important not only to identify domains on which cyber-attacks have occurred, but also to take preventative measures by identifying these malicious domains before a cyber-attack takes place”) collecting information about unclassified objects in a network that have a generic information with classified malicious objects or with unclassified objects that in turn have a generic information with classified malicious objects; ([NABEEL, para. 0045] “building associations between domains from DNS data to reflect their meaningful connections. For example, one association may be that the domains are deployed and/or controlled by the same entity. Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains”) ([NABEEL, para. 0048] “two domains may be associated if they are co-hosted and belong to the same dedicated apex domain or share more than one public IP address from different hosting providers. Once these associations have been identified and graphed, an inference-based algorithm may be deployed”) generating a graph of associations of classified malicious objects and unclassified objects in a form of vertices connected with edges, wherein vertices represent classified malicious objects and unclassified objects, and edges represent associations between said objects; ([NABEEL, para. 
0068] “once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108, the assessment module 136 may employ a belief propagation algorithm to determine a likelihood that a specific domain may be malicious”) ([NABEEL, para. 0045] “Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains.”) ([NABEEL, para. 0093] “Given a dedicated apex domain or a subdomain belonging to this apex domain, if it was known or inferred to be malicious, it was highly likely all subdomains under this apex domain were malicious.”) ([NABEEL, para. 0069, Fig. 2] “Each domain in the graph 200 belongs to an apex domain. ”) ([NABEEL, para. 0072, Fig. 5] “FIG. 5 illustrates an example graph 500 built by the method of the present disclosure based on the example IP address and domain graph 200. … For example, the association module 132 has determined more domain associations 144 based on a dedicated apex domain relationship. For example, the association module 132 deems two domains associated if they are co-hosted domains and belong to the same dedicated apex domain. This association method is referred to as G-Domain within this disclosure. Of the eight domains in the example IP address and domain graph 200, only domains D2 206, D3 210, and D4 216 qualify for this association. This association method combined with the association rules used to build graph 300, combine to produce the example graph 500, containing five of the eight domains found in the example IP address and domain graph 200, providing an expanded graph for maliciousness analysis.”) … graph … comprising homogeneous objects ([NABEEL, para. 0068] “configured to assess the maliciousness of a domain based on a weighted domain graph. 
For example, once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108”) … classifying each unclassified object in each subgraph at least as malicious using classification rules; and … ([NABEEL, para. 0068] “The assessment module 136 may be configured to assess the maliciousness of a domain based on a weighted domain graph. For example, once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108 … the assessment module 136 may employ a path-based inference algorithm to determine the likelihood that a specific domain may be malicious”) ([NABEEL, para. 0045] “Once the associations have been established, an inference based approach deploys an inference algorithm to assess the maliciousness of a domain based on its associations with known malicious and benign domains.”) ([NABEEL, para. 0075] “the maliciousness of a domain is assessed based on the weighted domain graph. For example, an assessment module 136 may employ a random forest classification algorithm on a weighted domain graph 146 to assess the likelihood of maliciousness.”)
However, NABEEL does not teach “extracting from the generated graph of associations at least one subgraph … and containing at least one unclassified object based on at least one of the following: an analysis of a group association between objects; and an analysis of sequential association between objects; … restricting access to an object that is classified as malicious.”
In analogous teaching RAY teaches “extracting from the generated graph of associations at least one subgraph … and containing at least one unclassified object based on at least one of the following: an analysis of a group association between objects; and an analysis of sequential association between objects; ([RAY, para.0111] “The event graph 500 may similarly be traversed going forward from one or more of the root cause 504 or the security event 502 to identify one or more other computing objects affected by the root cause 504 or the security event 502. For example, the first file 516 and the second 518 potentially may be corrupted because the USB device 512 included malicious content.”) ([RAY, para.0112] “The event graph 500 may include one or more computing objects or events that are not located on a path between the security event 502 and the root cause 504. These computing objects or events may be filtered or ‘pruned’ from the event graph 500 when performing a root cause analysis or an analysis to identify other computing objects affected by the root cause 504 or the security event 502.”) ([RAY, para.0114] “The event graph 500 may be created or analyzed using rules that define one or more relationships between events and computing objects”) restricting access to an object that is classified as malicious. ([RAY, para.0103] “if a particular process executing on the endpoint is compromised, or potentially compromised or otherwise under suspicion, keys to that process may be revoked in order to prevent, e.g., data leakage or other malicious activity”) ([RAY, para.0065] “Based on reputation, potential threat sources may be blocked, quarantined, restricted, monitored, or some combination of these, before an exchange of data can be made.”).
Thus, given the teaching of RAY, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of subgraphs by RAY with the teaching of a method for classifying objects to prevent the spread of malicious activity by NABEEL. One of ordinary skill in the art would have been motivated to do so because RAY recognizes the need for improved threat management ([RAY, abstract] “Event data from these sensors is augmented with contextual information about, e.g., a source of each event in order to facilitate improved correlation, analysis, and visualization at a threat management facility for the enterprise network”).
Regarding claim 16, this claim recites a system that performs the method of claim 1. Therefore, claim 16 is rejected in a similar manner as in the rejection of claim 1.
Regarding claim 20, this claim recites a non-transitory computer-readable medium storing thereon computer-executable instructions which perform the method of claim 1. Therefore, claim 20 is rejected in a similar manner as in the rejection of claim 1.
Regarding claims 3 and 18, NABEEL-RAY teaches all limitations of claims 1 and 16. NABEEL further teaches “wherein the classification rules include at least one of the following: a similarity analysis or an analysis of objects using a machine learning model for determining the similarity of the received input objects.” ([NABEEL, para. 0052] “The machine learning module 128 may be configured to include a classification module 130, an association module 132, a graph-building module 134, and an assessment module 136.”) ([NABEEL, para. 0068] “The assessment module 136 may be configured to assess the maliciousness of a domain based on a weighted domain graph. For example, once the graph-building module 134 has completed building a graph detailing weights and relations between the domains of the domain list 108, the assessment module 136 may employ a belief propagation algorithm to determine a likelihood that a specific domain may be malicious”) ([NABEEL, para. 0063] “The machine learning module 128 may be further configured to include an association module 132. This association module 132 may be configured to associate each of the plurality of domains on the domain list 108 and the resolving IP addresses 114 found within the DNS data 104 on at least the corresponding associations, determining a plurality of domain associations 144.”).
Regarding claim 5, NABEEL-RAY teaches all limitations of claim 1. RAY further teaches “wherein the graph of associations contains only associations between objects of different types.” ([RAY, para. 0106] “The event graph 500 may include a sequence of computing objects causally related by a number of events, and which provide a description of computing activity on one or more endpoints”) ([RAY, para. 0109] “the event graph 500 may be traversed in a reverse order from a computing object associated with the security event 502 based on the sequence of events included in the event graph 500. For example, traversing backward from the action 528 leads to at least the first application 520 and the USB device 512. As part of a root cause analysis, one or more cause identification rules may be applied to one or more of the preceding computing objects having a causal relationship with the detected security event 502, or to each computing object having a causal relationship to another computing object in the sequence of events preceding the detected security event 502.”).
The same motivation to modify NABEEL with RAY as stated in the rejection of claim 1 applies.
Regarding claim 6, NABEEL-RAY teaches all limitations of claim 1. NABEEL further teaches “wherein the objects and object information are at least two of the following types of information: Internet Protocol (IP) address; Fully Qualified Domain Name (FQDN); Universal Resource Identifier (URI) information; domain name data, including information about a domain name registrar; information about an owner of a domain name, including a name of an owner who owns the domain name, an address of the owner of the domain name, an IP address range to which the domain name belongs on the network, and contact information for the owner of the domain name; information about an owner of the IP address, including a name and an address of the owner of the IP address; name of the computer network range; a location that corresponds to an IP address range, including country and city; contact details of an administrator; information about the IP address to which the object belongs; information about public key certificates issued for the domain name; file hash and file path; and web addresses that contain the domain name.” ([NABEEL, para. 0073] “FIG. 6 illustrates an example graph 600 built by the method of the present disclosure based on the example IP address and domain graph 200. … This association method is referred to as G-IP-Domain within this disclosure. This combination of association rules produces the example graph 600, containing six of the eight domains found in the example IP address and domain graph 200.”) ([NABEEL, para. 0053] “These domain based attributes may include the number of fully qualified domain names (“FQDNs”), the number of third level domains which an IP address hosts during a certain time period”).
Regarding claim 7, NABEEL-RAY teaches all limitations of claim 6. RAY further teaches “wherein the URI information comprises at least a page address and page load parameters.” ([RAY, para. 0065] “For instance, reputation filtering may include lists of URIs of known sources of malware or known suspicious IP addresses, code authors, code signers, or domains, that when detected may invoke an action by the threat management facility 100.”) ([RAY, para. 0147] “a URL may be encoded in an event 1106 as a hash of a URL, or as a portion of a URL, or some combination of these (e.g., a literal encoding of the top level domain, and a hash of some or all of the remaining path information).”).
The same motivation to modify NABEEL with RAY as stated in the rejection of claim 1 applies.
Regarding claim 9, NABEEL-RAY teaches all limitations of claim 1. RAY further teaches “wherein at least one subgraph extracts associated components that contain information about associated objects, wherein the at least one object is unclassified.” ([RAY, para. 0112] “The event graph 500 may include one or more computing objects or events that are not located on a path between the security event 502 and the root cause 504. These computing objects or events may be filtered or ‘pruned’ from the event graph 500 when performing a root cause analysis”) ([RAY, para. 0111] “The event graph 500 may similarly be traversed going forward from one or more of the root cause 504 or the security event 502 to identify one or more other computing objects affected by the root cause 504 or the security event 502. For example, the first file 516 and the second 518 potentially may be corrupted because the USB device 512 included malicious content”).
The same motivation to modify NABEEL with RAY as stated in the rejection of claim 1 applies.
Regarding claim 10, NABEEL-RAY teaches all limitations of claim 1. NABEEL further teaches “wherein each of the analysis is performed by at least one machine learning model.” ([NABEEL, para. 0052] “The machine learning module 128 may be configured to include a classification module 130, an association module 132, a graph-building module 134, and an assessment module 136.”) ([NABEEL, para. 0075] “providing data to a machine learning module, wherein the machine learning module was previously trained on a plurality of IP address attributes and a plurality of domain attributes”).
Regarding claim 12, NABEEL-RAY teaches all limitations of claim 1. NABEEL further teaches “wherein the sequential analysis employs at least one neighboring malicious object.” ([NABEEL, para. 0071] “For example, the association module 132 has determined more domain associations 144 based on a dedicated IP address relationship. For example, the association module 132 deems two domains associated if they share at least on dedicated IP address. This association method is referred to as G-IP within this disclosure”).
Regarding claim 13, NABEEL-RAY teaches all limitations of claim 1. NABEEL further teaches “wherein the analysis of a sequential association between objects uses information about at least three objects having an association.” ([NABEEL, para. 0071] “For example, the association module 132 deems two domains associated if they share at least on dedicated IP address. This association method is referred to as G-IP within this disclosure. In example graph 200, domains D4 216, D5 220, and D6 224 share the dedicated IP6 222. This association method combined with the association rules used to build graph 300, combine to produce the example graph 400”).
Regarding claim 14, NABEEL-RAY teaches all limitations of claim 1. NABEEL further teaches “wherein the analysis of a group association between objects uses information about at least four objects, three of which have an association to a fourth” ([NABEEL, para. 0073] “FIG. 6 illustrates an example graph 600 built by the method of the present disclosure based on the example IP address and domain graph 200. The graph 600 is expanded beyond the prior example graphs 300, 400, and 500 by implementing all association rules used to create these prior graphs. This association method is referred to as G-IP-Domain within this disclosure. This combination of association rules produces the example graph 600, containing six of the eight domains found in the example IP address and domain graph 200.”) [Examiner’s note: Objects D5, D3, and D2 all share a link to D4 in figure 6 of NABEEL.]
Regarding claim 15, NABEEL-RAY teaches all limitations of claim 1. RAY further teaches “wherein the access to an object that is classified as malicious is restricted to prevent the spread of malicious activity by one of the following: blocking access to a website to which the object is associated; opening the website to which the object is associated in a browser that runs in protected mode; and pausing a transition to the website, and informing a user that the website is associated with a malicious object.” ([RAY, para. 0062] “security management facility 122 … help control web browsing, and the like, which may provide comprehensive web access control enabling safe, productive web browsing. Web security and control may provide Internet use policies”) ([RAY, para. 0134] “As shown in step 810, the method 800 may include displaying the intermediate threat(s) and supplemental information in a user interface for user disposition, or otherwise augmenting a description of the new threat sample in a user interface with the supplemental information”) ([RAY, para. 0140] “the user interface 900 may display a window 906 with more granular information about features contributing to suspiciousness. For example, an analysis of a threat sample may return a 90% suspicion of malicious code, while a file path analysis may return a 57% suspicion, and a URL analysis may return a 77% suspicion. … Furthermore, for any particular feature (e.g., the URL analysis in FIG. 9), a number of most similar events or threat samples for that feature may be displayed”) ([RAY, para. 0085] “When a threat or other policy violation is detected by the security management facility 122, the remedial action facility 128 may be used to remediate the threat. … block requests to a particular network location or locations”).
The same motivation to modify NABEEL with RAY as stated in the rejection of claim 1 applies.
Claims 4 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over NABEEL-RAY in view of ZHANG (US-12223056-B1).
Regarding claims 4 and 19, NABEEL-RAY teaches all limitations of claims 3 and 18. However, NABEEL-RAY does not teach “wherein the similarity analysis is implemented using a Levenshtein metric.”
In an analogous teaching, ZHANG teaches “wherein the similarity analysis is implemented using a Levenshtein metric.” ([ZHANG, col. 3 lines 57-60, col. 4 lines 35-45] “the graph-based abusive computational node detection system 102 may implement a risky candidate computational node detection component 104 … Algorithms such as Levenshtein distance … may be used to determine similarity between attributes when constructing the input graph 106. The input graph 106 may include some seed nodes that have known labels. These labeled computational nodes are a list of confirmed abusive entities (e.g., known malicious computational nodes) and/or confirmed non-abusive entities. Labeled computational nodes may be so labeled based on historic data (e.g., data indicating that a particular computational node is associated with unauthorized scripting, uploading of malicious code, etc.).”).
Thus, given the teaching of ZHANG, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a Levenshtein metric by ZHANG into the teaching of a method for classifying objects to prevent the spread of malicious activity by NABEEL-RAY. One of ordinary skill in the art would have been motivated to do so because ZHANG recognizes the need to efficiently detect malicious nodes ([ZHANG, col. 2 lines 37-47, col. 3 lines 2-8] “Some common strategies for dealing with automated abuse detection include selection of a set of potentially abusive computational nodes for human investigation based on automated detection system results. … However, while such techniques may be useful, they limit the coverage of computational node abuse detection for several reasons … graph-based abusive computational node detection systems and techniques described herein are able to model high dimensional data to determine relationships between known good computational nodes and/or known abusive computational nodes and potentially risky nodes (e.g., computational nodes under evaluation).”).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over NABEEL-RAY in view of ADAMS (US-20210185075-A1).
Regarding claim 8, NABEEL-RAY teaches all limitations of claim 1. NABEEL teaches “wherein, the generating of the graph of associations containing classified objects and unclassified objects in the form of vertices” as seen in the rejection of claim 1. The same rejection applies.
However, NABEEL-RAY does not teach “further comprises: classifying unclassified objects that are domain names as trusted in an event that the number of requests received from the domain name system exceeds a predetermined threshold.”
In an analogous teaching, ADAMS teaches “further comprises: classifying unclassified objects that are domain names as trusted in an event that the number of requests received from the domain name system exceeds a predetermined threshold.” ([ADAMS, para. 0049] “At step 209, the message security platform 110 may compute an initial set of rank-ordered external domains based on the external domains selected at step 208. … the message security platform 110 may identify a difference between the first ratio and the second ratio, and may apply a weight value to the difference based on a quantity of messages corresponding to the first number of messages and the second number of messages, which may result in a weighted difference value for the external domain (e.g., if the first number of messages and the second number of messages exceed a predetermined threshold, the external domain may correspond to a member of the supply chain that is frequently contacted or otherwise dealt with”).
Thus, given the teaching of ADAMS, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a threshold number of domain name requests by ADAMS into the teaching of a method for classifying objects to prevent the spread of malicious activity by NABEEL-RAY. One of ordinary skill in the art would have been motivated to do so because ADAMS recognizes the need to improve security ([ADAMS, para. 0027] “Some aspects of the disclosure relate to improving enterprise security in electronic communications between an organization and its vendors and/or suppliers, trusted third party entities (which may e.g., be part of the organization's supply chain), and/or other entities.”).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over NABEEL-RAY in view of MCNEE (US-20220150275-A1).
Regarding claim 11, NABEEL-RAY teaches all limitations of claim 10. However, NABEEL-RAY does not teach “wherein the machine learning model is trained by using boosting decision trees.”
In an analogous teaching, MCNEE teaches “wherein the machine learning model is trained by using boosting decision trees.” ([MCNEE, para. 0025] “The machine learning algorithms 110 may comprise any type of machine learning algorithm capable of predictive results. … one example EPSS 100, the ensemble classifier engines 113 use logistic regression, a Bayesian classifier, or a decision tree such as a random forest or a gradient boosted tree.”).
Thus, given the teaching of MCNEE, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of boosting decision trees by MCNEE into the teaching of a method for classifying objects to prevent the spread of malicious activity by NABEEL-RAY. One of ordinary skill in the art would have been motivated to do so because MCNEE recognizes the benefits of using machine learning to detect cybersecurity threats ([MCNEE, para. 0019] “Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for producing and using enhanced machine learning models and computer-implemented tools to investigate cybersecurity related data and threat intelligence data. … which enables security software application and platform providers to build, deploy, and manage applications for evaluating threat intelligence data that can predict malicious domains”).
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
PEREIRA (US-10924503-B1): This prior art teaches systems, methods, and computer-readable media for identifying false positives in malicious domain data using network traffic data logs. Example methods may include determining a first domain name identifier in a set of domain name identifiers classified as malicious, determining a first IP address associated with the first domain name identifier, and determining first virtual private cloud (VPC) flow log data that corresponds to historical network traffic associated with the first IP address. Certain methods may include determining second VPC flow log data that corresponds to historical network traffic associated with a second IP address that is classified as non-malicious, determining, using the first VPC flow log data and the second VPC flow log data, that the first VPC flow log data is non-malicious, and determining that the first domain name identifier is to be classified as non-malicious.
ISLAM (US-10033753-B1): This prior art teaches a method for detecting a cyber-attack that features first and second analyses. The first analysis is conducted on content of a communication to determine at least a first high quality indicator. The first high quality indicator represents a first probative value for classification. The second analysis is conducted on metadata related to the content to determine supplemental indicator(s). Each of the supplemental indicator(s) is represented by a probative value for classification. The communication is classified as being part of the cyber-attack when the first probative value exceeds a predetermined threshold without consideration of the corresponding probative values for the supplemental indicator(s). In response to the first high quality indicator failing to classify the network communication, the corresponding probative values associated with the one or more supplemental indicators are used with at least the first probative value to classify the network communication as being part of the cyber-attack.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A./
03/06/2026
/AFAQ ALI/Examiner, Art Unit 2434
/NOURA ZOUBAIR/Primary Examiner, Art Unit 2434