DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant's election without traverse of species I in the reply filed on 11/24/2025 (see page 6 of applicant's Remarks) is acknowledged.
This is a non-final office action in response to applicant’s communication filed on 11/24/2025.
Claims 1-19 are pending and being considered. Claims 20-21 are withdrawn.
Specification
The disclosure is objected to because of the following informalities:
Para. [0162], line 6, recites "…is deemed sale and …"; the term "sale" appears to be a typo.
Appropriate correction is required.
Claim Objections
Claims 1, 11, 13-15, 17-19 are objected to because of the following informalities:
Claim 1 recites, “A system, comprising:
one or more processors configured to:
…; and
a memory coupled to the one or more processors and configured to provide the one or more processors with instructions.”
which is suggested to read:
“A system, comprising:
one or more processors; and
a memory coupled to the one or more processors and configured to provide the one or more processors with instructions that, when executed by the one or more processors, cause the one or more processors to:
…”.
Claim 1 lines 7-8, “… based at least part on …” may read “… based at least in part on …”.
Claim 11 line 6, claim 13 line 1, claim 14 line 2, “the set of seed domains” should read “the set of seed malicious domains”.
Claim 15 line 3, "the one or more expanded network graphs" may read "the expanded one or more network graphs".
Claim 17 line 2, "a number of seed domains" may read "a number of seed malicious domains", or another more appropriate form.
Claim 18 line 2, "… on the internet" may read "… on an internet", or another more appropriate form.
A similar correction applies to claim 19, line 3.
Appropriate correction is suggested.
Examiner Notes
Examiner cites particular paragraphs, columns, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-4, 6-8, 10, 18-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nabeel et al (US20240333749A1, hereinafter, "Nabeel").
Regarding claim 1, Nabeel teaches:
A system, comprising: one or more processors (Nabeel discloses a system and method of proactively detecting malicious domains using graph representation learning, see [Abstract]; and Fig. 9, Processor 910) configured to:
determine a set of seed malicious domains (e.g., [0023] On a batch-mode basis, security system 160 first compiles a seed malicious domain list first seen on a given day and identifies other recent domains hosted on the same infrastructure where the seed malicious domains are hosted. Further refer to Fig. 4, and [0040] FIG. 4 is a flowchart of a method 400 for generating likely malicious seed domains for a batch period);
expand one or more network graphs for the set of seed malicious domains to obtain a set of network neighborhoods ([0029] Thus, after the PDNS crawl is executed, the security system 160 expands a graph in the neighborhood of seed malicious domains to likely discover additional, malicious domains that were not identified in step one);
determine a set of domains expected to be malicious from a set of toxic network neighborhoods, wherein the set of toxic network neighborhoods are determined based at least part on the set of network neighborhoods, and a particular toxic network neighborhood shares a plurality of hosting environments (e.g., [0033] Following this observation, at block 230, the security system 160 executes a PDNS crawl of recently hosted domains and expands the graph in the neighborhood of seed malicious domains to discover other likely malicious domains. While the toxicity of the neighborhood is relatively high compared to random neighborhoods, there are still many benign domains in these neighborhoods, mainly due to shared hosting on public infrastructures);
and perform an action based at least in part on the set of domains expected to be malicious (e.g., [0037] At block 280, once the GNN is trained, then the security system 160 is able to perform domain classification into benign or malicious domains to threat detection, mitigation, and quarantining. The classified domains may be provided individually to a user in response to a query related to a particular domain or as a block list to a user to aid the user in avoiding accessing (or blocking access to or communication from) domains identified as being malicious);
and a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (Fig. 9, Processor and Memory).
Regarding claim 18, and similarly claim 19, Nabeel teaches:
A method, and a computer program product embodied in a non-transitory computer readable medium (Nabeel discloses a system and method of proactively detecting malicious domains using graph representation learning, see [Abstract]; Fig. 9; and [0090], non-transitory computer readable medium), comprising:
determining a set of toxic network neighborhoods on the internet, wherein a particular toxic network neighborhood shares a plurality of hosting environments (e.g., [0020] The security system 160 differentiates malicious domains from benign domains with much less available information than content-based approaches. A key observation is that while the toxicity, (e.g., the ratio of malicious domains to all domains), of hosting infrastructures on the Internet, in general, is very low, the same measure in the neighborhoods that previously hosted malicious domains is relatively high. Stated differently, once a given host has been found to host a malicious domain, the given domain can be assumed (and in practice found) to be more likely to host malicious domains again in the near future);
expanding one or more network graphs for the set of toxic network neighborhoods ([0029] Thus, after the PDNS crawl is executed, the security system 160 expands a graph in the neighborhood of seed malicious domains to likely discover additional, malicious domains that were not identified in step one);
determining a set of domains expected to be malicious from the set of toxic network neighborhoods (e.g., [0033] Following this observation, at block 230, the security system 160 executes a PDNS crawl of recently hosted domains and expands the graph in the neighborhood of seed malicious domains to discover other likely malicious domains. While the toxicity of the neighborhood is relatively high compared to random neighborhoods, there are still many benign domains in these neighborhoods, mainly due to shared hosting on public infrastructures);
and performing an action based at least in part on the set of domains expected to be malicious (e.g., [0037] At block 280, once the GNN is trained, then the security system 160 is able to perform domain classification into benign or malicious domains to threat detection, mitigation, and quarantining. The classified domains may be provided individually to a user in response to a query related to a particular domain or as a block list to a user to aid the user in avoiding accessing (or blocking access to or communication from) domains identified as being malicious).
Regarding claim 3, Nabeel teaches the system of claim 1,
Nabeel further teaches: wherein performing the action comprises performing a maliciousness classification for the set of domains expected to be malicious (e.g., [0037] At block 280, once the GNN is trained, then the security system 160 is able to perform domain classification into benign or malicious domains to threat detection, mitigation, and quarantining. The classified domains may be provided individually to a user in response to a query related to a particular domain or as a block list to a user to aid the user in avoiding accessing (or blocking access to or communication from) domains identified as being malicious).
Regarding claim 4, Nabeel teaches the system of claim 1,
Nabeel further teaches: wherein performing the action comprises performing a crawling of the set of domains based at least in part on using a guided domain crawler (e.g., [0033] Following this observation, at block 230, the security system 160 executes a PDNS crawl of recently hosted domains and expands the graph in the neighborhood of seed malicious domains to discover other likely malicious domains).
Regarding claim 6, Nabeel teaches the system of claim 1,
Nabeel further teaches: wherein the plurality of hosting environments comprise two or more of (a) [a hosting IP address], (b) a TLS certificate, (c) an implemented phishing kit, (d) a registration record, (e) a CNAME record, (f) one or more hyperlinks comprised in a website, (g) malware files hosted at a domain, (h) a redirection chain, (i) a set of keywords, (j) a tracking identifier, and (k) a logo hosted comprised in the website (e.g., [0018] As illustrated, a malicious domain registration 110 occurs at a first time, and is made available via hosting infrastructure 120 at a second time, is issued a TLS certificate 130 at a third time, provided with host content 140 (i.e., malware files hosted at a domain) at a fourth time. And [0046] At block 425, the security system 160 identifies URLs with phishing keywords, such as popular brand impersonating keywords that are more likely to be malicious).
Regarding claim 7, Nabeel teaches the system of claim 1,
Nabeel further teaches: wherein determining the set of toxic network neighborhoods comprises identifying a set of network neighborhoods based at least in part on a set of associations among domains within the set of network neighborhoods (e.g., [0064] the security system 160 takes the five nearest neighbors from the node's neighborhood, and takes the average of the neighbor's features as the node's features. The intuition is that nodes closer to one another tend to have similar characteristics. The more IPs that the two domains are co-hosted at, the more likely there exist strong associations between those domains. The same intuition is also applied to discover the strong association between two IPs if those IPs host many common domains).
Regarding claim 8, Nabeel teaches the system of claim 1,
Nabeel further teaches: wherein the one or more processors are further configured to: obtain a stream of malicious domains from one or more domain classification sources; and determine a set of recently observed malicious domains within the stream of malicious domains (Refer to Fig. 5, and [0048] FIG. 5 is a flowchart of a method 500 for malicious ground truth generation, according to embodiments of the present disclosure. In addition to the output of the seed selection pipeline (e.g., per method 400 discussed in relation to FIG. 4), the security system 160 at block also actively queries a sample set of newly observed domains (e.g., a randomly or otherwise selected subset thereof) to enrich and diversify the batch-period list of malicious domains. In both cases, the security system 160 may use one of the most conservative thresholds of X positive consensus scanners to construct the malicious ground truth).
Regarding claim 10, Nabeel teaches the system of claim 8,
Nabeel further teaches: wherein the one or more processors are further configured to: obtain a stream of malicious IP addresses from one or more IP classification sources; and determine a set of recently observed malicious IP addresses within the stream of malicious IP addresses (e.g., Fig. 2. [0030] FIG. 2 is a flowchart of an example method 200 for the overall pipeline of detecting malicious domains for a given batch period (e.g., day), according to embodiments of the present disclosure. [0031] At block 210, the security system 160 receives a URL feed for the blocklisted URLs for the batch period. [0032] At block 220, the security system 160 extracts a set of seed domains in the batch period malicious seed domains using the daily blocklisted URLs. And [0029] The security system 160 then executes a PDNS crawl, using the second set of daily malicious domains. The PDNS crawl is executed to further identify domains with the same IP address as the malicious domains of the second set).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Nabeel as applied above to claim 1, in view of Khalil et al (US20180343272A1, hereinafter, “Khalil”).
Regarding claim 2, Nabeel teaches the system of claim 1,
While Nabeel does not specifically teach the following, Khalil, in the same field of endeavor, teaches:
wherein a domain is deemed to be a seed domain in response to determining that a likelihood that the domain is malicious exceeds a predefined maliciousness threshold (Khalil, discloses system and method to identify malicious web domain names, see [Abstract]. And [0129] The above experiment results suggest that to have a good tradeoff between true positives and false positives some embodiments could either have small set of seeds with low malicious thresholds or have a large set of seeds (relative to all malicious domains) while setting the threshold relatively high (between 0.7 to 0.85). In practice, however, it is not possible to know for sure whether the known malicious domains collected is large enough. Thus, the general practice of some embodiments would be to obtain as many known malicious domains as possible to form the seeds, and then set a high threshold value (e.g., 0.85) to avoid high false positives).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Khalil in the detection of malicious domains using graph representation learning of Nabeel by determining seed malicious domains based on the setting of a relative threshold. This would have been obvious because the person having ordinary skill in the art would have been motivated to avoid false positives when detecting malicious domains (Khalil, [Abstract]).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Nabeel as applied above to claim 1, in view of Voros et al (US20220353284A1, hereinafter, “Voros”).
Regarding claim 5, Nabeel teaches the system of claim 1,
While Nabeel does not specifically teach the following, Voros, in the same field of endeavor, teaches:
wherein performing the action comprises prioritizing a classifying of the set of domains expected to be malicious over domains comprised in a non-toxic network neighborhood (Voros, discloses methods and apparatus to classify malicious infrastructure using machine learning, [Abstract]. And [0093] The classification of an infrastructure such as an IP address may be used in combination with other events or other factors to determine the likelihood of maliciousness of an application that is in communication with a given infrastructure. For example, an application with an unknown reputation that is determined to be in communication with an IP address with a likelihood of maliciousness may be determined to be a potential threat for further investigation or remediation. The application may be classified as high-risk or malicious. Additional restrictions or changes to rule thresholds may be instituted for a given network flow, application, compute instance, endpoint, or other artifact that relates to communication with a given IP address based on a classification. Other events associated with the network flow, application, compute instance, or other artifact that relates to the communication may be prioritized for investigation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Voros in the detecting malicious domains using graph representation learning of Nabeel by classifying infrastructure such as an IP address in combination with other events or other factors to determine the likelihood of maliciousness of an application with priority for investigation. This would have been obvious because the person having ordinary skill in the art would have been motivated to detect a reputation of infrastructure associated with potentially malicious content (Voros, [Abstract]).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Nabeel as applied above to claim 8, in view of Jursa et al (US20230283632A1, hereinafter, “Jursa”).
Regarding claim 9, Nabeel teaches the system of claim 8,
While Nabeel does not specifically teach the following, Jursa, in the same field of endeavor, teaches:
wherein a recently observed malicious domain corresponds to a domain for which network traffic was intercepted within a most recent predefined number of days (Jursa, discloses methods and system for detecting malicious URL redirection chains, [Abstract]. And [0022] Security module 114 uses browser plugin 116 and redirect processing module 118 to evaluate the redirects, using recent data (e.g. over a recent period of hours or days) including observed malicious landing domain data and/or redirect data for redirects associated with malicious landing domains. Examiner notes, in this case, redirect is interpreted as intercepted).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Jursa in the detecting malicious domains using graph representation learning of Nabeel by blocking the redirection domain if the rate of occurrence of the subject redirection domain exceeds a rate of occurrence threshold. This would have been obvious because the person having ordinary skill in the art would have been motivated to detect and avoid malicious URL redirection chains (Jursa, [Abstract], [0001]).
Claims 11, 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Nabeel as applied above to claim 10, in view of Chen et al (US20230188541A1, hereinafter, “Chen”).
Regarding claim 11, Nabeel teaches the system of claim 10,
While Nabeel does not specifically teach the following, Chen, in the same field of endeavor, teaches:
wherein: the one or more processors are further configured to: query one or more machine learning models for a predicted maliciousness classification based at least in part on one or more of the set of recently observed malicious domains and the set of recently observed IP addresses (Chen, discloses system and method for determining whether a registered domain is malicious, [Abstract]. And [0036] For example, prediction engine 174 applies a machine learning model to determine whether the newly registered domain is malicious. Applying the machine learning model to determine whether the newly registered domain is malicious may include prediction engine 174 querying machine learning model 176 (e.g., with the registration information for the domain));
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Chen in the detecting malicious domains using graph representation learning of Nabeel by querying machine learning model for newly registered domain. This would have been obvious because the person having ordinary skill in the art would have been motivated to predict whether a domain is malicious (Chen, [Abstract]).
Nabeel further teaches: and the set of seed domains is determined based at least in part on identifying domains having an associated predicted maliciousness classification that satisfies a maliciousness criteria ([0071] The goal of the security system 160 as a real-time classifier is to assess the maliciousness of any domain in the world. Thus, the aim of the real-time classifier includes further functionality beyond blocklist generation (i.e., maliciousness criteria). In blocklist generation, the seed nodes are known to be highly likely to be malicious, and within the computation graph of each node there exists at least one malicious node. And [0073] The security system 160 generates a batch-period (e.g., daily) blocklist based on the newly observed seed malicious domains gathered from a consensus feed every batch period and other data sources 820 via the graph classifier 810).
Regarding claim 13, Nabeel-Chen combination teaches the system of claim 11,
Nabeel further teaches: wherein the set of seed domains are used for a guided crawling of domains to identify a set of domains observed within an immediately preceding N days, where N is a predefined positive integer ([0051] At block 530, the security system 160 checks the WHOIS or other registration data for the domain. If the domain is new (e.g., registered or tracked for less than R or D days), or registration data are unavailable, the security system 160 labels the domain as malicious as part of the batch-period seed domains 550).
Regarding claim 14, Nabeel-Chen combination teaches the system of claim 11,
Nabeel further teaches: wherein the set of network neighborhoods is determined based at least in part on the set of seed domains ([0029] Thus, after the PDNS crawl is executed, the security system 160 expands a graph in the neighborhood of seed malicious domains to likely discover additional, malicious domains that were not identified in step one. And [0033] the security system 160 executes a PDNS crawl of recently hosted domains and expands the graph in the neighborhood of seed malicious domains to discover other likely malicious domains).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Nabeel-Chen as applied above to claim 11, further in view of Ron et al (US20230114721A1, hereinafter, “Ron”).
Regarding claim 12, Nabeel-Chen combination teaches the system of claim 11,
The combination of Nabeel-Chen does not specifically teach the following; however, in the same field of endeavor, Ron teaches:
wherein the maliciousness criteria is one of: (a) a domain is within a top N most malicious domains where N is a predefined positive integer, and (b) a domain has an associated predicted maliciousness classification that exceeds a predefined maliciousness threshold (Ron, discloses method for classifying domains to malware families, [Abstract]. And [0005] … identifying one or more suspicious domains, extracting a timeframe corresponding to the one or more suspicious domains, calculating a rank coefficient between the one or more suspicious domains and a current seed domain of the corpus of malicious domains, determining whether the rank correlation coefficient exceeds a rank threshold for the one or more suspicious domains, comparing a number of suspicious domains whose correlation coefficients exceed the rank threshold to a relation threshold, and responsive to determining the number of suspicious domains whose correlation coefficients exceed the rank threshold exceeds the relation threshold, applying a tag to the suspicious domains indicating that the one or more suspicious domains correspond to a same malware family as the current seed domain).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Ron in the detecting malicious domains using graph representation learning of Nabeel-Chen by calculating a rank coefficient between suspicious domains and a current seed domain of corpus of malicious domains. This would have been obvious because the person having ordinary skill in the art would have been motivated to determine the suspicious domains correspond to a same malware family as the current seed domain (Ron, [Abstract]).
Claims 15, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nabeel-Chen as applied above to claim 14, further in view of Lem et al (US20190132344A1, hereinafter, “Lem”), and further in view of Crabtree et al (US20230362141A1, hereinafter, “Crabtree”).
Regarding claim 15, Nabeel-Chen combination teaches the system of claim 14,
The combination of Nabeel-Chen does not specifically teach the following; however, in the same field of endeavor, Lem teaches:
wherein determining the set of domains expected to be malicious from the set of toxic network neighborhoods comprises: performing a clustering with respect to the one or more expanded network graphs to identify a set of network neighborhoods (Lem, discloses system and method for using graph analysis for detecting malicious activity in time evolving networks, see [Abstract]. And [0012] … wherein said analyst feedback comprises confirming, rejecting, and/or modifying the malicious predictions of said output. In further aspects, the analyst feedback step further comprises generation of malicious scores for entities with neighboring relationships to the labeled/ predicted entities, a graph clustering step, comprising clustering the output generated according to the malicious inference method, wherein clustering comprises grouping entities presented in the output according to a logic which facilitates analyst investigation);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Lem in the detecting malicious domains using graph representation learning of Nabeel-Chen by performing graph clustering. This would have been obvious because the person having ordinary skill in the art would have been motivated to detect malicious entities and malicious behavior in a time evolving network via a graph framework by modeling activity in a network graph representing associations between entities (Lem, [Abstract]).
The combination of Nabeel-Chen-Lem does not specifically teach the following; however, in the same field of endeavor, Crabtree teaches:
determining a toxicity level for each of the set of network neighborhoods; and determining the set of toxic network neighborhoods based at least in part on determining a subset of the set of network neighborhoods having a corresponding toxicity level above a predefined toxicity threshold (Crabtree, discloses system and method for network authentication toxicity assessment, see [Title]/[Abstract]. And [0054] ... determine the total amount of authentication objects gathered; determine what portion of the authentication objects are bad authentication objects; output a network toxicity report comprising the number of bad authentication objects as a proportion of the total amount of authentication objects gathered; and where the proportion meets or exceeds a threshold, issue a warning to a network administrator that the network’s toxicity has exceeded the threshold. And [0155] Accordingly, a useful metric in such analysis is network “toxicity,” defined as the proportion of “good” authentications in the network versus “bad” or less secure authentications).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Crabtree in the detecting malicious domains using graph representation learning of Nabeel-Chen-Lem by determining toxicity in network authentication. This would have been obvious because the person having ordinary skill in the art would have been motivated to determine what level of network “toxicity” is operationally acceptable to ensure that a network (or a process within a network) is generally safe to enable zero trust network security (Crabtree, [Abstract], [0159]).
Regarding claim 17, Nabeel-Chen-Lem-Crabtree combination teaches the system of claim 15,
Nabeel further teaches: wherein the toxicity of a network neighborhood is determined based at least in part on a number of seed domains in relation to a total number of domains within a graph for the network neighborhood ([0020] The security system 160 differentiates malicious domains from benign domains with much less available information than content-based approaches. A key observation is that while the toxicity, (e.g., the ratio of malicious domains to all domains)).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Nabeel-Chen-Lem-Crabtree as applied above to claim 15, further in view of Weber et al (US20190238576A1, hereinafter, “Weber”).
Regarding claim 16, Nabeel-Chen-Lem-Crabtree combination teaches the system of claim 15,
The combination of Nabeel-Chen-Lem-Crabtree does not specifically teach the following; however, in the same field of endeavor, Weber teaches:
wherein determining the set of domains expected to be malicious from the set of toxic network neighborhoods comprises: identifying domains within a set of clusters associated with the set of toxic network neighborhoods (Weber, discloses techniques for clustering algorithm to identify additional malicious domains based on known malicious domains, see [Abstract]. And [0026] Once identified, domain identifier system 101 may send identified malicious domains 122 to domain filter 102. Domain filter 102 may then operate on network traffic on data path 131 to compare domains in the network traffic to those in identified malicious domains 122. If there is a match, domain filter 102 may block the network traffic including the matched domain, may notify the sender of that network traffic of the malicious domain, or may perform some other function. Domain identifier system 101 may also or instead provide identified malicious domains 122 to a user and may indicate the groupings of identified malicious domains 122 or may perform some other function with respect to identified malicious domains 122).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Weber in the detecting malicious domains using graph representation learning of Nabeel-Chen-Lem-Crabtree by clustering algorithm with unsupervised clustering. This would have been obvious because the person having ordinary skill in the art would have been motivated to identify additional malicious domains based on known malicious domains (Weber, [Abstract]).
Citation of References
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited but not relied upon in this Office action:
Nabeel et al (US20200382533A1) discloses a method and system that exploit information and traces contained in DNS data to determine the maliciousness of a domain based on the relationships it has with other domains.
Meshi et al (US10574681B2) discloses a method of detecting known and unknown malicious domains.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE whose telephone number is (571)272-1975. The examiner can normally be reached on M-F: 8:30AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay can be reached on (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL M LEE/Primary Examiner, Art Unit 2436