DETAILED ACTION
Authorization for Internet Communications
The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03):
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Please note that the above statement can only be submitted via Central Fax (not the examiner's fax), regular postal mail, or EFS-Web using form PTO/SB/439.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/23/2024 is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
Page 1, paragraph 0001: the status of the application should be updated.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 - 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Regarding claims 1 - 20, the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 is directed to an abstract idea because the claim merely collects data (domain names), analyzes the data (using a known ML algorithm), and displays results (identifying malicious domains). These are abstract information-processing steps performed on a generic computer. See Classen (data collection and correlation) and Electric Power Group (collecting, analyzing, and displaying network data).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, i.e., “equipment comprising a processor” (see claim 1), a processor and memory (see claim 13), and a “pre-trained convolutional neural network,” are simply generic recitations of a computer. The claims amount to no more than performing an abstract idea using a computer. Taking the elements both individually and as a combination, the computer components in the claims perform purely generic computer functions. Thus, the claims as a whole do not amount to significantly more than the abstract idea itself. Accordingly, the above claims are ineligible.
Claims 13 and 18 are device and product claims substantially similar to abstract method claim 1 and are rejected for the same reasons.
Claims 2 – 12, 14 – 17 and 19 - 20 do not include elements that amount to significantly more than the abstract idea because all of the elements in those claims merely add extra-solution activity to the abstract idea and/or all of the additional elements are well-understood, routine, and conventional in the art.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 4 – 13 and 16 - 18 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1 - 9 of U.S. Patent No. 12,095,813 B2 in view of the prior art of record, Woodbridge et al. (US 2019/0019058 A1) (hereinafter “Woodbridge”). Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are substantially covered by the U.S. Patent in view of Woodbridge, as shown below. Please see the following mapping table:
Instant Application No. 18/813,106
US Patent 12,095,813 B2 in view of Woodbridge
1. A method, comprising: generating, by equipment comprising a processor, a domain name image based on a domain name; generating, by the equipment, a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating, by the equipment, an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing, by the equipment, the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; generating, by the equipment, a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
1. US Patent 12,095,813 B2 discloses a method, comprising: generating, by equipment comprising a processor, a domain name image based on a domain name; generating, by the equipment, a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating, by the equipment, an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing, by the equipment, the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; identifying, by the equipment, the domain name for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; and in response to determining, by the equipment, that the domain name comprises a malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
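For illustration, the pipeline recited in claim 1 (rendering a domain name to an image, extracting a feature vector, running a nearest-neighbor search against previously observed names, applying a similarity threshold, and checking a blacklist) can be sketched as below. The rendering and feature functions are toy stand-ins for the claimed image generation and pre-trained CNN, the threshold value is arbitrary, and all domain names are hypothetical:

```python
import math

def render_image(domain):
    # Stand-in for the claimed "domain name image": a fixed-length row of
    # character code points (a real system would rasterize the glyphs).
    return [ord(c) for c in domain.ljust(32)[:32]]

def extract_features(image):
    # Stand-in for the pre-trained convolutional neural network: an
    # L2-normalized vector derived from the rendered image (assumption).
    norm = math.sqrt(sum(v * v for v in image)) or 1.0
    return [v / norm for v in image]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def flag_similar(domain, index, threshold=0.2):
    # Brute-force stand-in for the approximate nearest neighbor search:
    # find the closest previously indexed vector, then apply the
    # similarity threshold recited in claim 1 (value chosen arbitrarily).
    vec = extract_features(render_image(domain))
    name, prev = min(index, key=lambda entry: euclidean(vec, entry[1]))
    return name if euclidean(vec, prev) <= threshold else None

# Index previously observed domains, then screen a lookalike candidate
# against the index and against a blacklist of known malicious names.
index = [(d, extract_features(render_image(d)))
         for d in ["example.com", "uspto.gov"]]
blacklist = {"examp1e.com"}            # hypothetical blacklist entry
candidate = "examp1e.com"
match = flag_similar(candidate, index)
is_malicious = match is not None and candidate in blacklist
```

In this sketch the lookalike "examp1e.com" lands nearest to "example.com", passes the similarity threshold, and is then confirmed malicious via the blacklist lookup, mirroring the claimed sequence of steps.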
4. The method of claim 1, comprising in response to identifying the first malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the first malicious domain name.
1. US Patent 12,095,813 B2 discloses a method, comprising: generating, by equipment comprising a processor, a domain name image based on a domain name; generating, by the equipment, a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating, by the equipment, an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing, by the equipment, the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; identifying, by the equipment, the domain name for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; and in response to determining, by the equipment, that the domain name comprises a malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
5. The method of claim 1, wherein the blacklist includes the first malicious domain name.
1. US Patent 12,095,813 B2 discloses a method, comprising: generating, by equipment comprising a processor, a domain name image based on a domain name; generating, by the equipment, a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating, by the equipment, an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing, by the equipment, the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; identifying, by the equipment, the domain name for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; and in response to determining, by the equipment, that the domain name comprises a malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
6. The method of claim 1, further comprising: generating, by the equipment, the previous domain name image based on a previous domain name; generating, by the equipment, the previous feature vector by applying the pre-trained convolutional neural network to the previous domain name image; and indexing, by the equipment, the previous feature vector in a similarity search data store for use in connection with the approximate nearest neighbor search.
2. The method of claim 1, further comprising: generating, by the equipment, the previous domain name image based on a previous domain name; generating, by the equipment, the previous feature vector by applying the pre-trained convolutional neural network to the previous domain name image; and indexing, by the equipment, the previous feature vector in a similarity search data store for use in connection with the approximate nearest neighbor search.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
7. The method of claim 1, wherein comparing the domain name image with the previous domain name image comprises using a siamese neural network to compare the domain name image with the previous domain name image.
3. The method of claim 1, wherein comparing the domain name image with the previous domain name image comprises using a siamese neural network to compare the domain name image with the previous domain name image.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
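By way of illustration, the siamese comparison recited in claim 7 applies one shared embedding function to both inputs and then measures a distance between the two embeddings. The character-bigram embedding below is a toy stand-in for the learned network (an assumption, not the implementation of either reference):

```python
import math

def shared_embed(domain):
    # Shared "tower": both inputs pass through the same function, which is
    # the defining property of a siamese arrangement. Character-bigram
    # counts stand in for the learned embedding (assumption).
    counts = {}
    for a, b in zip(domain, domain[1:]):
        counts[a + b] = counts.get(a + b, 0) + 1
    return counts

def siamese_distance(d1, d2):
    # Distance between the two shared embeddings; a trained siamese
    # network would learn its metric, here plain Euclidean over bigrams.
    e1, e2 = shared_embed(d1), shared_embed(d2)
    keys = set(e1) | set(e2)
    return math.sqrt(sum((e1.get(k, 0) - e2.get(k, 0)) ** 2 for k in keys))

# A visually similar pair scores closer than an unrelated pair.
close = siamese_distance("example.com", "examp1e.com")
far = siamese_distance("example.com", "uspto.gov")
```

Here the lookalike pair differs in only two bigrams per side, so it scores well under the unrelated pair, which shares no bigrams at all.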
8. The method of claim 1, wherein: the approximate nearest neighbor search identifies a group of nearest neighbors associated with the feature vector, the nearest neighbors in the group of nearest neighbors comprise previous feature vectors associated with previous domain name images generated prior to the domain name image, and the method further comprises: based on the comparing, determining, by the equipment, whether the domain name image satisfies a similarity threshold with respect to any of the previous domain name images.
4. The method of claim 1, wherein: the approximate nearest neighbor search identifies a group of nearest neighbors associated with the feature vector, the nearest neighbors in the group of nearest neighbors comprise previous feature vectors associated with previous domain name images generated prior to the domain name image, and the method further comprises: based on the comparing, determining, by the equipment, whether the domain name image satisfies a similarity threshold with respect to any of the previous domain name images.
9. The method of claim 1, wherein identifying the first malicious domain name comprises identifying the first malicious domain name by processing the first group of domain names observed in a time period, wherein processing the first group of domain names observed in the time period comprises removing, from the first group of domain names observed in the time period, a third group of domain names observed prior to the time period.
5. The method of claim 1, further comprising identifying the domain name by processing a list of domain names observed in a time period, wherein processing the list of domain names observed in the time period comprises removing, from the list of domain names observed in the time period, domain names observed prior to the time period.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
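For illustration, the time-period processing recited in claim 9 (removing, from the domains observed in a period, those already observed before the period) reduces to an order-preserving set difference; the function and domain names below are hypothetical:

```python
def newly_observed(observed_in_period, observed_before):
    # Keep only domains first seen during the period: drop any domain
    # that already appeared before the window began.
    seen_earlier = set(observed_before)
    return [d for d in observed_in_period if d not in seen_earlier]

# Domains seen in the current window vs. those seen before it.
period = ["examp1e.com", "example.com", "new-site.net"]
earlier = {"example.com"}
fresh = newly_observed(period, earlier)
```

This isolates newly observed domain names, which is the set the claimed method goes on to screen against the blacklist.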
10. The method of claim 9, wherein the first group of domain names observed in the time period comprises at least a portion of all domain names observed in domain name system queries processed via a domain name service provider network in the time period.
6. The method of claim 5, wherein the list of domain names observed in the time period comprises at least a portion of all domain names observed in domain name system queries processed via a domain name service provider network in the time period.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038); i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027). Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains; i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2). Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Strings determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2); i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (see Woodbridge, page 1, para 0006).
11. The method of claim 1, comprising using, by the equipment, a font fallback process to select a set of fonts for the domain name prior to generating the domain name image.
7. The method of claim 1, further comprising using, by the equipment, a font fallback process to select a set of fonts for the domain name prior to generating the domain name image.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
12. The method of claim 1, further comprising adjusting, by the equipment, the similarity threshold resulting in an adjusted similarity threshold for use in subsequent comparisons of feature vectors with the previous feature vector.
8. The method of claim 1, further comprising adjusting, by the equipment, the similarity threshold resulting in an adjusted similarity threshold for use in subsequent comparisons of feature vectors with the previous feature vector.
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
13. A device, comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, comprising: generating a domain name image based on a domain name; generating a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; generating a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; obtaining a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying a first malicious domain name from the first group of domain names based on the blacklist.
9. Computing equipment, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: generating first domain name images based on first domain names; generating first feature vectors by applying a pre-trained convolutional neural network to the first domain name images; indexing the first feature vectors in a similarity search data store for use in connection with approximate nearest neighbor searches, wherein an approximate nearest neighbor search of the approximate nearest neighbor searches identifies nearest neighbors of a second feature vector, wherein the nearest neighbors comprise a group of the first feature vectors for comparison with the second feature vector in order to determine the second feature vector satisfies a similarity threshold with respect to any feature vector in the group of the first feature vectors resulting in a determination; based on the determination, identifying a domain name for further review; and in response to determining that the domain name comprises a malicious domain name, providing a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
16. The device of claim 13, in response to identifying the first malicious domain name, providing a notification to each of a group of communication devices associated with a group of domain name service providers indicating the first malicious domain name.
9. Computing equipment, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: generating first domain name images based on first domain names; generating first feature vectors by applying a pre-trained convolutional neural network to the first domain name images; indexing the first feature vectors in a similarity search data store for use in connection with approximate nearest neighbor searches, wherein an approximate nearest neighbor search of the approximate nearest neighbor searches identifies nearest neighbors of a second feature vector, wherein the nearest neighbors comprise a group of the first feature vectors for comparison with the second feature vector in order to determine the second feature vector satisfies a similarity threshold with respect to any feature vector in the group of the first feature vectors resulting in a determination; based on the determination, identifying a domain name for further review; and in response to determining that the domain name comprises a malicious domain name, providing a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
17. The device of claim 13, wherein the blacklist includes the first malicious domain name.
9. Computing equipment, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: generating first domain name images based on first domain names; generating first feature vectors by applying a pre-trained convolutional neural network to the first domain name images; indexing the first feature vectors in a similarity search data store for use in connection with approximate nearest neighbor searches, wherein an approximate nearest neighbor search of the approximate nearest neighbor searches identifies nearest neighbors of a second feature vector, wherein the nearest neighbors comprise a group of the first feature vectors for comparison with the second feature vector in order to determine the second feature vector satisfies a similarity threshold with respect to any feature vector in the group of the first feature vectors resulting in a determination; based on the determination, identifying a domain name for further review; and in response to determining that the domain name comprises a malicious domain name, providing a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
18. A non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, comprising: generating a domain name image based on a domain name; generating a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image; facilitating an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector, wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image; comparing the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image; generating a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image; obtaining a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying a first malicious domain name from the first group of domain names based on the blacklist.
9. Computing equipment, comprising: a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: generating first domain name images based on first domain names; generating first feature vectors by applying a pre-trained convolutional neural network to the first domain name images; indexing the first feature vectors in a similarity search data store for use in connection with approximate nearest neighbor searches, wherein an approximate nearest neighbor search of the approximate nearest neighbor searches identifies nearest neighbors of a second feature vector, wherein the nearest neighbors comprise a group of the first feature vectors for comparison with the second feature vector in order to determine the second feature vector satisfies a similarity threshold with respect to any feature vector in the group of the first feature vectors resulting in a determination; based on the determination, identifying a domain name for further review; and in response to determining that the domain name comprises a malicious domain name, providing a notification to each of a group of communication devices associated with a group of domain name service providers indicating the malicious domain name
US Patent 12,095,813 B2 does not disclose:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names; and identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist.
However, Woodbridge discloses:
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of US Patent 12,095,813 B2 by adapting the teachings of Woodbridge to securely identify potential spoof attacks based on the visual similarity of a received character string with a set of known, valid strings (See Woodbridge; page 1, para 0006).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8 and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the prior art of record, Woodbridge et al. (US 2019/0019058 A1) (hereinafter “Woodbridge”) (submitted by the Applicant via the IDS filed 08/23/2024).
Woodbridge discloses:
Regarding claim 1, a method, comprising:
generating, by equipment comprising a processor [i.e., computing device 300 comprises processor 610 (see figure 6), (page 1, para 002)], a domain name image based on a domain name [i.e., new string 280 i.e., “www.endgame.com”, www.enclgame.com (see figures 1 and 10), “google.com”, “google.com”, “cnn.com” (page 2, para 0022) is transformed into image 285 i.e., an image of fixed size (e.g., 150 pixels across x 12 pixels high) (page 2, para 0023) using data-image transformation engine 210 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., URL as the new string 280 (page 2, para 0028), (page 3, para 0038)];
generating, by the equipment, a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image [i.e., image 285 is converted to vector 290 using Siamese convolutional neural network 220 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., the network is pre-trained on similar and dissimilar pairs (page 2, para 0022 – 0024), (see figure 3)];
facilitating, by the equipment, an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector [i.e., indexing engine 230 uses a geometric index called (randomized) KD-Trees…Several random trees can be built…in concert to improve search quality…using ten randomized KD-Trees…in this embodiment, 128 checks on each query are performed (page 3, para 0035 - 0037) Note: the randomized KD-Trees are a classical approximate nearest neighbor (ANN) structure i.e., Euclidean distance (page 2, para 0027), (see figure 10)], wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image [i.e., index 275 is searched for similar vectors i.e., closest vector i.e., vector 270 (page 2, para 0027 and 0024), (page 3, para 0035), (see reference 206 of figure 2 and figure 5), (page 3, para 0036)];
comparing, by the equipment, the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295 (page 2, para 0027), (see reference 206 of figure 2 and figures 5 and 10)];
generating, by the equipment, a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027), (see reference 206 of figure 2 and figure 5) Note: the reporting or alerting of a potential spoof attack triggers further review];
obtaining, by the equipment, a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potential malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings, i.e., potential spoof attacks. Those flagged names can form or populate a blacklist, i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. The strings determined to be spoofs correspond to malicious domain names, i.e., the entries that would populate a blacklist]; and
identifying, by the equipment, a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
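The limitation mapping for claim 1 above traces a pipeline of image generation, CNN embedding, nearest-neighbor comparison against a similarity threshold, and blacklist matching. The following sketch is illustrative only; the toy embedding stands in for the render-to-image step and Siamese convolutional neural network of the reference, and all names and data are hypothetical:

```python
import math

def embed(domain: str) -> tuple:
    """Toy stand-in for render-to-image plus CNN embedding (illustration only)."""
    return (sum(map(ord, domain)) % 97 / 97.0, len(domain) / 32.0)

def flag_for_review(domains, known_vectors, threshold):
    """Collect domains whose embedding falls within `threshold` of any known vector."""
    return [d for d in domains
            if any(math.dist(embed(d), k) < threshold for k in known_vectors)]

def identify_malicious(candidates, blacklist):
    """Identify the first flagged domain that appears on the blacklist."""
    for d in candidates:
        if d in blacklist:
            return d
    return None

known = [embed("spoof.example")]  # previously indexed feature vector
flagged = flag_for_review(["spoof.example", "benign.example"], known, 0.01)
print(identify_malicious(flagged, blacklist={"spoof.example"}))  # spoof.example
```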
Regarding claim 2, the method of claim 1, comprising obtaining, by the equipment, a whitelist [i.e., list (page 2, para 0025)], wherein the whitelist includes a second group of domain names [i.e., the list includes valid strings 260 comprise process names and domain names that are of interest for monitoring purposes (page 2, para 0025) i.e., the valid strings 260…are indexed using indexing engine 230 (page 2, para 0026)].
Regarding claim 3, the method of claim 2, comprising removing, by the equipment, the second group of domain names from the first group of domain names [i.e., comparing each new string to whitelisted names and discarding those that don’t meet the similarity condition (page 2, para 0026)].
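The whitelist handling mapped to claims 2 and 3 (obtaining a whitelist and removing its domain names from the flagged group) reduces to set subtraction; an illustrative sketch with hypothetical names:

```python
def remove_whitelisted(flagged_domains, whitelist):
    """Remove the whitelist's (known-good) domain names from the flagged group."""
    allowed = set(whitelist)
    return [d for d in flagged_domains if d not in allowed]

print(remove_whitelisted(["good.example", "spoof.example"], ["good.example"]))
# ['spoof.example']
```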
Regarding claim 4, the method of claim 1, comprising in response to identifying the first malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the first malicious domain name [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Regarding claim 5, the method of claim 1, wherein the blacklist includes the first malicious domain name [i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file name in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2)].
Regarding claim 6, the method of claim 1, further comprising:
generating, by the equipment, the previous domain name image based on a previous domain name [i.e., transform training set 250 i.e., “google.com”, “gooogle.com” into training image 255 using data-image transformation engine 210 (page 2, para 0022 - 0023), (see reference 202 of figure 2 and figure 3) i.e., (endgame.com and enclgame.com) (page 3, para 0034)];
generating, by the equipment, the previous feature vector by applying the pre-trained convolutional neural network to the previous domain name image [i.e., input training images 255 into Siamese convolutional neural network 230, which learns to represent each image as a vector of floats (page 2, para 0024 and 0025), (page 3, para 0034), (see reference 203 of figure 2 and figure 3) i.e., feature vector 270 sub i (page 2, para 0024), (see figure 9)]; and
indexing, by the equipment, the previous feature vector in a similarity search data store for use in connection with the approximate nearest neighbor search [i.e., generate reference index 275 for vector 270 sub i using index engine 230 (page 2, para 0027), (see reference 205 of figure 2 and figure 3), (page 3, para 0035)], [i.e., index 275 is searched for similar vectors i.e., closest vector i.e., vector 270 (page 2, para 0027 and 0024), (page 3, para 0035), (see reference 206 of figure 2 and figure 5), (page 3, para 0036)].
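The indexing and search mapped to claim 6 can be sketched as a minimal similarity index; Woodbridge's randomized KD-Trees perform approximate search, whereas this illustrative stand-in searches exhaustively for clarity (all names and data are hypothetical):

```python
import math

class SimilarityIndex:
    """Minimal stand-in for reference index 275. The reference builds randomized
    KD-Trees for approximate search; this sketch searches exhaustively."""
    def __init__(self):
        self._labels = []    # the domain name each vector came from
        self._vectors = []   # indexed feature vectors

    def add(self, label, vector):
        """Index a previous feature vector under its domain name."""
        self._labels.append(label)
        self._vectors.append(vector)

    def nearest(self, vector):
        """Return (label, Euclidean distance) of the closest indexed vector."""
        distances = [math.dist(vector, v) for v in self._vectors]
        i = min(range(len(distances)), key=distances.__getitem__)
        return self._labels[i], distances[i]

index = SimilarityIndex()
index.add("endgame.com", (0.0, 0.0))
index.add("cnn.com", (1.0, 1.0))
print(index.nearest((0.1, 0.0))[0])  # endgame.com
```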
Regarding claim 7, the method of claim 1, wherein comparing the domain name image with the previous domain name image comprises using a siamese neural network to compare the domain name image with the previous domain name image [i.e., images are converted into vectors 270 sub i using Siamese convolutional neural network 230 (page 2, para 0024 and 0025), (page 3, para 0034), (see reference 203 of figure 2 and figure 3) i.e., feature vector 270 sub i (page 2, para 0024), (see figure 9)].
Regarding claim 8, the method of claim 1, wherein:
the approximate nearest neighbor search identifies a group of nearest neighbors associated with the feature vector, the nearest neighbors in the group of nearest neighbors comprise previous feature vectors associated with previous domain name images generated prior to the domain name image [i.e., index 275 is searched for similar vectors i.e., closest vector i.e., vector 270 (page 2, para 0027 and 0024), (page 3, para 0035), (see reference 206 of figure 2 and figure 5), (page 3, para 0036)], and
the method further comprises:
based on the result of the comparing, determining, by the equipment, whether the domain name image satisfies a similarity threshold with respect to any of the previous domain name images [i.e., if the closest vector is less than predetermined threshold 295 (page 2, para 0027), (see reference 206 of figure 2 and figures 5 and 10)].
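The threshold test mapped above (whether the closest vector is less than predetermined threshold 295) amounts to a Euclidean-distance comparison. A minimal sketch, with hypothetical function names:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def satisfies_threshold(query_vec, nearest_vec, threshold):
    """True when the query's distance to its nearest indexed neighbor is
    below the predetermined threshold (i.e., the images are 'similar')."""
    return euclidean(query_vec, nearest_vec) < threshold
```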
Regarding claim 11, the method of claim 1, further comprising using, by the equipment, a font fallback process to select a set of fonts [i.e., multi-channel image using different fonts case (page 2, para 0023)] for the domain name prior to generating the domain name image [i.e., using a common font (e.g., Arial TrueType font) (page 2, para 0023)].
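A font fallback process of the kind recited in claim 11 can be sketched as follows. The font names and coverage sets are hypothetical; the cited reference itself only describes rendering with a common font or multi-channel images using different fonts.

```python
def pick_font(domain, fonts):
    """Toy font fallback: return the first font whose coverage set contains
    every character of the domain; otherwise fall back to the last font.
    `fonts` maps a font name to the set of characters it can render."""
    needed = set(domain)
    chosen = None
    for name, covered in fonts.items():
        chosen = name
        if needed <= covered:
            return name
    return chosen  # last font as the final fallback
```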
Regarding claim 12, the method of claim 1, further comprising adjusting, by the equipment, the similarity threshold resulting in an adjusted similarity threshold for use in subsequent comparisons of feature vectors with the previous feature vector [i.e., in step 206, predetermined threshold 295 optionally can be selected by a user or administrator…a lower predetermined threshold 295…a higher predetermined threshold (page 2, para 0029)].
Regarding claim 13, a device [i.e., computing device 300 (see figure 6), (page 1, para 0020)], comprising:
a processing system including a processor [i.e., computing device 300 comprises processor 610 (see figure 6), (page 1, para 0020)]; and
a memory that stores executable instructions [i.e., computing device 300 comprises memory 620 (see figure 6), (page 1, para 0020)] that, when executed by the processing system, facilitate performance of operations [i.e., (see figure 6)], comprising:
generating a domain name image based on a domain name [i.e., new string 280 i.e., “www.endgame.com”, “www.enclgame.com” (see figures 1 and 10), “google.com”, “gooogle.com”, “cnn.com” (page 2, para 0022) is transformed into image 285 i.e., image of fixed size (e.g., 150 pixels across x 12 pixels high) (page 2, para 0023) using data-image transformation engine 210 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., URL as the new string 280 (page 2, para 0028), (page 3, para 0038)];
generating a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image [i.e., image 285 is converted to vector 290 using Siamese convolutional neural network 220 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., the network is pre-trained on similar and dissimilar pairs (page 2, para 0022 – 0024), (see figure 3)];
facilitating an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector [i.e., indexing engine 230 uses a geometric index called (randomized) KD-Trees…Several random trees can be built…in concert to improve search quality…using ten randomized KD-Trees…in this embodiment, 128 checks on each query are performed (page 3, para 0035 - 0037) Note: the randomized KD-Trees are a classical approximate nearest neighbor (ANN) structure i.e., Euclidean distance (page 2, para 0027), (see figure 10)], wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image [i.e., index 275 is searched for similar vectors i.e., closest vector i.e., vector 270 (page 2, para 0027 and 0024), (page 3, para 0035), (see reference 206 of figure 2 and figure 5), (page 3, para 0036)];
comparing the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295 (page 2, para 0027), (see reference 206 of figure 2 and figures 5 and 10)];
generating a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027), (see reference 206 of figure 2 and figure 5) Note: the reporting or alerting of a potential spoof attack triggers further review];
obtaining a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potentially malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings i.e., a potential spoof attack. Those flagged names can form or populate a blacklist i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Those determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
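The operations mapped above for claim 13 — nearest-neighbor search, threshold test, review group, and blacklist lookup — can be combined into a short sketch. The brute-force search below is an exact stand-in for the reference's randomized KD-Trees (it answers the same query, just without the approximate index), and all function names are hypothetical:

```python
import math

def nearest(index, query):
    """Return (name, Euclidean distance) of the closest indexed feature
    vector; a brute-force stand-in for the randomized KD-Tree search."""
    name = min(index, key=lambda n: math.dist(index[n], query))
    return name, math.dist(index[name], query)

def flag_for_review(index, candidates, threshold):
    """Keep candidate domains whose vectors fall within the similarity
    threshold of some indexed reference vector (the 'first group')."""
    return [d for d, v in candidates if nearest(index, v)[1] < threshold]

def first_malicious(flagged, blacklist):
    """Identify the first flagged domain that appears on the blacklist."""
    bl = set(blacklist)
    return next((d for d in flagged if d in bl), None)
```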
Regarding claim 14, the device of claim 13, wherein the operations comprise obtaining a whitelist [i.e., list (page 2, para 0025)], wherein the whitelist includes a second group of domain names [i.e., the list includes valid strings 260, which comprise process names and domain names that are of interest for monitoring purposes (page 2, para 0025) i.e., the valid strings 260…are indexed using indexing engine 230 (page 2, para 0026)].
Regarding claim 15, the device of claim 14, wherein the operations comprise removing the second group of domain names from the first group of domain names [i.e., comparing each new string to whitelisted names and discarding those that don’t meet the similarity condition (page 2, para 0026)].
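The whitelist handling of claims 14-15 reduces to a set difference over the flagged group; a minimal sketch with hypothetical names:

```python
def remove_whitelisted(first_group, whitelist):
    """Remove the whitelisted (known-good) second group of domain names
    from the first group flagged for further review, preserving order."""
    allow = set(whitelist)
    return [d for d in first_group if d not in allow]
```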
Regarding claim 16, the device of claim 13, wherein the operations comprise, in response to identifying the first malicious domain name, providing, by the equipment, a notification to each of a group of communication devices associated with a group of domain name service providers indicating the first malicious domain name [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Regarding claim 17, the device of claim 13, wherein the blacklist includes the first malicious domain name [i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2)].
Regarding claim 18, a non-transitory machine-readable medium, comprising executable instructions [i.e., non-volatile storage 640 (see figure 6) i.e., memory 620 (see figure 6)] that, when executed by a processing system including a processor [i.e., processor 610 (see figure 6)], facilitate performance of operations, comprising:
generating a domain name image based on a domain name [i.e., new string 280 i.e., “www.endgame.com”, “www.enclgame.com” (see figures 1 and 10), “google.com”, “gooogle.com”, “cnn.com” (page 2, para 0022) is transformed into image 285 i.e., image of fixed size (e.g., 150 pixels across x 12 pixels high) (page 2, para 0023) using data-image transformation engine 210 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., URL as the new string 280 (page 2, para 0028), (page 3, para 0038)];
generating a feature vector, wherein generating the feature vector comprises applying a pre-trained convolutional neural network to the domain name image [i.e., image 285 is converted to vector 290 using Siamese convolutional neural network 220 (page 2, para 0027), (see reference 206 of figure 2 and figure 5) i.e., the network is pre-trained on similar and dissimilar pairs (page 2, para 0022 – 0024), (see figure 3)];
facilitating an approximate nearest neighbor search to identify a nearest neighbor associated with the feature vector [i.e., indexing engine 230 uses a geometric index called (randomized) KD-Trees…Several random trees can be built…in concert to improve search quality…using ten randomized KD-Trees…in this embodiment, 128 checks on each query are performed (page 3, para 0035 - 0037) Note: the randomized KD-Trees are a classical approximate nearest neighbor (ANN) structure i.e., Euclidean distance (page 2, para 0027), (see figure 10)], wherein the nearest neighbor comprises a previous feature vector associated with a previous domain name image generated prior to the domain name image [i.e., index 275 is searched for similar vectors i.e., closest vector i.e., vector 270 (page 2, para 0027 and 0024), (page 3, para 0035), (see reference 206 of figure 2 and figure 5), (page 3, para 0036)];
comparing the feature vector with the previous feature vector in order to determine whether the domain name image satisfies a similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295 (page 2, para 0027), (see reference 206 of figure 2 and figures 5 and 10)];
generating a first group of domain names for further review in response to determining that the domain name image satisfies the similarity threshold with respect to the previous domain name image [i.e., if the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027), (see reference 206 of figure 2 and figure 5) Note: the reporting or alerting of a potential spoof attack triggers further review];
obtaining a blacklist, wherein the blacklist includes a group of malicious domain names [i.e., the present invention…to identify potentially malicious URLs (page 1, para 0001), (page 3, para 0038) i.e., index 275 is searched for similar vectors, and strings are reported for which the Euclidean distance between the vector for the new string 280 and the string stored in reference index 275 is below a predefined threshold. If the closest vector is less than predetermined threshold 295, alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) Note: this paragraph shows that the system identifies and flags suspicious/malicious strings i.e., a potential spoof attack. Those flagged names can form or populate a blacklist i.e., a list of confirmed or suspected malicious domains i.e., in step 206, new string 280 can be received from a variety of sources. For example, all potential URLs and file names in all emails received by an email server can be sent…so that a determination can be made as to whether any of them are likely spoofs (page 2, para 0028), (see figure 2) Note: this paragraph describes analyzing incoming URLs and file names to determine whether they are likely spoofs. Those determined to be spoofs correspond to malicious domain names – the entries that would populate a blacklist]; and
identifying a first malicious domain name from the first group of domain names based on the blacklist [i.e., alert 296 is generated identifying new string 280 as a potential spoof attack (page 2, para 0027) and (see figure 2) i.e., identify potentially malicious URLs…before a user inadvertently enables the malicious attack (page 1, para 0007)].
Regarding claim 19, the non-transitory machine-readable medium of claim 18, wherein the operations comprise obtaining a whitelist [i.e., list (page 2, para 0025)], wherein the whitelist includes a second group of domain names [i.e., the list includes valid strings 260, which comprise process names and domain names that are of interest for monitoring purposes (page 2, para 0025) i.e., the valid strings 260…are indexed using indexing engine 230 (page 2, para 0026)].
Regarding claim 20, the non-transitory machine-readable medium of claim 19, wherein the operations comprise removing the second group of domain names from the first group of domain names [i.e., comparing each new string to whitelisted names and discarding those that don’t meet the similarity condition (page 2, para 0026)].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 9 - 10 are rejected under 35 U.S.C. 103 as being unpatentable over Woodbridge in view of Thomas et al., (US 2015/0047033 A1) (hereinafter “Thomas”).
Regarding claim 9, Woodbridge discloses; the method of claim 1 [i.e., see claim 1 above].
Woodbridge does not disclose;
wherein identifying the first malicious domain name comprises identifying the first malicious domain name by processing the first group of domain names observed in a time period, wherein processing the first group of domain names observed in the time period comprises removing, from the first group of domain names observed in the time period, a third group of domain names observed prior to the time period.
However, Thomas discloses;
wherein identifying the first malicious domain name comprises identifying the first malicious domain name by processing the first group of domain names observed in a time period [i.e., obtain a plurality of name-resolution requests received over a period of time from a plurality of DNS name servers (para 0002); generate, using a sliding time window with a fixed duration, sets of lists each comprising domains requested within the fixed duration (para 0020)], wherein processing the first group of domain names observed in the time period comprises removing, from the first group of domain names observed in the time period, a third group of domain names observed prior to the time period [i.e., sliding time window lists “within the fixed duration” (para 0002) i.e., lists are formed per sliding window from requests within that window (para 0022 – 0024)].
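Thomas's sliding-time-window processing, as mapped above, can be sketched as follows. The timestamps, the window parameters, and the exclusion of previously observed domains are illustrative assumptions layered on the cited paragraphs:

```python
def new_domains_in_window(requests, start, duration):
    """requests: iterable of (timestamp, domain) name-resolution requests.
    Return, sorted, the domains observed within [start, start+duration)
    that were NOT observed before the window began (i.e., with the claim's
    'third group' observed prior to the time period removed)."""
    seen_before = {d for t, d in requests if t < start}
    in_window = {d for t, d in requests if start <= t < start + duration}
    return sorted(in_window - seen_before)
```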
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Woodbridge by adapting the teachings of Thomas to detect suspicious software (See Thomas; page 1, para 0001).
Regarding claim 10, Woodbridge discloses; the method of claim 9 [i.e., (see claim 9 above)].
Woodbridge does not disclose;
wherein the first group of domain names observed in the time period comprises at least a portion of all domain names observed in domain name system queries processed via a domain name service provider network in the time period.
However, Thomas discloses;
wherein the first group of domain names observed in the time period comprises at least a portion of all domain names observed in domain name system queries processed via a domain name service provider network in the time period [i.e., requests from a plurality of name servers (para 0002) i.e., each name server can include a recursive name server (para 0003 – 0005) i.e., processors are coupled via network to DNS servers, “DNS servers…can be…recursive name servers and convey for each request, the time, server ID, and domain (para 0015 – 0016) i.e., obtains a plurality of name resolution requests from DNS servers 112, 114 and 116 (para 0026)].
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of Woodbridge by adapting the teachings of Thomas to detect suspicious software (See Thomas; page 1, para 0001).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED A RONI whose telephone number is (571)270-7806. The examiner can normally be reached M-F 9:00-5:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L Nickerson can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYED A RONI/Primary Examiner, Art Unit 2432