Prosecution Insights
Last updated: April 19, 2026
Application No. 17/820,388

SUSPICIOUS DOMAIN DETECTION FOR THREAT INTELLIGENCE

Non-Final Office Action under §103
Filed: Aug 17, 2022
Examiner: DO, KHANG D
Art Unit: 2492
Tech Center: 2400 — Computer Networks
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (268 granted / 334 resolved), above average at +22.2% vs Tech Center average
Interview Lift: +44.9% among resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution
Currently Pending: 11 applications
Career History: 345 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 18.7% (-21.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 334 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This non-final action is responsive to the application filed on 08/17/2022. Claims 1-20 are pending, with claims 1, 10 and 19 being independent.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/17/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

For claim 10, "a number of processor units" is a hardware device and is comprised of hardware circuits as defined in paragraph 51 of the examined application. For claim 19, "a computer-readable storage medium" is not to be construed as being transitory signals per se as defined in paragraph 23 of the examined application. For claims 1-20, paragraphs 88, 89, and 103 of the examined application's specification provide a standard for ascertaining the requisite degree for "sufficiently similar".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-12 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tyler et al. (US 2017/0195310, published Jul. 6, 2017), Chien (US 2018/0198796, published Jul. 12, 2018) and Kumar et al. (US 2019/0104154, published Apr. 4, 2019).

As per claim 1, Tyler discloses a computer implemented method for detecting suspicious domains (Tyler Fig. 26), the computer implemented method comprising: determining, by a computer system, a homographic similarity between a target domain and a known domain (Tyler par. 67, checking whether the name of a web page or domain is suspiciously similar to that of a known legitimate site; Tyler par. 98, This approach may also be used for the homograph similarity described above); the homographic similarity being sufficiently similar to be potentially suspicious (Tyler par. 131, When the distance/similarity metric is positive and below a threshold, for example, the requesting site is classified as an unsafe site for login credentials).

Tyler does not explicitly disclose: comparing, by the computer system, first ownership information for the target domain and second ownership information for the known domain to form an ownership comparison in response to the homographic similarity being sufficiently similar to be potentially suspicious; comparing, by the computer system, a set of first landing page images for the target domain and a set of second landing page images for the known domain to form an image comparison in response to a match between the first ownership information for the target domain and the second ownership information for the known domain being absent; and determining, by the computer system, a threat level for the target domain based on the image comparison.

Chien teaches: comparing, by the computer system, first ownership information for the target domain and second ownership information for the known domain to form an ownership comparison (Chien par. 64, anti-phishing module 38a determines whether the owner name and country match the known information for the domain name of the URL. If a match is not found, anti-phishing module then sends an instruction at a communication step 112 for browser 34a to display a warning); a match between the first ownership information for the target domain and the second ownership information for the known domain being absent (Chien par. 64, If a match is not found, anti-phishing module then sends an instruction at a communication step 112 for browser 34a to display a warning).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the method of Tyler with the teaching of Chien in order to incorporate ownership comparison. One of ordinary skill in the art would have been motivated because it offers the advantage of providing an additional layer of identifying and prohibiting a questionable network communication.

Kumar teaches: comparing, by the computer system, a set of first landing page images for the target domain and a set of second landing page images for the known domain to form an image comparison (Kumar Fig. 3B, Perform An Image Comparison Between The URL Screenshot And One Or More Screenshots Of A Webpage Family Having The Highest Confidence at 314; Kumar par. 63, a webpage family may include a plurality of webpages (e.g., Bank of America login webpages) that vary slightly); and determining, by the computer system, a threat level for the target domain based on the image comparison (Kumar Fig. 3B, Determine Whether URL is Phishing URL Or Not at 316-320).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the method of Tyler with the teaching of Kumar in order to incorporate image comparison. One of ordinary skill in the art would have been motivated because it offers the advantage of providing an additional layer of detecting phishing attacks.
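The claim 1 flow that the rejection maps across Tyler (homograph check), Chien (ownership comparison) and Kumar (image comparison) can be sketched as a short decision procedure. This is an illustrative reading of the claim language only: the function names, the threshold value, and the similarity metric are hypothetical stand-ins, not anything disclosed in the application or the cited references.

```python
# Illustrative sketch of the claim 1 flow: homograph similarity (A),
# then ownership comparison (B), then image comparison (C). All names,
# thresholds, and stubs here are hypothetical, not from the application.
from difflib import SequenceMatcher


def homograph_similarity(target: str, known: str) -> float:
    # Stand-in metric; the application describes canonicalized display
    # forms, which this simple character ratio does not capture.
    return SequenceMatcher(None, target, known).ratio()


def classify_domain(target: str, known: str,
                    target_owner: str, known_owner: str,
                    images_similar: bool,
                    threshold: float = 0.8) -> str:
    # Step A: homographic similarity must be high enough to be suspicious.
    if homograph_similarity(target, known) < threshold:
        return "not suspicious"
    # Step B: matching ownership clears the domain (cf. claim 2).
    if target_owner == known_owner:
        return "not suspicious"
    # Step C: the image comparison sets the threat level (cf. claims 7-8):
    # confusingly similar pages with different owners indicate a threat.
    return "threat" if images_similar else "suspicious"


print(classify_domain("bnkofolympus.com", "bankofolympus.com",
                      "Attacker LLC", "Olympus Bank", True))
```

A lookalike domain with a different owner and a confusingly similar landing page falls through all three steps and is classified as a threat; an exact owner match short-circuits at step B.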
Tyler as modified teaches A (determining the homographic similarity being sufficiently similar), B (determining the absence of a match in the ownership comparison), and C (performing the image comparison), but does not explicitly disclose performing A, B, then C. However, there are only a finite number of orders in which these steps can be performed: ABC, ACB, BAC, BCA, CAB, CBA. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try these alternatives in an attempt to determine the efficiency of the system.

As per claim 2, Tyler as modified discloses the computer implemented method of claim 1 further comprising: determining, by the computer system, the target domain to be not suspicious in response to the ownership comparison indicating a match between the first ownership information of the target domain and the second ownership information of the known domain (Chien par. 64, anti-phishing module 38a determines whether the owner name and country match the known information for the domain name of the URL. If a match is not found, anti-phishing module then sends an instruction at a communication step 112 for browser 34a to display a warning. [determining the target domain to be not suspicious when there is a match would have been obvious, if not inherent, in order to identify a questionable network communication]). The same rationale as in claim 1 applies.

As per claim 3, Tyler as modified discloses the computer implemented method of claim 1, wherein determining, by the computer system, the homographic similarity between the target domain and the known domain comprises: determining, by the computer system, first canonicalized values for the known domain (Tyler par. 67, checking whether the name of a web page or domain is suspiciously similar to that of a known legitimate site; Tyler par. 98, For name comparison, one or more embodiments may first decode an encoded internationalized ASCII string (like www.xn--bnkofolympus-x9j.com) into the corresponding Unicode characters, and then compare the Unicode string to other names using canonical representations based on display, or based on other similarity scores that take display representations into account); determining, by the computer system, second canonicalized values for the target domain (Tyler par. 67, checking whether the name of a web page or domain is suspiciously similar to that of a known legitimate site; Tyler par. 98, For name comparison, one or more embodiments may first decode an encoded internationalized ASCII string (like www.xn--bnkofolympus-x9j.com) into the corresponding Unicode characters, and then compare the Unicode string to other names using canonical representations based on display, or based on other similarity scores that take display representations into account); and comparing, by the computer system, the first canonicalized values to the second canonicalized values to determine the homographic similarity (Tyler par. 98, For name comparison, one or more embodiments may first decode an encoded internationalized ASCII string (like www.xn--bnkofolympus-x9j.com) into the corresponding Unicode characters, and then compare the Unicode string to other names using canonical representations based on display, or based on other similarity scores that take display representations into account), wherein the homographic similarity is sufficiently similar to be potentially suspicious in response to the first canonicalized values and the second canonicalized values matching within a preselected threshold for the homographic similarity (Tyler par. 131, When the distance/similarity metric is positive and below a threshold, for example, the requesting site is classified as an unsafe site for login credentials).
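The canonicalization Tyler par. 98 describes (decode a punycode string to Unicode, then compare canonical display representations) can be sketched with Python's built-in `idna` codec plus a simplified skeleton step. The skeleton used here (NFKD decomposition, dropping combining marks) is an assumption standing in for Tyler's display-based canonical representations, and the `bücher` example domain is illustrative rather than taken from the references.

```python
# Sketch of the claim 3 homograph check: decode a punycode domain to
# Unicode, reduce both names to a canonical "skeleton", and compare.
# The skeleton step (NFKD + strip accents) is a simplification of the
# display-based canonical representations the reference describes.
import unicodedata


def to_unicode(domain: str) -> str:
    # Decode internationalized (punycode) labels,
    # e.g. xn--bcher-kva -> bücher.
    return domain.encode("ascii").decode("idna")


def skeleton(name: str) -> str:
    # Canonicalize: compatibility-decompose and drop combining marks,
    # so visually similar accented characters collapse to base letters.
    decomposed = unicodedata.normalize("NFKD", name.lower())
    return "".join(c for c in decomposed if not unicodedata.combining(c))


target = to_unicode("www.xn--bcher-kva.com")   # decodes to www.bücher.com
known = "www.bucher.com"
print(target, skeleton(target) == skeleton(known))
```

After canonicalization the two names collapse to the same skeleton, so a threshold comparison on the canonicalized values would flag the pair as potentially suspicious.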
As per claim 7, Tyler as modified discloses the computer implemented method of claim 1, wherein determining, by the computer system, the threat level for the target domain based on the image comparison comprises: determining, by the computer system, the target domain to be a threat in response to the image comparison indicating that content in the set of first landing page images and the set of second landing page images are sufficiently similar to be confusing (Kumar Fig. 3B, Determine URL is Phishing URL at 320) and the known domain and the target domain are not owned by a same owner (Chien par. 64, If a match is not found, anti-phishing module then sends an instruction at a communication step 112 for browser 34a to display a warning). The same rationale as in claim 1 applies.

As per claim 8, Tyler as modified discloses the computer implemented method of claim 1, wherein determining, by the computer system, the threat level for the target domain based on the image comparison comprises: determining, by the computer system, the target domain to be suspicious in response to the image comparison indicating that content in the set of first landing page images and the set of second landing page images are not sufficiently similar to be confusing (Kumar Fig. 3B, Determine URL is Not Phishing URL at 318) and the known domain and the target domain are not owned by a same owner (Chien par. 64, If a match is not found, anti-phishing module then sends an instruction at a communication step 112 for browser 34a to display a warning). The same rationale as in claim 1 applies.

As per claim 9, Tyler as modified discloses the computer implemented method of claim 1, wherein the target domain is a newly observed domain identified from a newly observed domain stream (Tyler Fig. 5, proxy server 501 receives and analyzes target link 432a).

Claims 10-12 and 16-18 do not teach or further define over the limitations in claims 1-3 and 7-9 respectively.
As such, claims 10-12 and 16-18 are rejected for the same reasons as set forth in claims 1-3 and 7-9 respectively. Claims 19-20 do not teach or further define over the limitations in claims 1-2 respectively. As such, claims 19-20 are rejected for the same reasons as set forth in claims 1-2 respectively.

Claims 4-5 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Tyler et al. (US 2017/0195310, published Jul. 6, 2017), Chien (US 2018/0198796, published Jul. 12, 2018), Kumar et al. (US 2019/0104154, published Apr. 4, 2019) and O'Connor (US 2016/0352772, published Dec. 1, 2016).

As per claim 4, Tyler as modified discloses the computer implemented method of claim 1, but does not explicitly disclose wherein comparing, by the computer system, the set of first landing page images for the target domain and the set of second landing page images for the known domain to form the image comparison comprises: determining, by the computer system, a cosine similarity between the set of first landing page images and the set of second landing page images.

O'Connor teaches: determining, by the computer system, a cosine similarity between the set of first images and the set of second images (O'Connor Fig. 5, Determine Cosine Similarity Between Input Vectors and Corpus Vectors at 190; O'Connor par. 20, The content of the web page, such as images, text, etc. may be converted to one or more content vectors. These vectors are compared to a corpus of content vectors representing malicious content or legitimate content used in connection with malicious activity).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further modify the method of Tyler with the teaching of O'Connor for determining, by the computer system, a cosine similarity between the set of first landing page images and the set of second landing page images because a simple substitution of one known element (comparison of O'Connor) for another (comparison of Tyler as modified) would yield the predictable results of determining similarity of web pages.

As per claim 5, Tyler as modified discloses the computer implemented method of claim 1, but does not explicitly disclose wherein comparing, by the computer system, the set of first landing page images for the target domain and the set of second landing page images for the known domain to form the image comparison comprises: determining, by the computer system, a set of known domain embeddings; determining, by the computer system, a set of target domain embeddings; and determining, by the computer system, a cosine similarity between the set of first landing page images and the set of second landing page images using the set of known domain embeddings and the set of target domain embeddings.

O'Connor teaches: determining, by the computer system, a set of known domain embeddings; determining, by the computer system, a set of target domain embeddings; and determining, by the computer system, a cosine similarity between the set of first images and the set of second images using the set of known domain embeddings and the set of target domain embeddings (O'Connor Fig. 5, Determine Cosine Similarity Between Input Vectors and Corpus Vectors at 190; O'Connor par. 20, The content of the web page, such as images, text, etc. may be converted to one or more content vectors. These vectors are compared to a corpus of content vectors representing malicious content or legitimate content used in connection with malicious activity).
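The cosine-similarity comparison recited in claims 4-5 (page content converted to vectors, then compared against known vectors) can be sketched in a few lines. The embedding values below are hypothetical hand-picked numbers, not anything produced by the cited references; the point is only the metric itself.

```python
# Sketch of the claim 4-5 image comparison: landing-page screenshots
# are reduced to embedding vectors (the values here are hypothetical)
# and compared by cosine similarity, as in O'Connor's vector comparison.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means identical direction,
    # 0.0 means orthogonal (no similarity).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Hypothetical embeddings for a known landing page and a target page.
known_embedding = [0.8, 0.1, 0.3]
target_embedding = [0.79, 0.12, 0.31]
print(round(cosine_similarity(known_embedding, target_embedding), 3))
```

A score near 1.0 indicates the two landing pages are visually close in embedding space; a threshold on this score would drive the phishing / not-phishing determination.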
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further modify the method of Tyler with the teaching of O'Connor for determining, by the computer system, a set of known domain embeddings; determining, by the computer system, a set of target domain embeddings; and determining, by the computer system, a cosine similarity between the set of first landing page images and the set of second landing page images using the set of known domain embeddings and the set of target domain embeddings because a simple substitution of one known element (comparison of O'Connor) for another (comparison of Tyler as modified) would yield the predictable results of determining similarity of web pages.

Claims 13-14 do not teach or further define over the limitations in claims 4-5 respectively. As such, claims 13-14 are rejected for the same reasons as set forth in claims 4-5 respectively.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tyler et al. (US 2017/0195310, published Jul. 6, 2017), Chien (US 2018/0198796, published Jul. 12, 2018), Kumar et al. (US 2019/0104154, published Apr. 4, 2019) and Kohavi (US 2022/0070216, published Mar. 3, 2022).

As per claim 6, Tyler as modified discloses the computer implemented method of claim 1, but does not explicitly disclose wherein comparing, by the computer system, the set of first landing page images for the target domain and the set of second landing page images for the known domain to form the image comparison comprises: comparing, by the computer system, the set of first landing page images for the target domain and the set of second landing page images for the known domain using a machine learning model to form the image comparison, wherein the machine learning model is trained to compare images and determine a similarity between the images for the image comparison.
Kohavi teaches: comparing, by the computer system, the set of first images for the target domain and the set of second images for the known domain using a machine learning model to form the image comparison, wherein the machine learning model is trained to compare images and determine a similarity between the images for the image comparison (Kohavi par. 92, When the page images under analysis match or are sufficiently similar to those of known phishing page targets/brands, the page under analysis is likely to be a phishing page. For example, the phishing page of FIG. 3A is not identical to the genuine page of FIG. 3B but an approximate visual comparison based on machine learning techniques would establish that the page of FIG. 3A is an imitation of the genuine page of FIG. 3B and therefore likely a phishing page. Machine learning techniques are improved by each detection).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further modify the method of Tyler with the teaching of Kohavi for comparing, by the computer system, the set of first landing page images for the target domain and the set of second landing page images for the known domain using a machine learning model to form the image comparison, wherein the machine learning model is trained to compare images and determine a similarity between the images for the image comparison because a simple substitution of one known element (comparison of Kohavi) for another (comparison of Tyler as modified) would yield the predictable results of determining similarity of web pages.

Claim 15 does not teach or further define over the limitations in claim 6. As such, claim 15 is rejected for the same reasons as set forth in claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 11496510 B1, Fully Automated Target Identification Of A Phishing Web Site: A method of fully automated target identification of a phishing website if a website requests input data from the user with deceptive contents (logo, URL path, text in HTML), randomized/wrong data is provided, and the website redirects to a different domain related to the logo, URL path or text in HTML. By determining the existence of relationships, the website is detected as phishing and the phishing target is automatically identified.

US 20220337625 A1, Systems And Methods For Phishing Attack Protection Based On Identity Provider Verification: A computer system is provided. The computer system includes a memory and at least one processor coupled to the memory and configured to provide phishing attack protection based on identity provider verification. The at least one processor is further configured to capture an image of a browser web page to which the user has navigated and identify the domain name associated with the browser web page. The at least one processor is further configured to determine that the captured image matches an image of a known identity provider web page. The at least one processor is further configured to detect a phishing attempt in response to the determination that the images match and that the domain name associated with the browser web page differs from the domain name associated with the identity provider web page.

US 20220174092 A1, Detection Of Impersonated Web Pages And Other Impersonation Methods For Web-Based Cyber Threats: Identifying a malicious web page that impersonates a legitimate web page, including extracting HTML source and a certificate for a specified web page, parsing the extracted HTML to identify objects, forms, links, templates, images and logos embedded in the HTML, and determining whether or not the HTML source harvests user credentials. If the determining is negative, then marking the specified web page as clean. If the determining is affirmative, then verifying the origin and ownership of the extracted certificate by examining its digital signature to determine a possibility of an impersonation attempt, applying image recognition to the identified images and logos, and comparing the identified images and logos to known images and brand logos of the certificate owner. If the comparing is affirmative, then marking the web page as clean. If the comparing is negative, then marking the web page as suspicious and blocking the web page from being accessed.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHANG DO whose telephone number is (571) 270-7837. The examiner can normally be reached Monday-Friday, 8:00-5:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA, can be reached at (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHANG DO/Primary Examiner, Art Unit 2492

Prosecution Timeline

Aug 17, 2022
Application Filed
Oct 10, 2023
Response after Non-Final Action
Sep 19, 2025
Examiner Interview (Telephonic)
Sep 23, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603884: ACCESSING AN ENCRYPTED PLATFORM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603918: SECURITY SYSTEM FOR DETECTING MALICIOUS ACTOR'S OBSERVATION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12580961: TRAINING TRUSTED USERS OF AN ENTERPRISE NETWORK FOR PHISHING ATTACKS ON A PER-USER BASIS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579287: CHAINING MESSAGE AUTHENTICATION CODES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12542808: COMPUTER-BASED SYSTEMS FOR DETERMINING A LOOK-ALIKE DOMAIN NAMES IN WEBPAGES AND METHODS OF USE THEREOF
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+44.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
