DETAILED ACTION
2. This Office action is in response to the communication filed on 05/20/2024.
2. Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
3. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
4. Claims 1-3, 5, 7-9, 12-13, and 16-20 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Sambamoorthy et al. (US 2022/0329626 A1, hereafter Sambamoorthy).
Regarding claim(s) 1 and 16:
Sambamoorthy discloses a system comprising: a processing system; and a computer memory comprising instructions that, when executed by the processing system (see para. 10 for a computer system/network), cause the system to perform operations of:
capturing an image of a website that is loaded on a client device (see para. 97 where a webpage/website is opened by a user; see para. 166 for a user computer (i.e., a user computer/device used by a user); see para. 21 where an image of a webpage is captured to be analyzed);
generating a set of classification scores for the website based on website request information and the image using a threat assessment machine learning model executed locally on the client device; determining a website threat score for the website based on aggregating a subset of the set of classification scores (see para. 77 where input fields (e.g., username, password) (i.e., website request information) requesting/collecting user credentials and other features from the image are extracted; see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see paras. 133-134 where multiple scores (i.e., classification scores) are calculated for a webpage based on credential input field(s) (i.e., website request information) on the webpage and characteristics of the webpage using the webpage/website identification model, wherein the scores are combined/summed into a risk score (i.e., website threat score); see para. 166 where the instructions for the disclosed method are executed by a user computer); and
based on the website threat score for the website satisfying one or more threat thresholds, performing one or more actions reporting the website as fraudulent (see para. 135 where the webpage is flagged as malicious (i.e., fraudulent website verdict) when the risk score exceeds a threshold risk score (i.e., threat threshold); see paras. 144-145, 148 where security personnel, an administrator, or a recipient is prompted or notified (i.e., reported), wherein the email containing a link to the malicious webpage is deleted (i.e., preventing a user from further accessing the webpage)).
Regarding claim(s) 19:
See the rejection of claim 1.
Regarding claim(s) 2:
Sambamoorthy discloses:
determining to use the threat assessment machine learning model to assess the website for fraudulent behavior in response to detecting the website that is loaded on the client device (see para. 97 where a webpage/website is opened by a user; see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see para. 77 where input fields (e.g., username, password) requesting/collecting user credentials and other features from the image are extracted).
Regarding claim(s) 3:
Sambamoorthy discloses:
determining to use the threat assessment machine learning model based on verifying one or more low-computational filter conditions (see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see paras. 94-98 where a webpage/website is identified/detected in response to a trigger event and/or other trigger event (i.e., low-computational filter condition)).
Regarding claim(s) 5:
Sambamoorthy discloses:
wherein capturing the image of the website that is loaded on the client device includes capturing a screen capture of the website as it appears within a browser window to a user (see paras. 21, 97).
Regarding claim(s) 7:
Sambamoorthy discloses:
providing a corpus of fraudulent-based website information to a large generative model with instructions to determine website classification types and corresponding fraudulent associations; selecting a set of website classification types; and generating the threat assessment machine learning model based on the set of website classification types to generate a classification score for each of the website classification types for candidate websites (see paras. 9-10 where a spoof classification model or a fingerprint classifier (i.e., large generative model) is executed (i.e., with instructions) to compare the fingerprint of a target webpage to a corpus of untrusted/verified webpages (i.e., fraudulent-based website information) to classify the target webpage as a spoofed webpage or a trusted webpage (i.e., a set of website/webpage classification types as spoofed website/webpage and/or a trusted website/webpage is selected for classification), wherein the fingerprint comprises aggregated features (i.e., fraudulent associations) of the webpage; see para. 32 where potentially spoofed webpages are evaluated using a fingerprint classifier; see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see paras. 133-134 where multiple scores (i.e., classification scores) are calculated for a webpage based on credential input field(s) (i.e., website request information) on the webpage and characteristics of the webpage using the webpage/website identification model).
Regarding claim(s) 8:
Sambamoorthy discloses:
wherein the threat assessment machine learning model determines a fraudulent website verdict that the website is fraudulent before the client device receives additional user input associated with the website (see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see para. 135 where the webpage is flagged as malicious (i.e., fraudulent website verdict) when the risk score exceeds a threshold risk score (i.e., threat threshold); see para. 64 where the system receives feedback (i.e., additional user input associated with the website) from a user or administrator of the system indicating the accuracy or inaccuracy of the classification of the target webpage).
Regarding claim(s) 9:
Sambamoorthy discloses:
converting the image of the website to converted text before providing the converted text of the image to the threat assessment machine learning model (see paras. 25-26 where the system executes an Optical Character Recognition (OCR) model to extract text/characters from the image of a webpage; see para. 85 where features of a webpage/website include text strings extracted from the image of the webpage; see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website).
Regarding claim(s) 12 and 17:
Sambamoorthy discloses:
determining that the website threat score for the website satisfies a user threat threshold; and based on the user threat threshold being satisfied, notifying a user associated with the client device that the website is fraudulent (see paras. 135-136 where the webpage is flagged as malicious (i.e., fraudulent website verdict) when the risk score exceeds a threshold risk score or a threshold similarity score; see paras. 144-145, 148 where security personnel, an administrator, or a recipient is prompted or notified, wherein the malicious webpage is deleted).
Regarding claim(s) 13 and 18:
Sambamoorthy discloses:
determining that the website threat score for the website satisfies a global threat threshold; and based on the global threat threshold being satisfied, reporting the website to a fraudulent listener service (see paras. 135-136 where the webpage is flagged as malicious (i.e., fraudulent website verdict) when the risk score exceeds a threshold risk score or a threshold similarity score; see paras. 144-145, 148 where security personnel, an administrator, or a recipient is prompted or notified, wherein the malicious webpage is deleted, wherein the database of malicious webpages is updated with the identified malicious webpage/website, and wherein the email associated with the identified malicious webpage/website is transferred to a quarantine folder for review; see para. 75 where a list of at-risk websites is generated and populated).
Regarding claim(s) 20:
Sambamoorthy discloses:
wherein the threat assessment classification machine learning model does not use a remote resource to determine the set of classification scores for the website (see para. 122 where a database of verified webpage templates is generated, accessed, and maintained by the system; see para. 133 where the webpage/website identification model calculates multiple scores (i.e., classification scores) for a webpage by using verified webpage templates).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sambamoorthy in view of Shraim et al. (US 7457823 B2, hereafter Shraim).
Regarding claim(s) 4:
Sambamoorthy discloses:
wherein the one or more low-computational filter conditions (see paras. 95-98 where the website identification model is used to identify/detect a webpage/website in response to a trigger event and/or other trigger event (i.e., low-computational filter conditions)) include:
a first filter condition that verifies whether the website is associated with a commonly accessed website (see paras. 75-76 where the website/webpage trafficked at the greatest frequency by a user is identified and its features are extracted);
a second filter condition that verifies whether the website is not included on a fraudulent website list (see para. 94 where the website identification model is used/updated in response to a new website (i.e., website that is not identified as malicious); see para. 99 for a list of malicious websites/webpages (i.e., fraudulent website list));
a third filter condition that verifies whether the website includes [a threshold number of grammar and typographical errors] (see para. 16 where features include misspelled words or a frequency of misspellings in a webpage);
a fourth filter condition that provides verification based on a previous website that linked to the website (see paras. 139, 141 where a suspicious webpage/website (i.e., previous website) linked to the webpage/website is identified/detected as a feature of the webpage/website); and
a fifth filter condition that provides verification based on client device permissions requested by the website (see paras. 138, 140 where credential input fields (i.e., client device permissions) are identified as a feature of the webpage/website).
Sambamoorthy does not, but Shraim discloses:
a threshold number of grammar and typographical errors (see Shraim, col. 30, line 64- col. 31, line 2).
It would have been obvious to one having ordinary skill in the art to which the claimed invention pertains, before the effective filing date of the claimed invention, to modify Sambamoorthy's invention by enhancing it to include a threshold number of grammar and typographical errors, as taught by Shraim, in order to analyze a webpage for numerous spelling and grammar errors (Shraim, col. 30, lines 64-67).
6. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sambamoorthy in view of Niemela (US 2012/0317169 A1).
Regarding claim(s) 6:
Sambamoorthy does not, but Niemela discloses:
detecting one or more client device permissions requested by the website; determining one or more accepted permissions associated with the one or more client device permissions requested; and generating the website request information to indicate the one or more client device permissions and the one or more accepted permissions (see Niemela, paras. 16, 20-23, where web site’s features include Javascript, scripts, plug-ins, etc. (i.e., client device permissions) requested to be run/executed on a client terminal (i.e., client device) by the website; wherein features of a web site are determined, and wherein allowed features are determined; see paras. 31-32, 43 where a list of features that are allowed to be run/executed on the client terminal is created/modified).
It would have been obvious to one having ordinary skill in the art to which the claimed invention pertains, before the effective filing date of the claimed invention, to modify Sambamoorthy's invention by enhancing it to detect one or more client device permissions requested by the website; determine one or more accepted permissions associated with the one or more client device permissions requested; and generate the website request information to indicate the one or more client device permissions and the one or more accepted permissions, as taught by Niemela, in order to determine executable web site features and only allow approved/trusted web site features to be run/executed on a client terminal (Niemela, paras. 31-32).
7. Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sambamoorthy in view of Hunt et al. (US 9578048 B1, hereafter Hunt).
Regarding claim(s) 10:
Sambamoorthy discloses:
wherein classification types of the threat assessment machine learning model have [a binary value] indicating a fraudulent association (see para. 10 where a webpage is classified as a spoofed webpage or a trusted webpage; see paras. 89, 91, and/or 100 where a webpage/website identification model (i.e., threat assessment machine learning model) is developed and trained by artificial intelligence and/or machine learning technique(s) to be used for identifying/detecting a webpage/website as malicious based on features of the webpage/website; see para. 119 where a webpage/website is predicted not to be a high-risk webpage/website; see para. 124 where a webpage is identified as malicious; see para. 126 where a webpage is flagged as malicious).
Sambamoorthy does not, but Hunt discloses:
a binary value indicating a fraudulent association (see Hunt, col. 11, lines 66-67, where a website is classified as phishing or not phishing based on applying a phishing model; see col. 4, lines 10-20, where the phishing model includes phishing rules used to determine whether the features indicate that the website is performing phishing, wherein each of the phishing rules has a binary result (1 or 0); see col. 18, lines 7-9, where an indication of whether a website is performing phishing is determined based on a result of applying the final phishing rule).
It would have been obvious to one having ordinary skill in the art to which the claimed invention pertains, before the effective filing date of the claimed invention, to modify Sambamoorthy's invention by enhancing it to include a binary value indicating a fraudulent association, as taught by Hunt, in order to apply phishing rules to a website's features to determine whether the website is performing phishing (Hunt, col. 4, lines 10-13).
Regarding claim(s) 11:
Sambamoorthy discloses:
wherein determining the website threat score for the website includes: generating a classification type subset based on identifying fraudulent associations for each of the classification types; and generating the subset of the set of classification scores based on classification scores generated from the classification type subset (see para. 10 where a webpage is classified as a spoofed webpage or a trusted webpage; see paras. 133-134 where multiple scores (i.e., classification scores) are calculated for a webpage based on credential input field(s) (i.e., website request information) on the webpage and characteristics of the webpage using the webpage/website identification model, wherein the scores are combined/summed into a risk score (i.e., website threat score); see para. 135 where the webpage is flagged as malicious (i.e., fraudulent website verdict) when the risk score exceeds a threshold risk score (i.e., threat threshold)).
8. Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sambamoorthy in view of Mushtaq (US 11595437 B1).
Regarding claim(s) 14:
Sambamoorthy does not, but Mushtaq discloses:
wherein reporting the website includes providing a fraudulent website verdict, the image of the website, and the website request information to the fraudulent listener service (see Mushtaq, fig. 8 and col. 7, lines 58-64, where a user is notified/warned/reported about the threat associated with a malicious site, wherein a warning page allows a user to access a screen shot of a blocked page/site along with useful information about the threat associated with the malicious site/page; see col. 6, lines 19-25 for a threat from a malicious site associated with a phishing attack. In addition, see col. 11, lines 4-21; col. 25, lines 8-9).
It would have been obvious to one having ordinary skill in the art to which the claimed invention pertains, before the effective filing date of the claimed invention, to modify Sambamoorthy's invention by enhancing it such that reporting the website includes providing a fraudulent website verdict, the image of the website, and the website request information to the fraudulent listener service, as taught by Mushtaq, in order to provide a warning page to a user that allows the user to access a screen shot of a malicious site along with useful information about the threat associated with the malicious site (Mushtaq, col. 7, lines 61-64).
Regarding claim(s) 15:
Sambamoorthy discloses:
receiving, from the fraudulent listener service, one or more websites to add to a fraudulent website list for blocking fraudulent websites (see para. 75 where a list of at-risk webpages/websites is populated with a malicious/spoofed webpage when an administrator identifies a spoofing threat associated with the malicious/spoofed webpage; see para. 148 where a database of malicious webpage templates is updated when the administrator confirms the malicious webpage).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
MEISEL et al. (US 2025/0240323 A1), SNAPSHOT FOR ACTIVITY DETECTION AND THREAT ANALYSIS.
Jain et al. (US 12301587 B1), Automatic assessment of potentially malicious web content via web page investigator.
Grobman et al. (US 2025/0117476 A1), METHODS AND APPARATUS TO IDENTIFY STRUCTURAL SIMILARITY BETWEEN WEBPAGES.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUAN V. DOAN whose telephone number is 571-272-3809. The examiner can normally be reached on Monday – Thursday, 9:00am – 5:00pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PHILIP CHEA, can be reached on 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUAN V DOAN/Primary Examiner, Art Unit 2499 /PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499