DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
This action is responsive to the application filed on 7/30/2024.
Claims 1-20 are pending and have been examined.
Claims 1-20 are rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 8-11, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Orhan (US 20220174093 A1) in view of Yue et al. (US 20110126289 A1).
As to claim 1, Orhan teaches An apparatus, comprising: a processor; a non-transitory computer-readable medium; and instructions stored on the non-transitory computer-readable medium and translatable by the processor for implementing an anti-phishing browser plug-in and an anti-phishing module for: initiating an anti-phishing operation on the apparatus as a user enters a login credential on a web page originating from a website (See ¶ [0021], Teaches that FIGS. 3A, 3B, 3C and 3D are flowchart and depictions of another embodiment of the invention where the proposed control layer 2 detects whether the visited web page 10 is phishing or not in real-time. In step 301 the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.),
the anti-phishing operation comprising: generating a random number of phishing credentials based on the login credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
randomly selecting, from the random number of phishing credentials, a phishing credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and causing a browser application on the apparatus to submit the phishing credential to the website on behalf of the user (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and depending upon whether the phishing credential is accepted by the website, blocking or allowing access to the website (See ¶ [0021], Teaches that In step 308 a response page retrieved after form 8 submission is being collected and the content of the response is analyzed in background. It is checked whether the response web page of random credentials of submitted form includes any input form or not. In step 309 the response page has no input form 8. In step 310 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 315 form 8 has different fields than the original form. In step 316 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 318 the control layer 2 allows the user 14 to continue using the web site 12 or stop interaction with it. In step 319 form 8 has the same fields with the original form. In step 320 the proposed layer 2 allows the user 14 to interact with the web page 10 and does not block it.).
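For illustration only (not part of the record of either reference), the background-submission flow that Orhan describes in ¶ [0021] (steps 301-320) could be sketched as follows. The page/form objects, field names, and helper functions are hypothetical and are used solely to clarify the mapped control flow:

```python
# Hypothetical sketch of Orhan's real-time phishing check (para. [0021]):
# extract form fields, submit random credentials in the background, and
# classify the page from the response. All object shapes are assumptions.

import secrets
import string

def random_value(length=12):
    # Random filler data for one form field (step 307).
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def classify_page(page):
    """Return 'phishing' or 'not phishing' per Orhan's steps 301-320."""
    form = page.get("form")          # step 302: look for an input form
    if form is None:
        return "not phishing"        # steps 303-304: no form, allow page
    fields = form["fields"]          # step 306: extract field1, field2, ...
    submission = {f: random_value() for f in fields}  # step 307
    response = page["submit"](submission)             # background submit
    resp_form = response.get("form") # step 308: analyze response content
    if resp_form is None:
        return "phishing"            # steps 309-310: no form in response
    if resp_form["fields"] != fields:
        return "phishing"            # steps 315-316: different fields
    return "not phishing"            # steps 319-320: same fields, allow
```

A legitimate login page is expected to re-present the same form after a failed random-credential submission, whereas a phishing page typically accepts any input or presents a different form, which is the distinction the sketch encodes.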
However, it does not expressly teach the details of generating a random number of phishing credentials based on the login credential.
Yue et al., from analogous art, teaches generating a random number of phishing credentials based on the login credential (See ¶¶ [0021]-[0022], [0024], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters. The value of S should be large enough to hinder or foil a phisher's attempts to verify the U/P credentials. At the same time, the value of S cannot be too large as this will impose processing constraints and time delays on client computer 10. A number of statistical approaches can be used to determine a value of S that balances the above criteria for a given application. Applying such statistical analyses, it has been found that S should be equal to or greater than 3, while not needing to be greater than 10 for most applications, thereby minimizing effects on client computer 10.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into Orhan in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
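For illustration only (not part of the record of either reference), the fake-credential generation that Yue et al. describe in ¶¶ [0021]-[0022] and [0024] could be sketched as follows. The substitution rule shown is a hypothetical example; Yue et al. leave the particular rule open:

```python
# Hypothetical sketch of Yue's generation of S credentials (paras.
# [0021]-[0022]): the valid U/P credential serves as the base from which
# S-1 fake credentials are derived by a substitution rule, so the fakes
# look name-like rather than random. The rule below is an assumption.

import secrets

def substitute(text, seed):
    # Hypothetical substitution rule: rotate letters by a per-credential
    # offset so "janetc" maps to another plausible-looking string.
    offset = (seed % 9) + 1
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr(base + (ord(ch) - base + offset) % 26))
        else:
            out.append(ch)
    return "".join(out)

def generate_credential_set(username, password, s=5):
    """Return S credentials: the valid one plus S-1 derived fakes.

    Yue suggests 3 <= S <= 10 (para. [0024]) to balance hindering a
    phisher's verification against client-side processing cost.
    """
    creds = [(username, password)]
    for i in range(1, s):
        creds.append((substitute(username, i), substitute(password, i)))
    secrets.SystemRandom().shuffle(creds)  # hide which credential is real
    return creds
```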
As to claim 2, the combination of Orhan and Yue et al. teaches the apparatus according to claim 1 above. Orhan further teaches determining whether the web page comes from a good website, a bad website, or an unknown website (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage.
Yue et al., from analogous art, teaches wherein the instructions are further translatable by the processor for: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage (See ¶¶ [0021]-[0022], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 3, the combination of Orhan and Yue et al. teaches the apparatus according to claim 2 above. Orhan further teaches wherein the web page resides at a universal resource locator (URL), wherein the determining comprises performing a lookup operation on the URL over an offenders database, and wherein the offenders database stores a plurality of URLs, each respective URL of the plurality of URLs having a phishing status indicative of whether the respective URL is a good URL, a phishing URL, or a new URL (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
As to claim 4, the combination of Orhan and Yue et al. teaches the apparatus according to claim 3 above. Orhan further teaches wherein the instructions are further translatable by the processor for: responsive to not finding the URL of the web page in the offenders database, parsing the user input to obtain the login credential (See ¶ [0021], Teaches that the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.).
However, it does not expressly teach the details of performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true.
Yue et al., from analogous art, teaches performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true (See ¶¶ [0038]-[0039], Teaches that computer 50 uses the provided substitution rule to construct a set of derived or possible U/P credentials based upon the U/P submission that caused the failed login attempt. Next, at step 64, computer 50 compares each constructed/possible U/P credential with those in database 52. If there is a match with one of the legitimate U/P credentials, the chances are high that the U/P submission causing the failed login attempt was one generated by the present invention during a phishing attack as described above. If there is no match, the failed login generated by the U/P submission was most likely caused by an innocent error. If the failure of a login attempt is caused by a phisher who is verifying any one of the S U/P credentials it received, the above-described procedure will readily determine if the U/P submission originated from the set of S U/P credentials. Since computer 50 uses the same substitution rule to construct derived U/P credentials based on the U/P submission and then looks for a matching U/P credential with those in database 52, the probability is very high that a match is indicative of U/P submission that originated from a phisher trying to verify S U/P credentials provided thereto by a client computer 10 as described above. The legitimate website can then implement security to thwart the phisher.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
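For illustration only (not part of the record of either reference), the server-side check that Yue et al. describe in ¶¶ [0038]-[0039] could be sketched as follows: on a failed login, the website applies the shared substitution rule to the submitted credential and looks for a match among its legitimate credentials, a match indicating a phisher verifying harvested credentials. The Caesar-shift rule and parameter names are assumptions:

```python
# Hypothetical sketch of Yue's server-side detection (paras. [0038]-[0039]).
# The shared substitution rule is assumed to be a letter shift by k.

def shift(name, k):
    # Hypothetical substitution rule: Caesar-shift letters by k (mod 26).
    out = []
    for ch in name:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr(base + (ord(ch) - base + k) % 26))
        else:
            out.append(ch)
    return "".join(out)

def is_phisher_probe(submitted, legit_usernames, max_offset=9):
    """On a failed login, test whether the submitted username could have
    been derived from a legitimate one by the shared substitution rule
    (step 64: compare each constructed candidate against the database)."""
    for k in range(1, max_offset + 1):
        if shift(submitted, -k) in legit_usernames:  # undo a +k shift
            return True   # likely a phisher verifying harvested credentials
    return False          # likely an innocent typo
```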
As to claim 8, Orhan teaches a method, comprising: initiating, by an anti-phishing module on a user device, an anti-phishing operation as a user enters a login credential on a web page originating from a website (See ¶ [0021], Teaches that FIGS. 3A, 3B, 3C and 3D are flowchart and depictions of another embodiment of the invention where the proposed control layer 2 detects whether the visited web page 10 is phishing or not in real-time. In step 301 the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.),
the anti-phishing operation comprising: generating a random number of phishing credentials based on the login credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
randomly selecting, from the random number of phishing credentials, a phishing credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and causing a browser application on the user device to submit the phishing credential to the website on behalf of the user (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and depending upon whether the phishing credential is accepted by the website, blocking or allowing access to the website (See ¶ [0021], Teaches that In step 308 a response page retrieved after form 8 submission is being collected and the content of the response is analyzed in background. It is checked whether the response web page of random credentials of submitted form includes any input form or not. In step 309 the response page has no input form 8. In step 310 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 315 form 8 has different fields than the original form. In step 316 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 318 the control layer 2 allows the user 14 to continue using the web site 12 or stop interaction with it. In step 319 form 8 has the same fields with the original form. In step 320 the proposed layer 2 allows the user 14 to interact with the web page 10 and does not block it.).
However, it does not expressly teach the details of generating a random number of phishing credentials based on the login credential.
Yue et al., from analogous art, teaches generating a random number of phishing credentials based on the login credential (See ¶¶ [0021]-[0022], [0024], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters. The value of S should be large enough to hinder or foil a phisher's attempts to verify the U/P credentials. At the same time, the value of S cannot be too large as this will impose processing constraints and time delays on client computer 10. A number of statistical approaches can be used to determine a value of S that balances the above criteria for a given application. Applying such statistical analyses, it has been found that S should be equal to or greater than 3, while not needing to be greater than 10 for most applications, thereby minimizing effects on client computer 10.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into Orhan in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 9, the combination of Orhan and Yue et al. teaches the method according to claim 8 above. Orhan further teaches determining whether the web page comes from a good website, a bad website, or an unknown website (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
However, it does not expressly teach the details of further comprising: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage.
Yue et al., from analogous art, teaches further comprising: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage (See ¶¶ [0021]-[0022], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 10, the combination of Orhan and Yue et al. teaches the method according to claim 9 above. Orhan further teaches wherein the web page resides at a universal resource locator (URL), wherein the determining comprises performing a lookup operation on the URL over an offenders database, and wherein the offenders database stores a plurality of URLs, each respective URL of the plurality of URLs having a phishing status indicative of whether the respective URL is a good URL, a phishing URL, or a new URL (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
As to claim 11, the combination of Orhan and Yue et al. teaches the method according to claim 10 above. Orhan further teaches further comprising responsive to not finding the URL of the web page in the offenders database, parsing the user input to obtain the login credential (See ¶ [0021], Teaches that the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.).
However, it does not expressly teach the details of performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true.
Yue et al., from analogous art, teaches performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true (See ¶¶ [0038]-[0039], Teaches that computer 50 uses the provided substitution rule to construct a set of derived or possible U/P credentials based upon the U/P submission that caused the failed login attempt. Next, at step 64, computer 50 compares each constructed/possible U/P credential with those in database 52. If there is a match with one of the legitimate U/P credentials, the chances are high that the U/P submission causing the failed login attempt was one generated by the present invention during a phishing attack as described above. If there is no match, the failed login generated by the U/P submission was most likely caused by an innocent error. If the failure of a login attempt is caused by a phisher who is verifying any one of the S U/P credentials it received, the above-described procedure will readily determine if the U/P submission originated from the set of S U/P credentials. Since computer 50 uses the same substitution rule to construct derived U/P credentials based on the U/P submission and then looks for a matching U/P credential with those in database 52, the probability is very high that a match is indicative of U/P submission that originated from a phisher trying to verify S U/P credentials provided thereto by a client computer 10 as described above. The legitimate website can then implement security to thwart the phisher.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 15, Orhan teaches A computer program product comprising a non-transitory computer-readable medium storing instructions translatable by a processor for implementing an anti-phishing browser plug-in and an anti-phishing module on a user device for: initiating an anti-phishing operation on the user device as a user enters a login credential on a web page originating from a website (See ¶ [0021], Teaches that FIGS. 3A, 3B, 3C and 3D are flowchart and depictions of another embodiment of the invention where the proposed control layer 2 detects whether the visited web page 10 is phishing or not in real-time. In step 301 the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.),
the anti-phishing operation comprising: generating a random number of phishing credentials based on the login credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
randomly selecting, from the random number of phishing credentials, a phishing credential (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and causing a browser application on the user device to submit the phishing credential to the website on behalf of the user (See ¶ [0021], Teaches that In step 307 random credentials are being generated for a first field (field1) and a second field (field2) and form 8 is submitted in background using these random data.);
and depending upon whether the phishing credential is accepted by the website, blocking or allowing access to the website (See ¶ [0021], Teaches that In step 308 a response page retrieved after form 8 submission is being collected and the content of the response is analyzed in background. It is checked whether the response web page of random credentials of submitted form includes any input form or not. In step 309 the response page has no input form 8. In step 310 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 315 form 8 has different fields than the original form. In step 316 the control layer 2 marks unknown web page 10 as phishing and blocks it. In step 318 the control layer 2 allows the user 14 to continue using the web site 12 or stop interaction with it. In step 319 form 8 has the same fields with the original form. In step 320 the proposed layer 2 allows the user 14 to interact with the web page 10 and does not block it.).
However, it does not expressly teach the details of generating a random number of phishing credentials based on the login credential.
Yue et al., from analogous art, teaches generating a random number of phishing credentials based on the login credential (See ¶¶ [0021]-[0022], [0024], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters. The value of S should be large enough to hinder or foil a phisher's attempts to verify the U/P credentials. At the same time, the value of S cannot be too large as this will impose processing constraints and time delays on client computer 10. A number of statistical approaches can be used to determine a value of S that balances the above criteria for a given application. Applying such statistical analyses, it has been found that S should be equal to or greater than 3, while not needing to be greater than 10 for most applications, thereby minimizing effects on client computer 10.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yue et al. into Orhan in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 16, the combination of Orhan and Yue et al. teaches the computer program product according to claim 15 above. Orhan further teaches determining whether the web page comes from a good website, a bad website, or an unknown website (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage.
Yue et al., from analogous art, teaches wherein the instructions are further translatable by the processor for: receiving an indication that the user is entering the login credential on the web page; capturing user input including the login credential being entered on the webpage (See ¶¶ [0021]-[0022], Teaches that process step 20 intercepts the user-provided (and assumed to be valid for purpose of this description) U/P credential and uses it to generate (S−1) bogus or fake U/P credentials where the valid U/P credential serves as the base credential from which the (S−1) fake credentials are derived. A resulting set of S U/P credentials includes the valid U/P credential and the (S−1) fake U/P credentials. For the case where a user heeds the warning presented at decision block 12, process step 22 generates a set of S bogus or fake U/P credentials. The generation of S fake U/P credentials can be accomplished in a variety of ways without departing from the scope of the present invention. For example, an initial fake U/P credential could be created and (S−1) additional fake U/P credentials could be created therefrom (e.g., via a substitution rule as will be explained further below). In order to appear legitimate, the fake username should generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers since most people have usernames that do not comprise random characters.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this further teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
As to claim 17, the combination of Orhan and Yue et al. teaches the computer program product according to claim 16 above. Orhan further teaches wherein the web page resides at a universal resource locator (URL), wherein the determining comprises performing a lookup operation on the URL over an offenders database, and wherein the offenders database stores a plurality of URLs, each respective URL of the plurality of URLs having a phishing status indicative of whether the respective URL is a good URL, a phishing URL, or a new URL (See ¶ [0020], Teaches that the visited URL 16 is checked within existing blacklist 18 and whitelist 20 of the control layer 2. There are three different possible values for the web page 10 being visited: URL 16 is in whitelist 20, in blacklist 18, URL 16 is neither of the list, thus it is unknown. In step 203 URL 16 is found in whitelist, so the website 12 is known, and it is safe. In step 204 the control layer 2 allows the viewing of the webpage 10 and all further interaction. Thus, there is no further involvement of the proposed control layer 2 until the user 14 visits another web page 10. This guarantees that the user 14 is using the safe/known websites 12 and can submit any sensitive data to these websites and perform any activity on them. In step 205 the URL 16 is found in blacklist 18. In step 206 the web page 10 is blocked. In step 207 the user 14 is informed that the web page 10 is malicious/phishing. In step 208 the URL 16 is not listed in either whitelist 20 or blacklist 18 and the web page 10 is still unknown).
As to claim 18, the combination of Orhan and Yue et al. teaches the computer program product according to claim 17 above. Orhan further teaches wherein the instructions are further translatable by the processor for: responsive to not finding the URL of the web page in the offenders database, parsing the user input to obtain the login credential (See ¶ [0021], Teaches that the user 14 visits an unknown web page 10 (web page might be safe or malicious). In step 302 the control layer 2 checks if there is a form 8 in the web page 10. The form 8 examples are shown in FIGS. 3C and 3D. In step 303 unknown web page 10 has no input form 8. In step 304 the control layer 2 allows the user 14 to interact with the web page 10 and does not block it. For this case the web page 10 is marked as not phishing. In step 305 the form 8 is found in the web page 10. In step 306 the control layer 2 extracts fields from presented form 8. As illustrated, a first field (field1) and a second field (field2) are extracted.).
However, it does not expressly teach the details of performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true.
Yue et al., from analogous art, teaches performing a lookup operation on the login credential over a registration database; and responsive to finding the login credential in the registration database, setting a phishing status to indicate that the URL of the web page comes from a new website and setting a credential status to indicate that the login credential is true, wherein the initiating is performed responsive to the phishing status being set to new and the credential status being set to true (See ¶¶ [0038]-[0039], Teaches that computer 50 uses the provided substitution rule to construct a set of derived or possible U/P credentials based upon the U/P submission that caused the failed login attempt. Next, at step 64, computer 50 compares each constructed/possible U/P credential with those in database 52. If there is a match with one of the legitimate U/P credentials, the chances are high that the U/P submission causing the failed login attempt was one generated by the present invention during a phishing attack as described above. If there is no match, the failed login generated by the U/P submission was most likely caused by an innocent error. If the failure of a login attempt is caused by a phisher who is verifying any one of the S U/P credentials it received, the above-described procedure will readily determine if the U/P submission originated from the set of S U/P credentials. Since computer 50 uses the same substitution rule to construct derived U/P credentials based on the U/P submission and then looks for a matching U/P credential with those in database 52, the probability is very high that a match is indicative of U/P submission that originated from a phisher trying to verify S U/P credentials provided thereto by a client computer 10 as described above. The legitimate website can then implement security to thwart the phisher.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this further teaching of Yue et al. into the combination of Orhan and Yue et al. in order to generate S fake U/P credentials that appear legitimate by making them generally look like a real username (e.g., janetc, carlwork, etc.) and not a set of randomly-generated letters/numbers (See Yue et al. ¶ [0022]).
Claims 5-7, 12-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Orhan (US 20220174093 A1) in view of Yue et al. (US 20110126289 A1), and further in view of Singh (US 20230362193 A1).
As to claim 5, the combination of Orhan and Yue et al. teaches the apparatus according to claim 4 above. However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation.
Singh, from analogous art, teaches wherein the instructions are further translatable by the processor for: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 6, the combination of Orhan and Yue et al. teaches the apparatus according to claim 1 above. However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked.
Singh, from analogous art, teaches wherein the instructions are further translatable by the processor for: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 7, the combination of Orhan and Yue et al. teaches the apparatus according to claim 1 above. However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed.
Singh, from analogous art, teaches wherein the instructions are further translatable by the processor for: responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.). Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to notify the user if the site passes the test.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 12, the combination of Orhan and Yue et al. teaches the method according to claim 11 above. However, it does not expressly teach the details of further comprising: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation.
Singh, from analogous art, teaches further comprising: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 13, the combination of Orhan and Yue et al. teaches the method according to claim 8 above. However, it does not expressly teach the details of further comprising: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked.
Singh, from analogous art, teaches further comprising: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 14, the combination of Orhan and Yue et al. teaches the method according to claim 8 above. However, it does not expressly teach the details of further comprising: responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed.
Singh, from analogous art, teaches further comprising: responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.). Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to notify the user if the site passes the test.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
As to claim 19, the combination of Orhan and Yue et al. teaches the computer program product according to claim 18 above. However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation.
Singh, from analogous art, teaches wherein the instructions are further translatable by the processor for: updating the offenders database to reflect whether the website passed or failed the anti-phishing operation (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. 
For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
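For illustration only, the blacklist/whitelist bookkeeping Singh describes (recording whether a site passed or failed the test so later visits can be allowed or forbidden without re-testing) can be sketched as follows; the function names and dictionary schema are hypothetical and are not drawn from Singh's disclosure:

```python
def record_test_result(offenders_db: dict, url: str, passed: bool) -> None:
    """Update the database to reflect whether the site passed the test.

    A failed site behaves like a blacklist entry (visiting is forbidden);
    a passed site behaves like a whitelist entry (visiting is allowed).
    """
    offenders_db[url] = {"passed": passed}

def is_blocked(offenders_db: dict, url: str) -> bool:
    """Return True only for sites recorded as having failed the test."""
    entry = offenders_db.get(url)
    return entry is not None and not entry["passed"]
```

Sharing such a database across user or workspace sessions, as the cited paragraphs describe, would amount to keeping this store in a shared repository and refreshing it periodically by re-running the test.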
As to claim 20, the combination of Orhan and Yue et al. teaches the computer program product according to claim 15 above. However, it does not expressly teach the details of wherein the instructions are further translatable by the processor for: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked; and responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed.
Singh, from analogous art, teaches wherein the instructions are further translatable by the processor for: responsive to the phishing credential being accepted by the website, generating a message indicating that the website has failed the phishing operation and, therefore, access to the website is to be blocked; and responsive to the phishing credential being rejected by the website, generating a message indicating that the website has passed the phishing operation and, therefore, access to the website is allowed (See ¶¶ [0110]-[0111], [0125], Teaches that in the case of an affirmative or successful reply from the suspicious site, the system may determine to execute a precautionary operation, such as terminating the request. This is because an affirmative or successful reply to an incorrect login may indicate the suspicious site is a phishing site, as described in the examples of FIGS. 2B-2C and 3A-3B above. In some examples, the system may terminate the visual instance of the browser and/or may render a notification for the user, such as via a dialog. In some examples, the system may add the site to a blacklist of unsafe or phishing sites, and may thereafter forbid visiting the site. In some examples, the system may share such a blacklist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared blacklist periodically by repeating the method 500. In the case of an unsuccessful reply or an error message, the system may determine to proceed with the request. This is because an unsuccessful reply or an error message may indicate the suspicious site is not a phishing site, as described above. In some examples, the system may add the site to a whitelist of safe or legitimate sites, and may thereafter allow visiting the site. 
In some examples, the system may share such a whitelist across user sessions or workspace sessions, such as by updating a repository on the workspace server or in the cloud, and/or may update the shared whitelist periodically by repeating the method 500. Next, responsive to the sign in control having reappeared, the workspace application, workspace server, or VDA can proceed 614 with the request. For example, the system may add the site to a whitelist of safe or legitimate sites, share such a whitelist across user sessions or workspace sessions, and/or thereafter allow visiting the site. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to notify the user if the site passes the test.).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Singh into the combination of Orhan and Yue et al. in order to receive a request to visit the suspected website, send an incorrect password to the suspected website, receive a reply from the suspected website, and determine, based on the reply to the incorrect password, whether to execute a precautionary operation (See Singh ¶ [0014]).
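For illustration only, the pass/fail notification recited in claim 20 (a message indicating the site failed the test and is blocked when the phishing credential is accepted, or passed the test and is allowed when it is rejected) can be sketched as follows; the function name and message wording are hypothetical and are not drawn from the claim or from Singh's disclosure:

```python
def access_message(url: str, credential_accepted: bool) -> str:
    """Generate the notification for the outcome of the anti-phishing test.

    If the deliberately incorrect credential was accepted, the site failed
    the test and access is blocked; if it was rejected, the site passed
    the test and access is allowed.
    """
    if credential_accepted:
        return f"{url} failed the anti-phishing test; access is blocked."
    return f"{url} passed the anti-phishing test; access is allowed."
```

Rendering such a message via a dialog, as in Singh's cited paragraphs, would correspond to the notification displayed when the request is terminated or allowed to proceed.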
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
GOUTAL et al. (US 20190327268 A1) teaches a computer-implemented method of preventing leakage of user credentials to phishing websites that may comprise capturing user credentials input to a website; updating a stored list of trusted website credentials upon determining that the domain of the URL of the website is present in a stored list of trusted websites; generating a hash of the captured user credentials; determining whether the hashed user credentials match one of the hashed user credentials in the list of trusted website credentials; and when a match is found, requesting input whether the website is trusted or whether the website is unknown and/or untrusted; sending the URL to a remote computer server when the input indicates that the website is unknown and/or untrusted and disallowing submission of the user credentials to the website; adding the domain of the URL to the stored list of trusted websites, adding the generated hash of the captured user credentials to a stored list of trusted website credentials and allowing submission of the user credentials to the website.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James R Hollister whose telephone number is (571)270-3152. The examiner can normally be reached Mon - Fri 7:30 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
James Hollister
/J.R.H./Examiner, Art Unit 2499 3/4/26
/PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499