Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant is reminded that in order for a patent issuing on the instant application to obtain priority under 35 U.S.C. 119(a)-(d) or (f), 365(a) or (b), or 386(a) or (b), based on priority papers filed in a parent or related Application No. 18/766,585 (to which the present application claims the benefit under 35 U.S.C. 120, 121, 365(c), or 386(c) or is a reissue application of a patent issued on the related application), a claim for such foreign priority must be timely made in this application. To satisfy the requirement of 37 CFR 1.55 for a certified copy of the foreign application, applicant may simply identify the parent nonprovisional application or patent for which reissue is sought containing the certified copy.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 1, “a confirmation unit for confirming…”
Claim 1, “a check unit for checking…”
Claim 1, “an automatic determination unit for grasping whether the uploaded message matches a previously stored URL message and outputting”
Claim 1, “a guidance unit for transmitting the number of times of transmitting the message and probability to the user terminal”
Claim 2, “a feature identification unit for storing”
Claim 3, “a site registration unit for registering the URL”
Claim 4, “an organization linking unit for transferring”
Claim 5, “a click blocking unit for permanently deleting”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The following claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
Claim 1, “a confirmation unit for confirming…”
Claim 1, “a check unit for checking…”
Claim 1, “an automatic determination unit for grasping whether the uploaded message matches a previously stored URL message and outputting”
Claim 1, “a guidance unit for transmitting the number of times of transmitting the message and probability to the user terminal”
Claim 2, “a feature identification unit for storing”
Claim 3, “a site registration unit for registering the URL”
Claim 4, “an organization linking unit for transferring”
Claim 5, “a click blocking unit for permanently deleting”.
The above claim limitations with the above placeholders invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed functions. In particular, the specification merely restates the claimed functions of the above placeholders in the paragraphs listed below (note that these paragraphs recite the placeholders themselves, which remain indefinite):
[0013] “by hardware, a unit realized by software, and a unit realized using both”
[0030] “The confirmation unit 310 registers the uploaded message as a URL message, and compares the message with messages continuously uploaded by the check unit 320 and counts whether the messages are the same”
[0030] “the check unit 320 may check the transmitter information, transmission time, and content of the message”
[0032] “The automatic determination unit 330 may grasp whether the uploaded message matches a previously stored URL message and output, when the uploaded message matches the previously URL message, the accumulated number of times of checking and a probability of being a scam message on the basis of the number of times of checking”
[0034] “The guidance unit 340 may transmit the number of times of transmitting the message and the probability to the user terminal 100”
[0035] “…the feature identification unit 350 may store the message as a URL message and count the message to increase the number of times whenever a check is requested”
[0051] “When a previously stored URL message satisfies preset conditions, the site registration unit 360 may register the URL in the URL message as a scam site”
[0052] “the click blocking unit 380 may permanently delete the message from the user terminal 100 and block the originating number of the message”.
There is no disclosure of any particular structure, either explicitly or inherently, to perform the above functions. The use of the above placeholder terms is not adequate structure for performing the claimed functions because it does not describe any particular structure for performing those functions. As would be recognized by those of ordinary skill in the art, the claimed functions refer to verifying users or devices and can be performed in any number of ways in hardware, software, or a combination of the two. The specification does not provide sufficient detail such that one of ordinary skill in the art would understand which structure or structures perform(s) the claimed functions. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Claim 1 recites “…a user terminal for uploading, when a message including a URL is received, the message, and outputting the number of times of transmitting the message and a probability of being a scam message; and an automatic determination service providing server…outputting, when the uploaded message matches the previously URL message, an accumulated number of times of transmitting the message and a probability of being a scam message”, emphasis in bold-italic. The above excerpt of claim 1 recites that the user terminal performs the “outputting” and that the automatic determination service providing server performs the same “outputting”. It is not clear from the above recitation which of these entities, i.e., the user terminal or the server, actually performs the “outputting”. For examination purposes, the “outputting” is interpreted as being performed by the server, consistent with the instant application in publication paragraph [0052] “…and informs the user terminal 100 of the possibility of being a scam message while informing how many identical messages have been transmitted (number of times)” and as further illustrated in Figure 4, S4700.
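For illustration of the examiner's interpretation only, the server-side "outputting" can be sketched as follows. All names and the count-to-probability mapping below are hypothetical assumptions for illustration, not the applicant's disclosed structure.

```python
# Illustrative sketch: the server (not the user terminal) performs the
# "outputting" of the accumulated transmission count and a scam probability.
# The class name and the count-to-probability mapping are assumptions.

class AutomaticDeterminationServer:
    def __init__(self):
        # previously stored URL messages -> accumulated upload count
        self.url_message_counts = {}

    def handle_upload(self, message: str) -> tuple[int, float]:
        """Match the uploaded message against stored URL messages, update
        the accumulated count, and output (count, probability)."""
        count = self.url_message_counts.get(message, 0) + 1
        self.url_message_counts[message] = count
        # Hypothetical mapping: probability grows with the number of times
        # the same message has been uploaded, saturating at 1.0.
        probability = min(1.0, count / 10)
        return count, probability

server = AutomaticDeterminationServer()
for _ in range(3):
    count, prob = server.handle_upload("Your parcel is held: http://example.test/pay")
print(count, prob)  # 3 0.3
```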
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-5 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention, as described in the rejection under 35 U.S.C. 112(b) above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon (KR 101060122 B1) in view of Sandke (US 20170331843 A1), and further in view of Kim (US 20250086275 A1).
Regarding claim 1, Yoon teaches a system for automatically determining a scam message including a scam URL ([Content of Invention] “The present invention also provides a method and apparatus for providing a message to a called party by analyzing a spam type or sender information using a message content, a URL included in a message, or an attached file, Another object of the present invention is to provide a method and an apparatus for processing spam messages”); the system comprising:
a user terminal (communication terminal 30; Fig. 1) for uploading, when a message including a URL is received, the message ([0005] “As described above, damage caused by spam messages such as phishing messages that engage in fraudulent behavior to mobile communication terminal users in various and intelligent ways as well as advertisement-type spam messages for advertisement is increasing to users”; [0025] “Referring to fig. 1, the entire system according to the present embodiment includes a transmitting terminal 10, a plurality of receiving terminals 30-1, 30-2, ... , 30-n (hereinafter, collectively referred to as 30), and a spam recognition server 50”), further disclosed in Yoon [0040-0041], where terminals 30 send to spam recognition server 50 inquiries including the message content to determine if the message is a scam, where the inquiries include the message content and url in the message as disclosed in [0010];
an automatic determination service providing server including a confirmation unit for confirming, when the user terminal receives a message, whether a URL is included in the message ([0027] “Upon receiving the message from the calling terminal 10, the called terminal 30 can confirm whether the corresponding message is spam by using the spam recognition server 50 connected through the communication network”; [0040] “the spam recognition server 50 uses how many received terminals 30 have transmitted a message having the same content or a message including the same attached file for a certain period of time. Similarly, when the url is included in the message, how many messages including the same url or access sites including the same access information (which may be accessed to the same site even if the url is different) are transmitted to the received terminals 30 for a predetermined period may be used to determine whether spam is present”), further disclosed in [0040-0041], where terminals 30 send to spam recognition server 50 inquiries, which include the content of the message including the message URL content as disclosed in, e.g., [0010], to determine if the message is a scam by determining/confirming how many messages include the same URL;
a check unit for checking, when the URL is included, transmitter information, transmission time, and content of the message ([0014] “The step of transmitting to the spam recognition server combined through the communications network the spam query which… includes at least, one among the calling number, the message text, the attached file included in message, url within message… the spam recognition server recognizes the message as spam when receiving a spam query identical to at least one of the calling number, the message content, the attached file, and an access address to the site from a preset number or more terminal devices for a predetermined time.”; [0040] “…the spam recognition server 50 uses how many messages have been sent for a predetermined period (m hours described above) with the same calling number (i.e. transmitter information) that is not stored in the phone number information of each of the received terminals 30. In addition, as a method of using message content or an attached file, the spam recognition server 50 uses how many received terminals 30 have transmitted a message having the same content or a message (i.e. content of the message) including the same attached file for a certain period of time. Similarly, when the url is included in the message, how many messages including the same url or access sites including the same access information (which may be accessed to the same site even if the url is different) are transmitted to the received terminals 30 for a predetermined period may be used to determine whether spam is present”; [0067] “Similarly, the site analysis unit 655 of the control unit 650 functions to analyze the accessed site using the url included in the message received by the received terminal 30 in order to determine whether spam is present, extract sender information (i.e. transmitter information) and/or spam type.”, where the server 50 checks, in addition to the url in the message, for caller/sender/transmitter information and content of the message, and a certain time receiving the messages corresponding to transmission time);
an automatic determination unit for grasping whether the uploaded message matches a previously stored URL message ([0041] “Here, a character comparison method such as a number, a symbol, a letter, or the like may be used as a method of determining the identity of the calling number, the message content, or the url, and a comparison method of hash values may be used to determine the identity of the attached file or the site”; [0063] “The communication unit 610 is for communicating with a terminal device (that is, the called terminal 30) through a communication network, and will be obvious to those skilled in the art, and a detailed description will be omitted”; [0064] “and a comparison data storage unit 636 for storing comparison data for analyzing whether or not spam, caller information, and spam type are displayed as shown in FIG. 8”).
However, Yoon does not explicitly teach an automatic determination unit outputting an accumulated number of times of transmitting the message and a probability of being a scam message on the basis of the number of times of transmitting the message, and a guidance unit for transmitting the number of times of transmitting the message and the probability to the user terminal.
Sandke further teaches an automatic determination unit for… outputting an accumulated number of times of transmitting the message ([0085] “The method can also include steps such as updating, at 614, URL counts, updating, at 616, a count of messages the normalized URL appeared in, updating, at 618, a total email threat protection system score for the normalized URL, updating, at 620, a total IP reputation score for the normalized URL, updating, at 622, a set of customers the normalized URL has appeared in, as well as combinations thereof”); and
a probability of being a scam message on the basis of the number of times of transmitting the message ([0064] “The score may be based on how broadly the URL and/or domain has been seen within a given timeframe, how many customers received this URL, IP reputation, Spam score, or other metadata and historical data”; [0068] “The targeted attack preparation score values can range between zero and one, with zero being very susceptible to a malicious attack, and one signifying very protected from potential attacks”).
Yoon and Sandke are analogous art, both directed to detecting suspicious URLs based on characteristics similar to those found in known spam indicators. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon to incorporate the teachings of Sandke to determine the probability based on the number of times of transmitting the message. Doing so would determine whether the message is spam based on a threshold analysis and whether it requires further scrutiny (Sandke [0019] “A protected user's (e.g., an email user who has emails that are being analyzed using the present technology) email patterns may be analyzed and used to build a behavioral history of that specific user and to determine which types of email are suspicious and require further scrutiny”).
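For illustration only, Sandke's teaching of scoring a URL by how often it has been seen within a given timeframe (Sandke [0064], [0085]), yielding a value between zero and one, can be sketched as follows. The window length, threshold, and all names are assumptions, not the reference's implementation.

```python
# Hypothetical sketch: count sightings of a URL within a sliding time
# window and map the count to a score in [0, 1]. Window, threshold, and
# names are illustrative assumptions.
from collections import deque

class UrlSpamScorer:
    def __init__(self, window_seconds: float = 3600.0, threshold: int = 20):
        self.window = window_seconds
        self.threshold = threshold
        self.sightings = {}  # normalized URL -> deque of timestamps

    def record(self, url: str, now: float) -> float:
        """Record one sighting of `url` at time `now` and return a spam
        score in [0, 1] based on sightings remaining within the window."""
        q = self.sightings.setdefault(url, deque())
        q.append(now)
        # Drop sightings older than the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return min(1.0, len(q) / self.threshold)

scorer = UrlSpamScorer(window_seconds=60.0, threshold=4)
print(scorer.record("http://scam.test", 0.0))    # 0.25
print(scorer.record("http://scam.test", 10.0))   # 0.5
print(scorer.record("http://scam.test", 100.0))  # earlier sightings expired -> 0.25
```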
Sandke further discloses that the count of messages in which the URL is seen within a given timeframe is computed and, accordingly, a score/probability between zero and one is generated to reflect a spam score. However, the combination of Yoon and Sandke fails to explicitly teach a guidance unit for transmitting the number of times of transmitting the message and the probability to the user terminal (emphasis in italic-bold), where both the message count and the probability/score are transmitted.
Kim further teaches a guidance unit for transmitting the number of times of transmitting the message and the probability to the user terminal ([0301] "... as a countermeasure related to the email security reporting system, warning messages, email risk score determination criteria, email security reports for users, and a status board are provided. The warning message warns users about the risk of targeted email attacks using the terms such as ‘look-alike domain’ and ‘forged header’ together with the subject of the email in the mailbox so that the users may recognize the type of malicious email that they have received. In addition, the security manager may be configured to deliver or not to deliver suspicious emails, and warning words/phrases in the message may be set and managed by group. The email risk score determination criteria provide criteria that allow users to easily and intuitively determine or recognize the risk of an email, and may be divided into methods for calculating email risk scores and implementation conditions. The email security report (for users) is displayed as a notification before the user opens the email to indicate the risk of the received email. To this end, records previously received from email addresses confirmed as a look-alike domain (received history), changes in the current delivery route and delivery route history of email transmission (delivery route), status of sender header forgery, number of malicious URLs detected (URL inspection), and risk levels of look-alike domains (e.g., TLD, low, high, and risky) are displayed in the email security report. The status board provides an overview of the real-time technical status of inbound and outbound emails that affect the operation on selected technical objects, such as the operation status, configuration, and operating environment. 
Here, fully functioning panels for real-time information such as the total number and status of emails, reasons for failure of inbound and outbound emails, the number of targeted email attacks received, and the like are shown to be controlled by the manager to recognize the risk and disruption of email security").
Yoon, Sandke, and Kim are analogous art, directed to detecting suspicious URLs based on characteristics similar to those found in known spam indicators. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon to incorporate the teachings of Kim to transmit the probability based on the number of times of transmitting the message. Doing so would effectively block targeted email attacks and provide proper inspection based on the transmission of messages (Kim [0013] “Accordingly, the present invention may provide an email security system device for effectively blocking and responding to targeted email attacks and an operation method thereof, which can effectively block the targeted email attacks and provide diagnosis reporting and an appropriate response process through a stepwise targeted email attack threat inspection process for inbound and outbound mails”).
Regarding Claim 2, Yoon in view of Sandke and Kim discloses all features of claim 1 as outlined above.
wherein the automatic determination service providing server further includes a feature identification unit for storing, when content corresponding to features transmitted to an individual is included in the message content, and a message including the features transmitted to the individual is transmitted to a plurality of users, the message as a URL message ([0062] “…the spam recognition server 50 includes a communication unit 610, a storage unit 630, and a control unit 650. The control unit 650 includes a telephone number search unit 651, a content analysis unit 653, a site analysis unit 655, a spam recognition unit 657, and an information providing unit 659”; [0064] “The storage unit 630 includes a spam inquiry storage unit 632 that stores data related to a spam inquiry such as a calling number, a message content, an attachment file, and a URL received from the reception terminals 30, a phone number information storage unit 634 for storing spam phone number information, and a comparison data storage unit 636 for storing comparison data for analyzing whether or not spam, caller information, and spam type are displayed as shown in FIG. 8”), further disclosed in [0040-0041], where terminals 30 send to spam recognition server 50 inquiries including the message content and URL based on the plurality of messages transmitted to a plurality of users, to determine if the message is a scam.
Yoon does not explicitly disclose the below limitation.
Sandke further teaches counting the message to increase the number of times whenever a check is requested ([0085] “The method can also include steps such as updating, at 614, URL counts, updating, at 616, a count of messages the normalized URL appeared in, updating, at 618, a total email threat protection system score for the normalized URL, updating, at 620, a total IP reputation score for the normalized URL, updating, at 622, a set of customers the normalized URL has appeared in, as well as combinations thereof”), as further disclosed in Sandke [0045, 0061], where System 110 checks whether the message includes a URL, transmitter information, transmission time, and content of the message.
Yoon, Sandke, and Kim are analogous art, directed to detecting suspicious URLs based on characteristics similar to those found in known spam indicators. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon to incorporate the teachings of Sandke to determine the probability based on the number of times of transmitting the message. Doing so would determine whether the message is spam based on a threshold analysis and whether it requires further scrutiny (Sandke [0019] “A protected user's (e.g., an email user who has emails that are being analyzed using the present technology) email patterns may be analyzed and used to build a behavioral history of that specific user and to determine which types of email are suspicious and require further scrutiny”).
Regarding claim 3, Yoon in view of Sandke and Kim teaches the system of claim 1, as outlined above.
wherein the automatic determination service providing server further includes a site registration unit for registering the URL in the URL message as a scam site when the previously stored URL message satisfies preset conditions (further disclosed in Yoon [0051], the same URL exists in the messages received by many people; [0049] “Here, the spam phone number information can be registered and stored in the spam recognition server 50 by a method such as input by a manager, acquisition of a spam phone number disclosed on an Internet site, or the like”; [0059] “In summary, the spam recognition server 50 uses the at least one of the origination number, the message content, the attachment file, and the URL included in the spam inquiry received from the plurality of the reception terminals 30”; [0053] “Otherwise, if it is determined as spam, the spam recognition server 50 analyzes the site connected by the message content, the attachment file, and the URL, and analyzes the sender information and / or the spam type (S445)”), further disclosed in [0040-0041], where the URL is compared to a previously registered URL to determine and register the message as a spam message.
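For illustration only, Yoon's threshold-based determination (S330, comparing the number of matching spam inquiries against a preset k) applied to registering URLs as scam sites can be sketched as follows. The function name and the specific preset condition (count greater than k) are illustrative assumptions.

```python
# Hypothetical sketch: register a URL as a scam site once the count of
# spam inquiries containing that URL exceeds a preset threshold k, in the
# spirit of Yoon's k-threshold check. Names and condition are assumptions.

def register_scam_sites(url_counts: dict[str, int], k: int) -> set[str]:
    """Return the URLs whose inquiry count satisfies the preset condition
    (count greater than k), i.e. the URLs to register as scam sites."""
    return {url for url, count in url_counts.items() if count > k}

counts = {"http://a.test": 12, "http://b.test": 2, "http://c.test": 7}
print(sorted(register_scam_sites(counts, k=5)))
# ['http://a.test', 'http://c.test']
```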
Regarding claim 4, Yoon in view of Sandke and Kim teaches the system of claim 1, as outlined above.
Yoon does not disclose the below limitations.
Sandke further teaches an organization linking unit for transferring the URL in the URL message to at least one related organization server ([0088] “In an example sandboxing method, the method includes queuing, at 624, one or more URLs out to the sandbox environment. After the URLs have been thoroughly scanned for malicious content, by reviewing the linked content at which the URL is pointed, the results of the sandboxing are recorded by the system 110. A URIBL can be updated if URLs are added to the condemned list. The system 110 can also track results for normalized URLs by tracking condemned URLs in their normalized format and place the normalized URL in the URIBL”), i.e., transferring the URLs by placing them in a queue to the sandbox environment of System 110, i.e., a related server,
when a message the same as the previously stored URL message is checked a preset number of times ([0086] "In some embodiments, an aggregate number of customers that have received the URL can be counted and reported. The system can also compute aggregate statistics based on data collected on a URL over some selected time period, such as a recent time period. Additional statistics about the URL can also be tracked by the system 110 such as number of hops, geographical routing, and so forth"). The examiner notes that counting/updating/aggregating the number of the URLs is the result of recognizing that the URL was previously stored and seen before; system 110 determines whether the message includes a URL, transmitter information, a transmission time, and message content.
Yoon in view of Sandke and Kim are analogous art in detecting suspicious URLs based on characteristics similar to those found in known spam indicators. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon to incorporate the teachings of Sandke to further discard the message from the user terminal after the probability exceeds a threshold value. Doing so would determine whether the message is spam based on a threshold analysis and whether it requires further scrutiny (Sandke [0019] "A protected user's (e.g., an email user who has emails that are being analyzed using the present technology) email patterns may be analyzed and used to build a behavioral history of that specific user and to determine which types of email are suspicious and require further scrutiny").
Regarding claim 5, Yoon in view of Sandke and Kim teaches the method of claim 1, as outlined above.
Yoon teaches that the system according to claim 1 further includes blocking an originating number of the message ([0005] "As a result, spam messages such as a spam message such as a fraudulent message to a mobile communication terminal user in various intelligent ways as well as an advertising spam message for advertisement are increasingly added to users. As a countermeasure against this, a method of automatically blocking reception of a representative number of spam messages starting with 060, 700, etc. is used…"; [0043] "Thereafter, the spam recognition server 50 determines whether the number of spam inquiries searched according to the search result is larger than k (S330). If the number of the searched spam messages is less than or equal to k, (S350), and provides a result of the spam decision to the called terminal 30 (S360)"; [0053] "Otherwise, if it is determined as spam, the spam recognition server 50 analyzes the site connected by the message content, the attachment file, and the URL, and analyzes the sender information and / or the spam type (S445)").
Yoon does not disclose the below limitation.
Sandke teaches a click blocking unit for permanently deleting the message from the user terminal when the probability of the received message being a scam message exceeds a preset threshold value ([0069] "In some embodiments, the method includes placing, at 506, the URL in a sandbox if the message has a targeted attack preparation score that exceeds the targeted attack preparation threshold"; [0078] "In a further example, the system 110 can read a current count for a URL or domain. The system 110 then computes a current limit per domain. In one example, the system is configured with a predetermined value for the limit. In another example, the system 110 will decay the limit based on how close to the daily limit the number of predictive sandboxes performed is. To be sure, if the current count exceeds the limit, the URL may be discarded from sandboxing"), further disclosed in Sandke [0064], [0068], where the sandbox will discard the URL once the score, which corresponds to the probability, exceeds a threshold value.
Yoon in view of Sandke and Kim are analogous art in detecting suspicious URLs based on characteristics similar to those found in known spam indicators. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon to incorporate the teachings of Sandke to further discard the message from the user terminal after the probability exceeds a threshold value. Doing so would determine whether the message is spam based on a threshold analysis and whether it requires further scrutiny (Sandke [0019] "A protected user's (e.g., an email user who has emails that are being analyzed using the present technology) email patterns may be analyzed and used to build a behavioral history of that specific user and to determine which types of email are suspicious and require further scrutiny").
Conclusion
The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure:
Jakobsson (US 10277628 B1) discloses a system for detecting phishing-attempt communications that incorporate human readable content, See FIG. 20.
Yasuda (US 20050188036 A1) discloses an e-mail system for detecting whether a received email is an unsolicited email or a possible unsolicited email, See FIG. 2.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIVIAN D. HO whose telephone number is (571)272-9957. The examiner can normally be reached M-F 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIVIAN D HO/Examiner, Art Unit 2497
/BASSAM A NOAMAN/Primary Examiner, Art Unit 2497