DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action on the merits is in response to the application filed on 02/07/2025.
Claims 1-2, 6-9, 11-12, and 16-19 are currently pending and have been examined.
Response to Arguments
3. Applicant's arguments filed 02/07/2025 with respect to the rejection of claim(s) 1-2, 6-9, 11-12, and 16-19 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made.
The applicant states that, without acquiescing to the assertions in the Office Action, claim 1 has been amended to generally correspond to the amendments discussed during the interview. Accordingly, it is respectfully submitted that the combination of the asserted documents fails to disclose or render obvious at least "receiving results of blocked emails and communications from a plurality of data streaming sources, wherein each of the blocked emails and communications is unsolicited and is directed to a specific individual, wherein the blocked communications include each of a text message, a voicemail and a social media message" as generally recited in amended claim 1. Thus, at least for the reasons discussed and as indicated by the Examiner during the interview, Shraim, LaRosa and Ledford fail to disclose or render obvious every feature of amended claim 1.
The examiner states that Mays describes “receiving results of blocked emails and communications from a plurality of data streaming sources, wherein each of the blocked emails or communications is unsolicited and is directed to a specific individual, wherein the blocked communications include each of a text message, a voicemail and a social media message” in the Abstract Section, “Embodiments of the present invention include systems and methods for handling large numbers of messages of one or more message types. In embodiments, the scalable messaging system reads from lists of recipient addresses and template messages, merges them into messages, removes known blacklisted addresses, facilitates the rapid delivery of these messages via a dynamic queuing and dynamic message server deployment, and stores errors and other statistics. In embodiments, the scalable messaging system may include messaging system instances at different locations.; and Column 4/line 11, FIG. 1 graphically depicts a messaging system according to embodiments of the present invention. Illustrated in FIG. 1 is a messaging system 105 for delivering messages of one or more message format types (e.g., email, SMS, voicemail, facsimile, social media, etc.) from a message campaign initiator (e.g., customer 135) to one or more groups of message recipients 145. It shall be noted that references to recipients 145 does not mean the same set of recipients in each case. Rather, recipients 145 are representative of groups of recipients that may or may not overlap (e.g., a recipient may be an email recipient and may also be a social media recipient).
It shall also be noted that customer 135 is representative of numerous customers that may utilize one or more services provided by messaging system 105…the messaging system 105 comprises one or more customer service servers 125, one or more application programming interface (API) servers 130, a message delivery system 110, a data layer system 120, and one or more dynamic server managers 124.; and Column 5/line 41, Whether using a customer service server 125, API server 130, or both, a customer transacts with the messaging system 105. In embodiments, these transactions relate to, but are not limited to, set-up and command of electronic messaging plans or campaigns and gathering data about one or more messaging campaigns. In embodiments a messaging plan or campaign information may include one or more of the following information: a list of address objects (recipient address information), information about who the message should appear as being from, information about who a recipient should respond to (if desired), and a generic or template message. In embodiments, the customer may also provide other data and/or metadata. An example of the other data or metadata that may be supplied is recipient-specific information (such as names) that will be merged into an appropriate variable field in a template message so as to personalize the message for the recipients. As part of a messaging campaign or campaigns, a customer may provide preference information, such as start/stop times for the campaign, preferred times when the messages should be sent (e.g., mornings, afternoons, etc.), formatting preferences, customization preference (e.g., logos, graphics, etc.), and analytics preference (e.g., what messages were sent, who received the messages, who did not, how long did a recipient review a message, error information, unusable addressees, etc.). 
In embodiments, messaging system 105 may offer different levels of service for customers, and a customer may select its level of service.” Under the broadest reasonable interpretation, the limitation that the blocked emails or communications are unsolicited and directed to a specific individual, wherein the blocked communications include each of a text message, a voicemail and a social media message, is interpreted as the scalable messaging system that reads from lists of recipient addresses and template messages, merges them into messages, removes known blacklisted addresses and stores errors and other statistics. The scalable messaging system may include messaging system instances at different locations, and the messaging system 105 delivers messages of one or more message format types (e.g., email, SMS, voicemail, facsimile, social media, etc.) from a message campaign initiator (e.g., customer 135) to one or more groups of message recipients 145 in the cited prior art. Modifying the system to include receiving results of blocked emails and communications from a plurality of data streaming sources, wherein each of the blocked emails or communications is unsolicited and is directed to a specific individual, wherein the blocked communications include each of a text message, a voicemail and a social media message, results in an improved invention because applying said technique ensures that the system can detect and block unsolicited messages in real time across multiple streams of communications, thus improving the overall performance of the invention. See remarks on pg. 10-13.
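For illustration only, the Mays pipeline paraphrased above (reading a recipient list, merging addresses into a template message, removing blacklisted addresses, and recording statistics) can be sketched as follows; all names and sample data are hypothetical and form no part of the Mays reference or the claims:

```python
# Hypothetical sketch of a Mays-style merge-and-filter pipeline; this is an
# editor's illustration, not code taken from the reference.
def build_campaign(recipients, template, blacklist):
    """recipients: dicts with 'address' and 'name'; blacklist: set of addresses."""
    messages, blocked = [], []
    for r in recipients:
        if r["address"] in blacklist:
            blocked.append(r["address"])  # removed and recorded as a statistic
            continue
        messages.append({
            "to": r["address"],
            "body": template.format(name=r["name"]),  # personalize the template
        })
    stats = {"sent": len(messages), "blocked": len(blocked)}
    return messages, stats

msgs, stats = build_campaign(
    [{"address": "a@example.com", "name": "Ann"},
     {"address": "b@example.com", "name": "Bob"}],
    "Hello {name}, your statement is ready.",
    {"b@example.com"},
)
```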
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 6-9, 11-12, and 16-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Subject Matter Eligibility Criteria – Step 1:
Claims 1-2 and 6-9 are directed to a system, and claims 11-12 and 16-19 are directed to a method (process). Therefore, these claims fall within the four statutory categories of invention.
Subject Matter Eligibility Criteria – Step 2A – Prong One:
Regarding Prong One of Step 2A of the Alice/Mayo test, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception or in other words whether a judicial exception is “set forth” or “described” in the claims. MPEP 2106.04(II)(A)(1). An “abstract idea” judicial exception is subject matter that falls within at least one of the following groups: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP 2106.04(a).
Representative independent claims 1 and 11 include limitations that recite at least one abstract idea.
Claims 1 and 11 are directed to the abstract idea of “receiving results of blocked emails and communications from a plurality of one or more data streaming sources, wherein each of the blocked emails and communications is unsolicited and is directed to a specific individual, wherein the blocked communications include each of a text message, a voicemail and a social media message; identifying a first wire instruction corresponding to a first individual from the blocked emails and communications; extracting routing numbers and corresponding account numbers of the first individual from the first wire instruction; determining whether the extracted routing numbers of the first individual have been used to attempt a payment for an unauthorized source; when the extracted routing numbers of the first individual have been determined to have been used to attempt the payment, publishing, via the real-time database, the extracted routing numbers of the first individual in real-time as topics onto a shared data bus; enhancing the extracted routing numbers of the first individual with payloads and one or more details, wherein the payloads include headers, key value pairs, day and time sent, underlying texts that evidence and support related to the blocked emails and communications directed to the specific individual, and wherein the payloads are used to identify similar spear phishing attempts; determining whether a second wire instruction contains data that matches the extracted routing numbers of the first individual stored in the real-time database; identifying one or more potential victims associated with the second wire instructions in addition to the first individual based on the match between the extracted routing numbers of the first individual included in the first wire instruction and the data contained in the second wire instruction, wherein the one or more potential victims and the first individual share same routing numbers but have differing account numbers, and 
the account numbers included the second wire instructions were not included in the blocked emails and communications; responsive to determining whether the second wire instruction contains the data that matches the extracted routing numbers of the first individual stored in the real-time database, determining whether the extracted routing numbers of the first individual were used in other unsolicited attempts for unauthorized sources; responsive to determining whether the second wire instruction contains the data that matches the extracted routing numbers of the first individual stored in the real-time database, alerting one or more associated users of a potential phishing attack, and alerting a bank of the one or more associated users of the potential phishing attack; performing analytics on the extracted routing numbers of the first individual for identifying other potentially targeted accounts; transmitting a notification to the identified one or more potential victims having routing numbers matching the extracted routing numbers of the first individual in addition to the specific individual, the notification including the evidence and support for the blocked emails and communications; and transmitting a warning communication to users of the other potentially targeted accounts.” Under its broadest reasonable interpretation, this claim is directed to managing financial transactions, identifying unauthorized payment attempts and alerting users when fraud is detected, and hence falls under organizing human activity (i.e., as fundamental economic practices–mitigating risk).
Dependent Claims:
Claims 2 and 12 recite: wherein the real-time database is configured to process workloads whose states are constantly changing; this further describes the abstract idea of organizing human activity (i.e., as fundamental economic practices).
Claims 6 and 16 recite: wherein the first wire instruction comprises security data requests; this further describes the abstract idea of organizing human activity (i.e., as fundamental economic practices).
Claims 7 and 17 recite: wherein the first wire instruction comprises personal identifiable information; this further describes the abstract idea of organizing human activity (i.e., as fundamental economic practices).
Claims 8 and 18 recite: wherein the plurality of streaming sources comprise data loss prevention systems; this further describes the abstract idea of organizing human activity (i.e., as fundamental economic practices).
Claims 9 and 19 recite: wherein the plurality of streaming sources comprise a filter for unsolicited and unwanted email communications; this further describes the abstract idea of organizing human activity (i.e., as fundamental economic practices).
Subject Matter Eligibility Criteria – Step 2A – Prong Two:
Claims 1 and 11 recite a generic computer as an additional element to the judicial exception in the preamble. Viewed individually and in combination, this additional element to the identified judicial exception of Step 2A.1 amounts to no more than mere instructions for managing financial transactions, identifying unauthorized payment attempts and alerting users when fraud is detected on a generic computer. Therefore, at Step 2A.2, this additional element does not integrate the abstract idea into a practical application. The additional elements of claims 1 and 11, considered both individually and as an ordered combination, do not amount to significantly more than the judicial exception because the additional element of a generic computer does no more than “[s]imply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry.” See MPEP 2106.05 (citing Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225 (2014)).
Therefore, claims 1 and 11 are found ineligible under 35 U.S.C. 101.
Step 2B:
Viewed as a whole, the system and method claims recite the concept of “organizing human activity” (i.e., as fundamental economic practices – mitigating risk), in which managing financial transactions, identifying unauthorized payment attempts and alerting users is performed by a generic computer. The claims do not, for example, purport to improve the functioning of the computer itself. Nor do they effect an improvement in any other technology or technical field. Instead, the claims at issue amount to nothing significantly more than an instruction to apply the abstract idea using some unspecified, generic computer. See Alice Corp. Pty. Ltd., 573 U.S. 208. Mere instructions to apply the exception using a generic computer component, and limitations to a particular field of use or technological environment, cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The use of a computer server merely to automate and/or implement the abstract idea cannot provide significantly more than the abstract idea itself (MPEP 2106.05(f) & (h)). Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. Claims 1-2, 8-9, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shraim et al. (US 9026507 B2), in view of LaRosa et al. (US 10116678 B2), in view of Ledford et al. (US 20170221066 A1), and further in view of Mays et al. (US 8972512 B2).
8. Regarding claims 1 and 11, Shraim discloses a system and method that addresses spear phishing attempts, the system comprising: an interface that receives blocked information from a plurality of accounts, a real-time database that stores and manages blocked information, a computer server that is coupled to the interface and the real-time database and further configured to perform steps, (Abstract, Various embodiments of the invention provide methods, systems and software for analyzing data. In particular embodiments, for example, a set of data about a web site may be analyzed to determine whether the web site is likely to be illegitimate (e.g., to be involved in a fraudulent scheme, such as a phishing scheme, Column 9/line 61, One or more email feeds 105d can provide additional data sources for the system 100. An email feed can be any source of email messages, including spam messages, as described above. (Indeed, a single incoming email message may be considered an email feed in accordance with some embodiments.) In some cases, for instance as described in more detail below, bait email addresses may be "seeded" or planted by embodiments of the invention, and/or these planted addresses can provide a source of email (i.e., an email feed). The system 100, therefore, can include an address planter 170, which is shown in detail with respect to FIG. 1B. The address planter 170 can include an email address generator 175. The address generator 175 can be in communication with a user interface 180 and/or one or more databases 185 (each of which may comprise a relational database and/or any other suitable storage mechanism). One such data store may comprise a database of userid information 185a. The userid information 185a can include a list of names, numbers and/or other identifiers that can be used to generate userids in accordance with embodiments of the invention. 
In some cases, the userid information 185a may be categorized (e.g., into first names, last names, modifiers, such as numbers or other characters, etc.). Another data store may comprise domain information 180. The database of domain information 180 may include a list of domains available for addresses. In many cases, these domains will be domains that are owned/managed by the operator of the address planter 170. In other cases, however, the domains might be managed by others, such as commercial and/or consumer ISPs, etc.; and Column 5/line 42, An exemplary method of analyzing a URL (which can be used to perform tests on a URL referencing a web site, as mentioned above) can comprise identifying a uniform resource locator ("URL") referencing a web site. The method may further comprise verifying that the web site referenced by the URL is active, analyzing information about a domain referenced by the URL, and/or analyzing the format of the URL. Based on a result of one or more of these verifications and analyses, the web site referenced by the URL may be categorized as a possibly fraudulent web site. Analyzing information about a domain referenced by the URL may comprise analyzing a web site associated with the URL and/or a server hosting such a web site).
determining whether the extracted routing numbers of the first individual have been used to attempt a payment for an unauthorized source; when the extracted routing numbers of the first individual have been determined to have been used to attempt the payment, publishing, via the real-time database, the extracted routing numbers of the first individual in real-time as topics onto a shared data bus; (Column 19/line 34, At block 404, one or more "safe accounts" may be created, e.g., in the customer's system. These safe accounts can be valid accounts (e.g., active credit card accounts) that do not correspond to any real account holder, and the safe accounts may be associated with fictitious personal information, including a valid (or apparently valid) identifier, such as an account number, social security number, credit card number, etc., that does not correspond to any real account holder but may be accepted as valid by the customer's system. The safe accounts thereafter can be monitored (block 406) for any transactions or access attempts. Because the safe accounts do not correspond to a real account holder, any transactions, access attempt, etc. (“account activity”) represent an illegitimate use. In addition, the safe account can be used to trace and/or track the use of the identifier, as described in more detail below, and/or to compile an evidentiary record of fraudulent activity. The method 400 can also include generating and/or planting bait email addresses, which can be used to attract spam and/or phish messages. In some cases, the bait addresses may be selected to be attractive to phishers (e.g., from attractive domains and/or using English proper names as the userids) and/or to be prioritized on harvested lists (e.g., having userids that begin with numbers, the letter a, or non-alphabetic characters, etc.). 
In this way, if a phisher sends a phish message to each of the addresses on a harvested list, there may be a higher probability that the bait addresses will receive the phish message relatively early in the mailing process, allowing the system to take responsive action before many actual recipients have had a chance to provide personal information in response to the phish; and Column 28/line 14, For example, with respect to an email message, the header information can be analyzed (block 525) to determine, for instance, whether the source and/or destination information in the header has been forged. If so, it is relatively more likely that the email is a phish. As another example, the routing information in the message header may be analyzed to determine whether the message originated from and/or was routed through a suspect domain, again enhancing the likelihood that the message is a phish. Any text, including without limitation the body of an email message (i.e., the body field of a data file) can then be analyzed (block 530). The analysis of the body can include searching the body for blacklisted and/or whitelisted terms; merely by way of example, a blacklisted term might include terms commonly found in phish messages, such as “free trip”; terms indicating that the message refers to personal information, such as “credit card,”; and Column 16/line 59, The master computer 210 can include (and/or be in communication with) a plurality of data sources, including without limitation the data sources 105 described above. Other data sources may be used as well. For example, the master computer can comprise an evidence database 230 and/or a database of “safe data,” 235, which can be used to generate and/or store bait email addresses and/or personal information for one or more fictitious (or real) identities, for use as discussed in detail below. 
(As used herein, the term “database” should be interpreted broadly to include any means of storing data, including traditional database management software, operating system file systems, and/or the like.)
determining whether a second wire instruction contains data that matches the extracted routing numbers of the first individual stored in the real-time database;
(Column 31/line 18, Often, a scammer will move a fraudulent web site (and/or pages from that site) among various servers in an attempt to perform multiple scams and/or avoid detection/prosecution. Further, some scammers purchase (or otherwise acquire) "turnkey" scamming kits comprising pre-built web pages/sites that can be hosted on a server to perform a scam. It follows, therefore, that it can be useful to provide an efficient way to compare URLs and/or web sites from a plurality of investigations. Merely by way of example, in some cases, the method 560 can include generating and/or storing (e.g., in a database, file system, etc.) a checksum and/or hash value associated with the URL and/or page(s) referenced by the URL (e.g., the page directly referenced by the URL and/or the pages crawled in block 580) (block 590). Merely by way of example, a hashing algorithm may be used to calculate a value for the URL string and/or for the contents of the referenced page(s). Alternatively, a checksum value may be calculated for the contents of these page(s). Either (or both) of these procedures may be used to provide an efficient "snapshot" of a URL, web page and/or web site. (In some cases, a discrete checksum/hash may be generated for a URL, an entire site and/or individual pages from that site). The checksum/hash value(s) may then be compared against other such values (which may be stored, as described above, in a database, file system, etc.) calculated for URLs/web sites investigated previously (block 592). If the checksum/hash value matches the value for a web site previously found to be fraudulent, the odds are good that the present site is fraudulent as well. Returning to FIG. 5A, information about the domain to which the URL resolves may be analyzed (block 540), either as a separate step or as a part of the URL analysis. Further, in determining whether a domain is suspicious, the domain may be compared to any brand information contained in the body of the message. 
For example, if the body of the message includes the brand name of a customer, and the URL resolves to a domain different than a domain owned by and/or associated with that customer, the URL can be considered suspicious. Upon the completion of the analysis (of any portion of a message, as discussed above, and/or of the message as a whole), the data file/message may, in some embodiments be assigned a score (block 545). Assigning a score to the data file/message can provide a quantitative measurement of the likelihood that the message is a phish, and in such embodiments, a score can be compared to a threshold score, such that a score meeting a particular threshold can result in further analysis and/or investigation, while a score not meeting that threshold can indicate a judgment that the email is not a probable phish. In some embodiments, the overall analysis of the message can result in the assignment of a single score. In other embodiments, each type of analysis (e.g., the analysis of the header, of the body, of the URL and/or of the associated domain) can result in the assignment of a separate score, and/or these separate scores can be consolidated to form a composite score that can be assigned to the message. Moreover, the individual scores for each type of analysis may themselves be composite scores. Merely by way of example, each of the tests described with respect to FIG. 5B (as well, perhaps as other tests) may result in a score, and the scores of these tests may be consolidated to form a composite URL score. In further embodiments, the analysis of each data file or email message can be performed in hierarchical fashion: the header information may be analyzed and scored, and only if that score meets a certain threshold will the correlation engine proceed to analyze the body. If not, the message is considered not to be a phish and the analysis ends. 
Likewise, only if the score resulting from the body analysis reaches a certain threshold will the URL be analyzed, etc.)).
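The checksum/hash comparison Shraim describes in the passage above (fingerprinting a URL and its page contents, then comparing the values against those stored for previously investigated sites) can be illustrated with a minimal sketch; the function names and sample data below are hypothetical and are not drawn from the reference:

```python
import hashlib

# Hypothetical sketch of the checksum/hash comparison described in Shraim:
# digest the URL string and the page contents, then check the digests against
# values stored for sites previously found to be fraudulent.
def fingerprint(url, page_contents):
    return {
        "url": hashlib.sha256(url.encode()).hexdigest(),
        "page": hashlib.sha256(page_contents.encode()).hexdigest(),
    }

def matches_known_fraud(fp, known_digests):
    # A match on either digest suggests a relocated copy of a known scam page.
    return fp["url"] in known_digests or fp["page"] in known_digests

# Sample data: one previously investigated fraudulent page.
known = {hashlib.sha256(b"<html>fake login page</html>").hexdigest()}
fp = fingerprint("http://scam.example/login", "<html>fake login page</html>")
```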
alerting one or more associated users of a potential phishing attack, and alerting a bank of the one or more associated users of the potential phishing attack; performing analytics on the extracted routing numbers of the first individual for identifying other potentially targeted accounts; transmitting a notification to the identified one or more potential victims having routing numbers matching the extracted routing numbers of the first individual in addition to the specific individual, the notification including the evidence and support for the blocked emails and communications, and transmitting a warning communication to users of the other potentially targeted accounts, (Column 14/line 47, The event manager can also prepare an automated report 145 (and/or cause another process, such as a reporting module (not shown) to generate a report), which may be analyzed by an additional technician at the monitoring center 130 (or any other location, for that matter), for the event; the report can include a summary of the investigation and/or any information obtained by the investigation. In some embodiments, the process may be completely automated, so that no human analysis is necessary. If desired (and perhaps as indicated by the customer policy 115), the event manager 135 can automatically create a customer notification 150 informing the affected customer of the event. The customer notification 150 can comprise some (or all) of the information from the report 145. Alternatively, the customer notification 150 can merely notify the customer of an event (e.g., via email, telephone, pager, etc.) allowing a customer to access a copy of the report (e.g., via a web browser, client application, etc.). Customers may also view events of interest to them using a portal, such as a dedicated web site that shows events involving that customer (e.g., where the event involves a fraud using the customer's trademarks, products, business identity, etc.).
If the investigation 140 reveals that the server referenced by the URL is involved in a fraudulent attempt to collect personal information, the technician may initiate an interdiction response 155 (also referred to herein as a "technical response"). (Alternatively, the event manager 135 could be configured to initiate a response automatically without intervention by the technician). Depending on the circumstances and the embodiment, a variety of responses could be appropriate. For instance, those skilled in the art will recognize that in some cases, a server can be compromised (i.e., "hacked"), in which case the server is executing applications and/or providing services not under the control of the operator of the server. (As used in this context, the term "operator" means an entity that owns, maintains and/or otherwise is responsible for the server.) If the investigation 140 reveals that the server appears to be compromised, such that the operator of the server is merely an unwitting victim and not a participant in the fraudulent scheme, the appropriate response could simply comprise informing the operator of the server that the server has been compromised, and perhaps explaining how to repair any vulnerabilities that allowed the compromise.). Examiner interprets the term the notification including the evidence and support for the blocked emails and communications, and transmitting a warning communication to users of the other potentially targeted accounts as analogous to The event manager can also prepare an automated report… the report can include a summary of the investigation and/or any information obtained by the investigation…the event manager 135 can automatically create a customer notification 150 informing the affected customer of the event.
Customers may also view events of interest to them using a portal, such as a dedicated web site that shows events involving that customer (e.g., where the event involves a fraud using the customer's trademarks, products, business identity, etc.) in the cited prior art.
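The claimed matching step that the rejection maps onto these passages, comparing routing numbers extracted from a first wire instruction against data in later instructions, can be sketched as follows; the data shapes and routing numbers are illustrative assumptions only, not disclosure from the references:

```python
# Hypothetical sketch of the claimed matching step: routing numbers extracted
# from a first wire instruction are compared against later instructions, and
# entries sharing a routing number but carrying a different account number are
# flagged as potential additional victims. Data shapes are illustrative only.
def find_potential_victims(first_instruction, other_instructions):
    flagged = []
    for inst in other_instructions:
        if (inst["routing"] == first_instruction["routing"]
                and inst["account"] != first_instruction["account"]):
            flagged.append(inst)
    return flagged

first = {"routing": "021000021", "account": "111222333"}
later = [
    {"routing": "021000021", "account": "444555666"},  # same routing, new account
    {"routing": "026009593", "account": "777888999"},  # unrelated routing number
]
victims = find_potential_victims(first, later)
```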
Shraim does not explicitly disclose identifying a first wire instruction corresponding to a first individual from the blocked emails and communications; extracting routing numbers and corresponding account numbers of the first individual from the first wire instruction.
However, LaRosa teaches identifying a first wire instruction corresponding to a first individual from the blocked emails or communications; extracting routing numbers and corresponding account numbers of the first individual from the first wire instruction; (Column 1/line 33, Criminals routinely trick people into communicating with another outside party who is privy to the conversations and social relationships that exist in a corporate environment. Here, the external criminal enters into an in-progress communication, or starts a new conversation with some context of the social relationship, in order to convince the person inside the target company to take an action that will benefit the criminal. This could be a wire transfer or to change a bank account number on a pending payment. Attackers analyze organizations to identify users who process financial routing instructions to facilitate payments as part of their positions, e.g., CFOs or those working in accounts payable, accounts receivable, procurement, etc. Attackers “phish” these users to infect their computers with malware, in some cases to gain access to their email inbox, to identify in progress financial transactions. Once attackers have the transactions identified, the criminals will create “similar” email addresses and domains in an attempt to fool their targets. For example, where the actual email address is: jim.weeble@hesiercorp.com, the fake email is presented as: jim.weeble@heseircorp.com. (Note the transposed letters in the latter domain name.) After the domains are created, the criminal will set up rules to auto-forward the real email address to the fake email address to intercept any real communications. 
The fake user will then “proxy” the communications from the real user through the fake email address but will change the payment instructions when the time comes for a funds transfer.; and Column 7/line 4, The email server “queue and modify” capabilities are configured to allow for integration with an email content analytics engine to implement the content acquisition capability to create the social network graph analytics from email messages flowing there through. In addition, the flow of email is controlled in the event a message is identified as fraudulent and needs to be blocked or held for review before passing to the intended recipient, as represented in FIGS. 5 and 6; and Column 6/line 49, A third aspect requiring detection in this scenario is the “ask.” Once the outsider has established communications and gained the trust of the insider, the last step is getting the insider to take action on the objective controlled by the insider. This could be to transfer documents, divulge intelligence, or to facilitate a change to a payment account for a wire transfer. In the scenarios described above, the system will use a scoring engine to assign weighted values to aspects of the detection engines. This includes the social network profiling engines and the linguistics profiling engines. Ultimately, a combination of the scoring algorithms in conjunction with each other to detect and prevent fraudulent communications or communication from an inside threat will be based on the learned behaviors from the social networks, including what is communicated in the social networks. A risk level acceptable to an organization before electronic messages are blocked, or required to be reviewed by internal investigators, is then determined. In one embodiment, a system is running an email server software and is configured to be positioned “inline” in order to monitor email communications. 
The email server “queue and modify” capabilities are configured to allow for integration with an email content analytics engine to implement the content acquisition capability to create the social network graph analytics from email messages flowing there through. In addition, the flow of email is controlled in the event a message is identified as fraudulent and needs to be blocked or held for review before passing to the intended recipient, as represented in FIGS. 5 and 6. Thus, as presented in FIG. 5, inbound email is analyzed to extract communications components, e.g., header and body information. The header information is analyzed for element extraction and data-gathering. The message body is extracted for Ngram analytics; and Column 5/line 59, As the text is extracted, Ngram creation occurs creating a labeled and graphed relationship of the communications directionally from the sender to the recipient of both the spoken communications and any attachments…As the Ngrams are created, a corpus database is consulted of pre-built terms and phrases supplied both by the users of the system and the makers of the system using statistical and comparative analysis of the terms and the distance of the terms from similar terms, e.g., applying the Levenshtein distance algorithm, to proactively predict the types of communications these terms are related to so they can be contextually labeled, for example, money transfer, mergers and acquisitions, product development, etc. As the linguistics profiling occurs and labels and weights are assigned to the linguistics profiles based on the importance of certain phrases and terms, this increases the relative importance of the types of linguistic communications occurring in the conversations to be used in the scoring process for prevention of criminal activity.)
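The look-alike-domain detection described in the passage quoted above (transposed letters measured with the Levenshtein distance algorithm) can be illustrated with a minimal sketch. This is an illustrative example only, not LaRosa's actual implementation; the function names and the distance threshold are assumptions, and the domain strings are taken from the quoted passage.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, real_domain: str, threshold: int = 2) -> bool:
    """Close to, but not identical to, a known-good domain."""
    d = levenshtein(sender_domain, real_domain)
    return 0 < d <= threshold

# The transposed-letter example from the quoted passage: an adjacent
# swap counts as two edits under plain Levenshtein distance.
print(is_lookalike("heseircorp.com", "hesiercorp.com"))  # True
print(is_lookalike("hesiercorp.com", "hesiercorp.com"))  # False
```

A production system would compare inbound sender domains against the organization's known correspondent domains and feed near-matches into the scoring engine described in the citation.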
One of ordinary skill in the art would have recognized that applying the known technique of LaRosa to the known invention of Shraim would have yielded predictable results and resulted in an improved invention. It would have been recognized that the application of the technique would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such wire instruction features into a similar invention. Further, it would have been recognized by those of ordinary skill in the art that modifying the system to include identifying a first wire instruction corresponding to a first individual from the blocked emails or communications; extracting routing numbers and corresponding account numbers of the first individual from the first wire instruction results in an improved invention because applying said technique ensures that the system can flag or monitor potentially fraudulent transactions that contain wire instructions, thus improving the overall performance of the invention.
Shraim does not explicitly disclose enhancing the extracted routing numbers of the first individual with payloads and one or more details, wherein the payloads include headers, key value pairs, day and time sent, underlying texts that evidence and support related to the blocked emails and communications directed to the specific individual, and wherein the payloads are used to identify similar spear phishing attempts.
However, LaRosa teaches enhancing the extracted routing numbers of the first individual with payloads and one or more details, wherein the payloads include headers, key value pairs, day and time sent, underlying texts that evidence and support related to the blocked emails and communications directed to the specific individual, and wherein the payloads are used to identify similar spear phishing attempts; (Column 1/ line 33, Criminals routinely trick people into communicating with another outside party who is privy to the conversations and social relationships that exist in a corporate environment. Here, the external criminal enters into an in-progress communication, or starts a new conversation with some context of the social relationship, in order to convince the person inside the target company to take an action that will benefit the criminal. This could be a wire transfer or to change a bank account number on a pending payment.; and Column 7/line 26, A connector to Active Directory is built to dynamically pull the following information from Active Directory as email messages are received in order to add the following additional attributes to the nodes labeled Email_Address on the above-referenced graph schema in FIG. 1. This will be used to assist in the analysis of quarantined email messages that have been flagged and stopped for investigations. The connector is dynamically adjusted allowing for the mapping of different AD fields to the input fields of the social graph as deemed necessary. Additional custom fields can also be added as required if additional AD attributes would be valuable to include, for example: Last Name, First Name, Title, Group; and Column 8/line 30, The DNS extract engine will take the extracted elements from the Header_Extract engine's routine and will use the DNS resolver to query for specific record information used in the learning engine and risk classification process. a. 
SPF Record: Resolve the SPF DNS record for the sender's DNS domain. b. DMARC Record: Resolve the DMARC DNS record for the sender's DNS domain. c. DKIM Record: Resolve the DKIM record for the sender's DNS domain. d. Sender's Domain Registration Lifetime: Lookup the date the DNS domain record was first established.; and Column 9/line 16, The message contents of the email communications contain communications attributes, will be profiled, learned, and stored to create the linguistic profile of the communications. In order to create a parameterized breakdown of constructs used in communications, the message body will be analyzed and broken into Uni-Grams, Bi-Grams, Tri-Grams, Quad-Grams, and Quint Grams. This data will be stored in a graph database for predictive modeling. When the message is extracted, it will be run through the following routines: Message Start: As message communications occur often people communicate using the same styles. Message openings will be identified and profiled per sender to identify how a user traditionally starts out his communications to use for Ngram predictive analytics comparisons… As the communications are classified, they will feed back to the social graph tagging messages with the predicted communications classifiers, e.g., financial transactions, supply chain activity, header/footer mismatches, abnormal increases in the use of formality, etc.)
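The Ngram breakdown described in the citation above (Uni-Grams through Quint-Grams over a message body, stored for predictive modeling) can be sketched as follows. This is a minimal illustration under assumed inputs, not LaRosa's actual routines; the whitespace tokenization and the sample sentence are assumptions for demonstration.

```python
from typing import Dict, List, Tuple

def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """All contiguous n-token windows over the token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def profile_message(body: str) -> Dict[int, List[Tuple[str, ...]]]:
    """Break a message body into 1- through 5-grams for linguistic profiling."""
    tokens = body.lower().split()
    return {n: ngrams(tokens, n) for n in range(1, 6)}

profile = profile_message("please change the wire transfer account number today")
print(profile[2][:3])  # first few bi-grams
```

In the cited system, such n-grams would then be compared against a corpus of pre-built terms and phrases so conversations can be contextually labeled (e.g., money transfer) and weighted for the scoring process.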
One of ordinary skill in the art would have recognized that applying the known technique of LaRosa to the known invention of Shraim would have yielded predictable results and resulted in an improved invention. It would have been recognized that the application of the technique would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such payloads and spear phishing attempts features into a similar invention. Further, it would have been recognized by those of ordinary skill in the art that modifying the system to include enhancing the extracted routing numbers of the first individual with payloads and one or more details, wherein the payloads include headers, key value pairs, day and time sent, underlying texts that evidence and support related to the blocked emails and communications directed to the specific individual, and wherein the payloads are used to identify similar spear phishing attempts, results in an improved invention because applying said technique ensures that the system can detect spear phishing patterns to prevent fraudulent activity more efficiently by enhancing the routing numbers with supporting evidence, thus improving the overall performance of the invention.
Shraim as modified does not explicitly disclose identifying one or more potential victims associated with the second wire instructions in addition to the first individual based on the match between the extracted routing numbers of the first individual included in the first wire instruction and the data contained in the second wire instruction, wherein the one or more potential victims and the first individual share same routing numbers but have differing account numbers, and the account numbers included in the second wire instructions were not included in the blocked emails and communications; responsive to determining whether the second wire instruction contains the data that matches the extracted routing numbers of the first individual stored in the real-time database, determining whether the extracted routing numbers of the first individual were used in other unsolicited attempts for unauthorized sources.
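The matching logic recited in this limitation (same routing number, differing account numbers) can be illustrated with a minimal sketch. All data structures, field names, and sample numbers below are hypothetical assumptions for illustration and are not drawn from the claims or the cited references.

```python
from typing import Dict, List

# Hypothetical "real-time database" of routing numbers extracted from the
# blocked emails and communications directed to the first individual.
extracted_routing_numbers = {"021000021"}

def find_potential_victims(second_wire: Dict[str, str],
                           known_accounts: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Flag accounts that share the routing number appearing in the second
    wire instruction but whose account numbers differ from it."""
    if second_wire["routing"] not in extracted_routing_numbers:
        return []  # no match against the stored extracted routing numbers
    return [acct for acct in known_accounts
            if acct["routing"] == second_wire["routing"]
            and acct["account"] != second_wire["account"]]

victims = find_potential_victims(
    {"routing": "021000021", "account": "111222333"},
    [{"routing": "021000021", "account": "444555666"},   # potential victim
     {"routing": "021000021", "account": "111222333"},   # the first individual
     {"routing": "026009593", "account": "777888999"}],  # different institution
)
print(len(victims))  # 1
```

The sketch returns only accounts at the same institution (shared routing number) whose account numbers did not appear in the blocked communications, mirroring the claimed identification of additional potential victims.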
However, Ledford teaches identifying one or more potential victims associated with the second wire instructions in addition to the first individual based on the match between the extracted routing numbers of the first individual included in the first wire instruction and the data contained in the second wire instruction, wherein the one or more potential victims and the first individual share same routing numbers but have differing account numbers, and the account numbers included in the second wire instructions were not included in the blocked emails and communications; responsive to determining whether the second wire instruction contains the data that matches the extracted routing numbers of the first individual stored in the real-time database, determining whether the extracted routing numbers of the first individual were used in other unsolicited attempts for unauthorized sources, (Para. 0114-0115, Network 130 also includes a core processing system 131, an administrative system 132, and a settlement system 133. Network 130 also can include one or more databases. Generally, core processing system 131 performs processes such as payment processing, message validation, duplicate message checking, transaction state management, acknowledgements, non-payment messaging processing, administrative message processing, and system message processing. The core processing system 131 also performs processes such as message routing, transaction routing, routing to a value added service system (to be described below), and end-point fraud management. The system 131 also performs processes such as system security processes, authorization and authentication, user access management, and fraud detection. The administrative system 132 performs administrative processes such as operations processing, participant onboarding, helpdesk and customer service, control room system monitoring, data management, conducting inquiries and investigations, and bank administration.
Additionally, system 132 performs reporting processes such as a dashboard, operations reporting, statistics reporting, performance reporting, pricing and billing, regulatory reporting, and internal audit reporting. System 132 also performs governance and rules management processing, maintains business rules, effects change management, participant management, audits, and risk management. The settlement service system 133 performs settlement processing to enable financial transactions to be settled and, in one embodiment, manages multilateral net settlement positions and/or non-multilateral net settlement positions (such as, e.g., on a transaction-by-transaction basis), settlement notifications, and transmits/receives data to/from at least one settlement facility 134. That facility 134 also can communicate with the FIs 111 and 120 by way of gateways 115 and interfaces 114; and Para. 0273, Payments can also be made between the government and a consumer (e.g., taxes, fines, license registration/renewals, emergency funding to disaster victims, vendor/supplier payments, social security, welfare, and/or other entitlements payments, student loans, etc.); and Para. 0133-0134, If the validation(s) performed in step 221 are determined to be successful (“Yes” in step 222), then control passes to step 223 where the network 130 updates at least one settlement position. In one example embodiment herein, the network 130 updates a multilateral net settlement position for at least one of the debtor FI 111 and the creditor FI 120. In another example embodiment herein, the network 130 updates the debtor FI 111's Position (i.e. by deducting the amount of credit transfer from the Debtor FI's available balance/position). Then, in step 224 the network 130 checks to determine whether a token service is being employed in the payment transaction (i.e., the network 130 detects whether a BANPC transaction or a non-BANPC transaction is present).
The presence of a BANPC transaction is detected based on a result of the network comparing the BRNPC of the consumer's payment request to a list of routing numbers designated by the network for BANPC transactions. If no match exists (“No” in step 225), then processing proceeds where the network 130 sends the payment transaction to the creditor FI 120 (step 226) to attempt effecting a payment based on the routing number and account number included in the payment transaction message (a pacs.008 message), whereafter the creditor FI 120 begins processing the payment transaction (step 227). That FI 120 determines whether to accept or reject the payment (step 243) based on predetermined criteria, or whether the payment is pending (e.g., perhaps owing to an anti-fraud, AML, or OFAC investigation, etc.). In the case where the payment is rejected (e.g., perhaps a relevant account is closed, a token is not recognized, etc.), the FI 120 assigns a reason for rejecting the payment (step 244) and then sends a status “RJCT” negative ACK (e.g., a pacs.002 message) to the network 110 in step 245. Control then passes to step 230 which performs a tokenizing procedure (although in other