DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
This is a Final Office Action in response to applicant's amendment filed on 12/5/2025.
Claims 1-2, 4, 13-14, 16, 20 are amended. Claims 1-20 are pending and considered.
The objections to claims 2, 4, 14, and 16 due to informalities have been withdrawn in light of applicant's amendments to the claims.
Response to Arguments
Applicant's arguments, see pages 11-13 of the Remarks filed 12/5/2025, with respect to independent claims 1, 13, and 20 rejected under 35 U.S.C. 102 over the prior art of record, have been fully considered and are persuasive in view of applicant's amendments to the claims. Therefore, the rejection of claims 1, 13, and 20 under 35 U.S.C. 102 has been withdrawn.
However, upon an updated search, the prior art, e.g., Gils, has been found to teach the amended limitation(s). The Examiner asserts that the combination of Cidon and Gils teaches all limitations recited in the amended independent claims. See the updated Claim Rejections under 35 U.S.C. 103 below.
Applicant is encouraged to incorporate innovative features into the independent claims to advance the case.
Examiner Notes
Examiner cites particular paragraphs, columns, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon et al. (US20190028509A1, hereinafter "Cidon"), in view of Gils et al. (US20240177512A1, hereinafter "Gils").
Regarding claim 1, Cidon teaches:
A method for mitigating correspondence fraud (Cidon discloses a system and method for fraud detection and prevention by utilizing an artificial intelligence engine that detects and blocks impersonation attacks in real time, see [Abstract]), the method comprising:
receiving, by communications hardware of a correspondence fraud mitigation system, candidate correspondence from a user device associated with a user (Fig. 2 at 202, and [0026] In the example of FIG. 2, the flowchart 200 starts at block 202, where all historical electronic messages of each individual user (i.e., correspondence associated with a user) in an entity on an electronic messaging system are collected automatically via an application programming interface (API) call to the electronic messaging system. And Fig. 1, Fraud Detection Component 108 ( i.e., communications hardware) in AI engine 104 (i.e., correspondence fraud mitigation system), and memories and processor in [0037]);
extracting, by correspondence analysis circuitry of the correspondence fraud mitigation system, one or more correspondence content data features from the candidate correspondence ([Abstract] The AI engine then analyzes the collected electronic messages for a plurality of features to identify unique communication patterns of users. And [0026] The flowchart 200 continues to block 204, where the collected electronic messages are analyzed to extract a plurality of features … Fig. 1 AI engine, i.e., correspondence analysis circuitry), [wherein the one or more correspondence content data features include one or more portions of image input data representative of the candidate correspondence received from the user device]; (see the teachings of Gils for limitations in brackets above and below)
determining, by the correspondence analysis circuitry and based on [the one or more portions of image input data] of the one or more correspondence content data features, a set of fraud patterns comprising one or more fraud patterns associated with the candidate correspondence ([Abstract] The AI engine then analyzes the collected electronic messages for a plurality of features to identify unique communication patterns of users. And [0026] The flowchart 200 continues to block 204, where the collected electronic messages are analyzed to extract a plurality of features to identify one or more unique communication patterns of each user in the entity on the electronic messaging system via AI-based classification);
determining, by the correspondence analysis circuitry and based on the set of fraud patterns, a fraud classification for the candidate correspondence ([0026] The flowchart 200 continues to block 208, where the incoming messages are identified with a high degree of accuracy as whether they are part of an impersonation attack (i.e., a fraud classification) based on the detected anomalous signals. And [0032] Once the inventory of historical electronic messages has been retrieved, the fraud detection component 108 of the AI engine 104 is configured to scan them to identify a plurality of various types of security threats to the electronic messaging system in the past. Such security threats include but are not limited to, viruses, malware, phishing emails, communication frauds and/or other types of impersonation attacks. Here, the fraud detection component 108 is configured to identify not only the communication frauds and/or other types of impersonation attacks (e.g., spear phishing attacks) and/or high-risk individuals through electronic message scanning as discussed above);
in an instance in which the fraud classification is not indicative of an authentic communication (Fig. 2 at 208, [0026] The flowchart 200 continues to block 208, where the incoming messages are identified with a high degree of accuracy as whether they are part of an impersonation attack based on the detected anomalous signals (i.e., not indicative of an authentic communication)), generating, by fraud deterrence circuitry of the correspondence fraud mitigation system and based on the fraud classification, a first set of fraud deterrence recommendations; and providing, by the communications hardware, the first set of fraud deterrence recommendations to the user device (Fig. 2 at 210 and 212, and [0026] The flowchart 200 continues to block 210, where the incoming messages are blocked and quarantined in real time if they are identified to be a part of the impersonation attack. The flowchart 200 ends at block 212, where an intended recipient of the incoming messages and/or an administrator of the electronic messaging system are notified of the attempted impersonation attack (i.e., fraud deterrence recommendations). Also Fig. 1, Fraud Detection Component 108 (i.e., fraud deterrence circuitry), Entity 114 (i.e., one or more computing devices)).
While Cidon teaches the main concept of the claimed invention for the detection of fraud in collected electronic messages/correspondence, Cidon does not specifically teach image data in the correspondence content data. In the same field of endeavor, Gils teaches:
wherein the one or more correspondence content data features include one or more portions of image input data representative of the candidate correspondence received from the user device, determining … based on the one or more portions of image input data of the one or more correspondence content data features, a set of fraud patterns … (Gils, discloses systems and methods for fraud detection during transactions using identity graphs on received document image, see [Abstract] “The method may also include extracting data associated with the document image to generate extracted data comprising image data extracted from the document image”… “The method may also include processing, by a set of machine learning models, corresponding subsets of the decoded image data used as input to each machine learning model of the set of machine learning models, and further by a second machine learning model that generates a final score indicative of whether the document image depicts a fraudulent identity document”. Also refer to Fig. 1, User system 130 (i.e., user device), Identity document fraud detection system (i.e., correspondence fraud mitigation system), and [0025] For example, the identity document may be imaged using a camera, scanner, or other imaging device associated with the merchant system 120 or user system 130. Alternatively, the merchant system 120 or user system 130 may also supply a previously captured image of the requested identity document. This image purporting to depict the authentic and valid identity document is transmitted to the service subsystem 114. 
And [0026] In embodiments, service subsystem 114 utilizes identity document fraud detection system 112 to determine if the identity document image is fraudulent by authenticating the identity document image and document depicted therein… For example, in the ensemble of machine learning models, a document imaging fraud model may be trained to detect using pixel data (e.g., to detect if a document depicted within the image has used the incorrect type of paper, is too reflective, is too flat, or other factors indicative of fraudulent image data), an email address fraud detection model that is trained to determine if data extracted from a document image is likely to match a supplied email address, an identity graph feature extractor that generates one or more features (e.g., based on a transaction history or past fraudulent activities, such as for example, features based on total numbers of past declines, past transaction totals, past accepted transactions, past fraud determinations).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Gils in the fraud detection and prevention of Cidon by using a document image in transactions for detecting whether an identity document depicted within the document image is fraudulent. This would have been obvious because a person having ordinary skill in the art would have been motivated to extract data associated with the document image in communication between the user device and the identity document fraud detection system for identity document fraud detection (Gils, [Abstract]).
Regarding claim 13, claim 13 is an apparatus claim that encompasses limitations similar to those of the method claim 1. Therefore, claim 13 is rejected with the same rationale and motivation as applied against claim 1. In addition, Cidon teaches an apparatus for a correspondence fraud mitigation system for mitigating correspondence fraud, the apparatus comprising: communications hardware (Cidon discloses a system and method for fraud detection and prevention by utilizing an artificial intelligence engine that detects and blocks impersonation attacks in real time, see [Abstract]. Also see Fig. 1).
Regarding claim 20, claim 20 is a computer program product claim that encompasses limitations similar to those of the method claim 1. Therefore, claim 20 is rejected with the same rationale and motivation as applied against claim 1. In addition, Cidon teaches a computer program product for mitigating correspondence fraud, the computer program product comprising at least one non-transitory computer-readable storage medium (Cidon discloses a system and method for fraud detection and prevention by utilizing an artificial intelligence engine that detects and blocks impersonation attacks in real time, see [Abstract]. And [0037] The disclosed methods may also be at least partially embodied in the form of tangible, non-transitory machine readable storage media encoded with computer program code).
Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils as applied above to claims 1 and 13, respectively, further in view of Benkreira et al. (US20210319527A1, hereinafter "Benkreira").
Regarding claim 2, and similarly claim 14, the Cidon-Gils combination teaches the method of claim 1 and the apparatus of claim 13, respectively.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Benkreira teaches:
wherein determining the set of fraud patterns further comprises: generating, by the correspondence analysis circuitry and based on one or more of the one or more correspondence content data features, a set of correspondence faults comprising one or more correspondence faults associated with the candidate correspondence; comparing, by the correspondence analysis circuitry, the set of correspondence faults to a set of known fraud patterns; and generating, by the correspondence analysis circuitry and based on the comparing of the set of correspondence faults to the set of known fraud patterns, the set of fraud patterns, wherein the set of fraud patterns is a subset of the set of known fraud patterns (Benkreira, discloses system and method for fraud detection during an application process to a client device based on behavioral information that indicates user behavior associated with inputting data into the application, see [Abstract]. And [0002] determining, by the system, a fraud score based on the device information and the behavior information, wherein the fraud score is determined using a machine learning model that identifies patterns from the device information and the behavior information. And [0025] Additionally, or alternatively, an ISP obtained from the device information may be compared against a list of ISPs that are known to be commonly correlated with fraud. If the ISP matches an ISP on the list (i.e., the ISP matched with ISP list is a subset of ISP list), the fraud platform may determine that the device information is indicative of fraud. In some implementations, the fraud platform may associate particular types of behavior information and/or device information with fraud based on processing behavior information and/or device information from other users).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Benkreira in the fraud detection and prevention of Cidon-Gils by comparing an ISP (Internet service provider) obtained from user behavior information to a list of ISPs that are known to be commonly correlated with fraud. This would have been obvious because a person having ordinary skill in the art would have been motivated to rely on a user's faulty behavior related to ISPs commonly correlated with fraud to detect fraud during an application process (Benkreira, [Abstract]).
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils-Benkreira as applied above to claims 2 and 14, respectively, further in view of Phatak et al. (US20220006899A1, hereinafter "Phatak").
Regarding claim 3, and similarly claim 15, the Cidon-Gils-Benkreira combination teaches the method of claim 2 and the apparatus of claim 14, respectively.
Benkreira further teaches: wherein generating the set of correspondence faults comprises: determining, by the correspondence analysis circuitry, a correspondence content data feature type for a correspondence content data feature (Benkreira, [0025] If the ISP matches an ISP on the list, the fraud platform may determine that the device information is indicative of fraud. In some implementations, the fraud platform may associate particular types of behavior information and/or device information with fraud based on processing behavior information and/or device information from other users);
The combination of Cidon-Gils-Benkreira does not appear to teach the following; however, in the same field of endeavor, Phatak teaches:
and executing, by the correspondence analysis circuitry and based on the correspondence content data feature type, one or more of a hyperlink evaluation routine, HyperText Markup Language (HTML) element evaluation routine, image metadata evaluation routine, page script evaluation routine, source code evaluation routine, or correspondence source address evaluation routine with respect to the candidate correspondence (Phatak, discloses system and method of fraud detection engine for detecting various types of fraud at a call center and a fraud importance engine for tailoring the fraud detection operations to relative importance of fraud events, [Abstract]. And [0098] the server 202 executes an attribute value updater 206, which comprises any number of source code functions instructing the server 202 to ingest attributes from the database 204. The attribute value updater 206 queries the database 204 to fetch the attributes used by the analytics server 202 to perform the various fraud importance and fraud detection operations).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Phatak in the fraud detection and prevention of Cidon-Gils-Benkreira by executing an attribute value updater to ingest attributes from the database. This would have been obvious because a person having ordinary skill in the art would have been motivated to perform various fraud importance and fraud detection operations (Phatak, [Abstract]).
Claims 4, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils as applied above to claims 1 and 13, respectively, further in view of Trivedi et al. (US20230262160A1, hereinafter "Trivedi").
Regarding claim 4, and similarly claim 16, the Cidon-Gils combination teaches the method of claim 1 and the apparatus of claim 13, respectively.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Trivedi teaches:
wherein determining the set of fraud patterns further comprises: comparing, by the correspondence analysis circuitry, the one or more correspondence content data features associated with the candidate correspondence to ground-truth data associated with an enterprise (Trivedi, discloses systems and methods for voice phishing monitoring, see [Abstract]. And [0043] In some embodiments, a portion of the training data may be withheld during training and/or used to validate the trained machine-learning model. The training of the machine learning system and/or model may be configured to cause the machine learning system and/or model to learn associations between training data and ground truth data (e.g., if supervised learning) and/or learn patterns from the training data (e.g., if unsupervised learning), such that the trained machine-learning model is configured to determine an output in response to the input data based on the learned associations); detecting, by the correspondence analysis circuitry and based on comparing the one or more correspondence content data features to the ground-truth data, one or more correspondence inconsistencies (Trivedi, [0049] At step 206, the process 200 may include determining first fraud indicator data based on a number associated with the incoming call. The first fraud indicator data may classify the number associated with the incoming call as a known fraudulent number, a verified number of the entity, or an unknown number (if not a known fraudulent number and not a verified number of the entity). 
For example, server device 104 may query one or more data sets stored internally or externally to server device 104 using the number to determine if there is any known affiliation of the number to fraudulent activity and/or to the alleged entity that may be indicative of a likelihood of the incoming call being fraudulent); comparing, by the correspondence analysis circuitry, the one or more correspondence inconsistencies to a set of known fraud patterns; and generating, by the correspondence analysis circuitry and based on the comparing of the one or more correspondence inconsistencies to the set of known fraud patterns, the set of fraud patterns associated with the candidate correspondence, wherein the set of fraud patterns is a subset of the set of known fraud patterns (Trivedi, [0053] At step 212, the process 200 may include determining a status for the incoming call based on one or more of the first, second, and/or third fraud indicator data, wherein the status is at least one of fraudulent or confirmed).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Trivedi in the fraud detection and prevention of Cidon-Gils by training a machine learning model with ground truth data. This would have been obvious because a person having ordinary skill in the art would have been motivated to monitor voice phishing to determine the fraudulent status of an incoming call (Trivedi, [Abstract]).
Regarding claim 12, the Cidon-Gils combination teaches the method of claim 1.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Trivedi teaches:
wherein the candidate correspondence is audio correspondence, and wherein the audio correspondence is received by a computing device associated with the user (Trivedi, discloses systems and methods for voice phishing monitoring, see [Abstract] For instance, a method includes receiving voice data (i.e., audio correspondence) of an incoming call to a communication device from an application associated with a user account and executing on the device, identifying an entity and interaction allegedly associated with the incoming call from the voice data, determining first fraud indicator data based on a number of the incoming call and second fraud indicator data based on a correspondence of user account interaction data to the entity and/or interaction, and providing the voice data to a trained machine learning system to receive third fraud indicator data based on content and/or a voice characteristic identified from the voice data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Trivedi in the fraud detection and prevention of Cidon-Gils by training a machine learning model with voice data. This would have been obvious because a person having ordinary skill in the art would have been motivated to monitor voice phishing to determine the fraudulent status of an incoming call (Trivedi, [Abstract]).
Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils-Trivedi as applied above to claims 4 and 16, respectively, further in view of Pachauri et al. (US20200252802A1, hereinafter "Pachauri").
Regarding claim 5, and similarly claim 17, the Cidon-Gils-Trivedi combination teaches the method of claim 4 and the apparatus of claim 16, respectively.
The combination of Cidon-Gils-Trivedi does not teach the following; however, in the same field of endeavor, Pachauri teaches:
wherein the ground-truth data comprises data related to one or more correspondence style rules, branding rules, product data, user data, user data obfuscation rules, correspondence delivery records, or knowledge domain data (Pachauri, discloses method for fraud detection from client request features using machine learning, [Abstract]. And [0117] The label or ground truth for the input request may be determined, labeled and/or set in a ground truth preparation. In an example, the label or ground truth for the input request may be set to true (e.g., the input request is fraudulent, etc.) when a complaint from a user or an operator (i.e., user data) is received, and may be set to false (e.g., the input request is non-fraudulent, etc.) when recurring charges have been made with no complaint from a user or an operator).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Pachauri in the fraud detection and prevention of Cidon-Gils-Trivedi by training a machine learning model with ground truths for the input requests. This would have been obvious because a person having ordinary skill in the art would have been motivated to apply machine learning based prediction models to compute a fraud score for an input request to determine whether the input request is to be accepted (Pachauri, [Abstract]).
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils as applied above to claims 1 and 13, respectively, further in view of Kramme et al. (US11170375B1, hereinafter "Kramme").
Regarding claim 6, and similarly claim 18, the Cidon-Gils combination teaches the method of claim 1 and the apparatus of claim 13, respectively.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Kramme teaches:
wherein determining the fraud classification further comprises: generating, by the correspondence analysis circuitry and based on the set of fraud patterns, a fraud classification probability for at least one fraud classification of a plurality of fraud classifications; determining, by the correspondence analysis circuitry, whether the fraud classification probability satisfies a fraud classification threshold; and in response to determining that the fraud classification probability satisfies the fraud classification threshold: classifying, by the correspondence analysis circuitry, the candidate correspondence based on the at least one fraud classification (Kramme, discloses method of automating a fraud classification process using machine learning, see [Abstract] The method also includes retrieving first financial transaction data associated with a first financial account, and selecting, by applying the fraud classification rules to the first financial transaction data, a first fraud classification. And [Col. 30 lines 36-44] For each classification/category, the rule set 240 may output the total score, a normalized total score, an indication of whether the total score exceeded a threshold, a probability calculated based upon the total score, and/or some other indicator or measure of the likelihood that fraud of that particular type/class occurred in connection with the transaction. In the example shown in FIG. 4C, it can be seen that larger scores generally correspond to a greater probability that the respective classification is accurate).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Kramme in the fraud detection and prevention of Cidon-Gils by selecting a fraud classification based on a probability calculated from a total score. This would have been obvious because a person having ordinary skill in the art would have been motivated to automate fraud classification using machine learning, based on a calculated probability of a fraud classification derived from financial transaction data (Kramme, [Abstract]).
Claims 7, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils as applied above to claims 1 and 13, respectively, further in view of Mossoba et al. (US20190272549A1, hereinafter "Mossoba").
Regarding claim 7, and similarly claim 19, the Cidon-Gils combination teaches the method of claim 1 and the apparatus of claim 13, respectively.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Mossoba teaches:
further comprising: determining, by the fraud deterrence circuitry, one or more user-initiated actions executed with respect to the candidate correspondence, wherein the one or more user-initiated actions are characterized by an engagement of the user with the candidate correspondence (Mossoba, discloses systems and methods of fraud protection based on extracted text from image of document, see [Abstract]. And [0045] In some embodiments, the fraud analysis module 440 may be configured to check the transaction history associated with the user to determine if the user is involved in the matter claimed in the mailing or the email); determining, by the fraud deterrence circuitry and based on the fraud classification, a risk level for a first user-initiated action of the one or more user-initiated actions; generating, by the fraud deterrence circuitry and based in part on the risk level of the first user-initiated action, a second set of fraud deterrence recommendations (Mossoba, [0047] In some embodiments, if it is determined that the user has initiated a high risk transaction (i.e., first user-initiated action) based on the information provided in the mail or the email in question, the action module 450 may be configured to perform a more proactive action, such as placing a flag on an account of the user, or even freezing an account of the user. For example, if the confidence level of fraud exceeds a threshold value, there may be a prompt where the user is asked if he or she has initiated or performed any transaction related to the mail or the email in question which may jeopardize his or her account. If yes, the action module 450 may be configured to place a flag on the account or even freeze the account);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Mossoba in the fraud detection and prevention of Cidon-Gils by checking for a user's high-risk transaction and performing a more proactive action on the user's account. This would have been obvious because a person having ordinary skill in the art would have been motivated to protect the user from a high-risk fraudulent transaction (Mossoba, [Abstract], [0002]).
The combination of Cidon and Mossoba further teaches: providing, by the communications hardware, the second set of fraud deterrence recommendations to a computing device associated with the user (Cidon, Fig. 2 at 212, [0026] where an intended recipient of the incoming messages and/or an administrator of the electronic messaging system are notified of the attempted impersonation attack. And Mossoba, [0037] In some embodiments, even if it is determined that the received mail or email is not fraudulent, the system may provide the user with recommendations related to the claimed matter via user device 100).
Regarding claim 11, the Cidon-Gils combination teaches the method of claim 1.
The combination of Cidon-Gils does not teach the following; however, in the same field of endeavor, Mossoba teaches:
wherein the candidate correspondence is a digital representation of printed correspondence (Mossoba, discloses systems and methods of fraud protection based on extracted text from image of document, see [Abstract] The server may receive an image of a document from a user device, wherein the document comprises at least one of a written communication or a printed communication. The server may extract text from the image of the document, compare the extracted text to the one or more stored keywords, and calculate a confidence level of fraud. And [0012] receive an image of a document from a user device, wherein the document may include at least one of a written communication, a printed communication, or an electronic message; extract at least one of text and graphics from the image of the document; compare the extracted text or graphics to one or more indicia in a database of indicia associated with a likelihood of fraud).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Mossoba in the fraud detection and prevention of Cidon-Gils by extracting text from a printed document. This would have been obvious because the person having ordinary skill in the art would have been motivated to compare the extracted text to indicia in a database associated with a likelihood of fraud for fraud protection (Mossoba, [Abstract]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils-Mossoba as applied above in claim 7, further in view of Smith et al. (US12323455B1, hereinafter "Smith").
Regarding claim 8, the Cidon-Gils-Mossoba combination teaches the method of claim 7.
The combination of Cidon-Gils-Mossoba does not teach the following; however, in the same field of endeavor, Smith teaches:
wherein determining the one or more user-initiated actions further comprises: generating, by the fraud deterrence circuitry and based on the fraud classification, a set of risk determination questions configured to determine if the user performed one or more actions in response to receiving the candidate correspondence; providing, by the communications hardware, of at least a first risk determination question of the set of risk determination questions to the computing device associated with the user; receiving, by the communications hardware, at least a first user response to the first risk determination question; and determining, by the fraud deterrence circuitry and based in part on the first user response to the first risk determination question, the one or more user-initiated actions executed by the user with respect to the candidate correspondence (Smith, discloses systems and methods for detecting fraud attempt in a communication session via set of data associated with a communication session between a representative of an organization and a user, see [Abstract]. And [Col. 23 lines 52-61] the computer system may determine whether the CSR (“customer service representative”) provided the client additional chances or opportunities to answer a security question or provide a password. In this way, the behavior of the CSR with regard to protecting the integrity of the client's account may be quantified or characterized with a score. CSRs having a score above some threshold may be assigned to communication sessions that have clients with a risk profile or fraud risk percentage greater than a second threshold).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Smith in the fraud detection and prevention of Cidon-Gils-Mossoba by providing a security question in a communication session via a set of data associated with the communication session between a representative of an organization and a user. This would have been obvious because the person having ordinary skill in the art would have been motivated to protect the integrity of the client's account (Smith, [Abstract]).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils-Mossoba as applied above in claim 7, further in view of Nunes et al. (US20220114594A1, hereinafter "Nunes").
Regarding claim 9, the Cidon-Gils-Mossoba combination teaches the method of claim 7.
The combination of Cidon-Gils-Mossoba does not teach the following; however, in the same field of endeavor, Nunes teaches:
further comprising: determining, by the fraud deterrence circuitry and based on the fraud classification of the candidate correspondence, a fraud severity level for the candidate correspondence (Nunes, discloses systems and methods for actionable insight into user interaction data, see [Abstract]. And [0030] the online service provider may receive user complaints in the form of calls, e-mails, and/or chat regarding invoices sent to them to pay for a web site domain renewal by a web hosting company… The analysis system may determine a risk level (i.e., fraud severity level) based on the derived patterns, and may alert a risk team to investigate the complaints in more detail when the risk exceeds a threshold. In some embodiments, the analysis system may also perform actions such as restricting access to the user accounts associated with the complaints when the risk level exceeds the threshold);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Nunes in the fraud detection and prevention of Cidon-Gils-Mossoba by determining a risk level. This would have been obvious because the person having ordinary skill in the art would have been motivated to use a machine learning model trained to analyze user interaction data for actionable insight into the user interaction data (Nunes, [Abstract], [0001-0003]).
Cidon further teaches: and automatically executing, by the fraud deterrence circuitry and based on the fraud severity level of the candidate correspondence, at least one action associated with at least one fraud deterrence recommendation of the first set of fraud deterrence recommendations or the second set of fraud deterrence recommendations (Cidon, [0030] if an accounting individual handling financial transactions in the entity 114 on a daily basis failed to recognize a simulated impersonation attack, the fraud detection component 108 may modify the individual's electronic message processing flow on the electronic messaging system 112 so that all future electronic messages to the individual that involves financial transactions are automatically intercepted and analyzed by the message collection and analysis component 106 for risk analysis before the individual is allowed to receive and/or take any action in response to such electronic messages).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Cidon-Gils-Mossoba-Nunes as applied above in claim 9, further in view of Smith (US12323455B1).
Regarding claim 10, the Cidon-Gils-Mossoba-Nunes combination teaches the method of claim 9.
The combination of Cidon-Gils-Mossoba-Nunes does not teach the following; however, in the same field of endeavor, Smith teaches:
further comprising: determining, by the fraud deterrence circuitry and based on the fraud severity level of the candidate correspondence, at least one enterprise representative associated with an enterprise with which the user is associated; and providing, by the communications hardware, a correspondence fraud alert associated with the candidate correspondence to an enterprise computing device associated with the at least one enterprise representative (Smith, discloses systems and methods for detecting fraud attempt in a communication session via set of data associated with a communication session between a representative of an organization and a user, see [Abstract]. And [Col. 12 lines 9-23] In some embodiments, the fraud detection system 12 may categorize certain contacts by individuals associated with accounts listed in the potential target account database 54 as potential suspicious activity. In the same manner, the fraud detection system 12 may classify the contact by the individual as suspicious if the individual requests information or asks questions that correspond to information or data stored in the historical fraudulent activity database 56, the external data sources 58, or the like. That is, the fraud detection system 12 may cross-reference questions or comments received from the individual contacting the customer service representative with data that is associated with known fraud methods, fraud trends, data exposed in the dark web, or the like to determine whether the individual contacting the customer service representative is suspicious).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Smith in the fraud detection and prevention of Cidon-Gils-Mossoba-Nunes by providing a security question. This would have been obvious because the person having ordinary skill in the art would have been motivated to protect the integrity of the client's account (Smith, [Abstract]).
Citation of References
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited but not relied upon in this Office action:
Zhang et al. (US20240364723A1) discloses a system and method for utilizing a machine learning model to identify a set of fraudulent users based on executing a set of detection models and performing pattern matching between a set of previously authenticated user activity logs and a set of newly generated user activity logs in the metadata.
Seyeditabari et al. (US20240356948A1) discloses a system and method for fraudulent electronic message detection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE whose telephone number is (571)272-1975. The examiner can normally be reached on M-F: 8:30AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay can be reached on (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL M LEE/Primary Examiner, Art Unit 2436