DETAILED ACTION
Status of Claims
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in reply to the application filed on 06/21/2024.
Claims 1-20 are currently pending and have been examined.
Information Disclosure Statement
The information disclosure statement(s) filed 06/21/2024 have been considered. Initialed copies of the Form 1449 are enclosed herewith.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 13, 14, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The system of claim 13 comprises a fraud prevention server and recites no other structural elements, and under the broadest reasonable interpretation the claimed system could be interpreted as software per se. Further, [0046] describes the server as implemented as a computer program, and the server therefore could be interpreted as purely software. Per MPEP 2106.03, products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se"), when claimed as a product without any structural recitations, are not directed to any of the statutory categories and are therefore directed towards non-statutory subject matter. Appropriate correction is required.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims fail Step 2 of the analysis because the focus of the claims is not on the devices themselves or on a practical application, but rather on an abstract idea. The analysis is provided below.
Step 1 (Statutory Categories) – As discussed above, claims 13, 14, and 17-20 fail Step 1 of the test. Claims 1-12 and 15-16, however, pass Step 1 of the subject matter eligibility test (see MPEP 2106(III)), as the claims are directed towards a method and a system.
Step 2A – Prong One (Do the claims recite an abstract idea?) - The abstract idea is recited in the claims, in part, by the following steps (a non-limiting illustrative sketch follows the recitation):
extracting content from an ongoing call established with a first device associated with a first user;
identifying based on the content indicating that a financial transaction is associated with the ongoing call an identifier of a second user associated with the ongoing call;
initiating a first communication associated with the identifier; and
instructing a payment application server to reject the financial transaction associated with the ongoing call based on one of (i) a first response to the first communication indicating denial of the ongoing call being set-up by the second user and (ii) an absence of the first response to the first communication.
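For purposes of illustrating the generality of the recited steps, the following sketch expresses them using only generic computer functions. This is a non-limiting paraphrase of the recited steps; all names, keywords, and logic are hypothetical and are not drawn from Applicant's specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Call:
    transcript: str        # stand-in for the extracted call content
    second_user_id: str    # stand-in for the identified second user
    transaction_id: str

def get_response(user_id: str, timeout_s: int) -> Optional[str]:
    # Hypothetical stand-in for "initiating a first communication" and
    # awaiting a reply: returns "confirm", "deny", or None (no response).
    return None

def handle_ongoing_call(call: Call, rejected: list, timeout_s: int = 60) -> None:
    # "extracting content from an ongoing call" -- generic data gathering.
    content = call.transcript.lower()
    # "identifying ... an identifier of a second user" when the content
    # indicates a financial transaction -- generic keyword matching.
    if "payment" in content or "transfer" in content:
        # "initiating a first communication associated with the identifier".
        response = get_response(call.second_user_id, timeout_s)
        # "instructing a payment application server to reject" the
        # transaction upon denial or absence of the first response.
        if response is None or response == "deny":
            rejected.append(call.transaction_id)

# No response arrives within the timeout, so the transaction is rejected.
rejected: list = []
handle_ongoing_call(Call("please approve the payment", "user-2", "txn-1"), rejected)
assert rejected == ["txn-1"]
```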
The steps recited above under Step 2A Prong One of the analysis, under the broadest reasonable interpretation, cover commercial or legal interactions (including marketing or sales activities or behaviors) but for the recitation of generic computer components for rejecting a transaction based on a response to a communication and content in a call. That is, other than reciting a fraud prevention server and a second device, nothing in the claim elements is directed towards anything other than commercial or legal interactions. If a claim limitation, under its broadest reasonable interpretation, covers commercial or legal interactions, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A – Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?) - This judicial exception is not integrated into a practical application. In particular, the claims only recite the additional elements of a fraud prevention server and a second device. The fraud prevention server and the second device are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components, and they limit the judicial exception to the particular environment of computers. Mere instructions to apply the judicial exception using generic computer components and limiting the judicial exception to a particular environment are not indicative of a practical application (see MPEP 2106.05(f) and MPEP 2106.05(h)). The specification does not provide any indication that the fraud prevention server and the second device are anything other than generic computer components, and they are generically described as such in [0046] and [0056]. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed towards an abstract idea.
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?) - The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, using the additional elements of a fraud prevention server and a second device to perform the steps recited in Step 2A Prong One amounts to no more than mere instructions to apply the exception using generic computer components and limits the judicial exception to a particular environment. Mere instructions to apply an exception using generic computer components and limiting the judicial exception to a particular environment do not provide an inventive concept. With respect to extracting content, the Examiner interprets this to be akin to a mental process, as it can be performed mentally, and it is therefore an abstract process. Initiating a communication with the second device is part of the technical environment and is well-understood, routine, and conventional (WURC), as MPEP 2106.05(d)(ii) provides that receiving and transmitting data over a network is WURC (see buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). With respect to instructing the payment application server, this is akin to Alice, where MPEP 2106.05(d)(ii) provides that electronic recordkeeping is WURC, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26, 110 USPQ2d 1984-1985 (2014) (see also creating and maintaining "shadow accounts" and "create electronic records, track multiple transactions, and issue simultaneous instructions" (573 U.S. at 224-26, 110 USPQ2d at 1984-85)). The additional elements have been considered separately, and as an ordered combination, and do not add significantly more (also known as an “inventive concept”) to the judicial exception. The claims are not patent eligible.
The dependent claims have been given the full analysis, including analyzing the additional limitations both individually and in combination as a whole. Claims 2 and 5-12 further describe commercial and legal interactions for sales activities to process the transaction based on various communications and time periods for responding, are limited to the computer environment, and are ineligible for the same reasons as discussed above. The dependent claims also recite the use of a set of deepfake detection models, and training the models based on the responses to the communications, in claims 3 and 4. The deepfake models are recited at a high level of generality, such that they are considered a generic computer component being used to perform commercial and legal interactions as claimed. As MPEP 2106.05(f) explains, use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone). Further, with respect to the training and use of the models, this is akin to Recentive Analytics, Inc. v. Fox Corp., Case No. 2023-2437 (Fed. Cir. Apr. 18, 2025), where the court found that the only thing the claims disclosed about the use of machine learning was that machine learning is used in a new environment, and that the requirements that the machine learning model be “iteratively trained” or dynamically adjusted did not represent a technological improvement, as iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. The dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 for the same reasoning as above, and the additionally recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 5, 10-14, and 17 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Shahriar, et al. (US Patent Application Publication 20230319058), “Shahriar”.
As per claims 1 and 13, Shahriar discloses:
A method for facilitating prevention of fraudulent transactions, the method comprising: [0002], [0026], [0032]
extracting, by a fraud prevention server, content from an ongoing call established with a first device associated with a first user; [0020], [0036] Further disclosed herein are methods and systems for authenticating a caller based on data captured from a gateway device (e.g., configured as a trusted device) located at a premises of the party. For example, if a security escalation is triggered during a communication session (e.g., a phone call or a video chat), a service provider may send a request to the gateway device to authenticate the caller (e.g., the source of the communication)… The security manager 102 may use voice-to-text conversion to generate the call data 116. The security manager 102 may use keyword searching, or machine learning to analyze the call data 116 and determine a call (e.g., from the first user device 110) includes a request for sensitive information.
identifying based on the content indicating that a financial transaction is associated with the ongoing call, by the fraud prevention server, an identifier of a second user associated with the ongoing call; [0032], [0036], [0040] The security manager 102 may use voice-to-text conversion to generate the call data 116. The security manager 102 may use keyword searching, or machine learning to analyze the call data 116 and determine a call (e.g., from the first user device 110) includes a request for sensitive information… Based on a request from the second user device 114, the user of the first user device 110 may verbally disclose sensitive information (e.g., a social security number, bank account information, credit card information, user accounts and/or passwords, etc.) or may complete a financial transaction… An identity of one or more parties to a communication (e.g., a source of the communication or a party requesting sensitive information) may be verified based on one or more forms of authentication. For example, a party requesting sensitive information may be authenticated via biometric authentication or audio/video confirmation from a trusted device. A gateway device (e.g., configured as a trusted device) located at a premises of the party to authenticate may receive a request to authenticate the party requesting the sensitive information. The gateway device may perform the authentication based on one or more monitored parameters (e.g., audio or video confirmation, pings from assigned devices on the gateway, television viewing habits for the caller, unlocked home security with assigned code, sounds produced in the caller's home environment, etc.) or data received from one or more trusted devices connected to the gateway device… The verification service 104 may be configured to verify an identity of one or more parties to the communication session.
initiating, by the fraud prevention server, a first communication with a second device associated with the identifier; and [0049], [0068] At step 212 of process 200, the second user device 114 may select a verifier from a list of available verifiers. A verifier may be a trusted device such as another device (e.g., network device 112) connected to the same network (e.g., network 108) and/or another device in the vicinity of the first user device 110. The verifier may be configured to record audio, video, or may comprise another device configured to receive a user input… At step 406, an identity of the first party to the communication session may be verified. The identity may be verified based on the interruption of the communication session. The verification may be performed based on biometric authorization or audio/video confirmation from a trusted device
instructing a payment application server, by the fraud prevention server, to reject the financial transaction associated with the ongoing call based on one of (i) a first response to the first communication indicating denial of the ongoing call being set-up by the second user and (ii) an absence of the first response to the first communication. [0042], [0052-0053], [0077] In an example use case, Alice, a friend of Bob, may start a video call with Bob from a number Bob does not recognize (e.g., step 202). Bob's phone may overlay a graphic or text on the video to notify Bob that the video call is an untrusted call (e.g., step 204). The call's capabilities may be restricted based on the untrusted nature of the call. Bob's phone may block certain responses for sensitive information (e.g., step 206) and/or may pause the call based on a request for sensitive information (e.g., step 208)… To elevate her call to verified status, Alice may request another camera in the vicinity to verify her. The video call software may present her with a list of validators near her who may verify her video… If Alice does not attempt to elevate her call to verified status or a validator cannot verify Alice as the caller, the call may be terminated or Bob may be presented with a warning that the caller cannot be verified, e.g., based on Bob's phone being unable to validate a certificate (e.g., step 220)… In response to the request to authenticate the party, at step 608, the verification information (e.g., a signed token) may be sent by the gateway device to the server device (e.g., a service provider). A signed token may be sent as metadata on an established voice call. For example, the gateway device may send an encrypted token payload (e.g., a yes/no decision) electronically either in the voice call itself or by other electronic means… Based on the verification status or receipt of the signed token, the security manager 102 may unpause, allow the communication to resume, and/or reinstate the communication. In an example, if the party requested transfer of payment using blockchain, then the signed token received from the verification service 104 may be used to submit a blockchain payment.
As per claims 2 and 14, Shahriar discloses:
parsing, by the fraud prevention server, the content to determine whether the content indicates that the financial transaction is associated with the ongoing call. [0032], [0036] The security manager 102 may use voice-to-text conversion to generate the call data 116. The security manager 102 may use keyword searching, or machine learning to analyze the call data 116 and determine a call (e.g., from the first user device 110) includes a request for sensitive information. The security manager 102 may determine, based on the call data 116, that a communication is insecure, has an unknown source, includes suspicious activity, or is potentially fraudulent. The security manager 102 may utilize one or more trust rules to associate a status of one or more parties to the communication with corresponding actions. For example, the one or more trust rules may comprise requiring a signed token prior to allowing a financial transaction between the parties to the communication… Based on a request from the second user device 114, the user of the first user device 110 may verbally disclose sensitive information (e.g., a social security number, bank account information, credit card information, user accounts and/or passwords, etc.) or may complete a financial transaction.
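As a non-limiting illustration of the breadth of the parsing step as mapped above, keyword searching over a voice-to-text transcript of the kind Shahriar describes at [0032] and [0036] can be expressed in a few lines. The keyword list and function name below are hypothetical; this is a sketch, not Shahriar's implementation.

```python
# Illustrative keyword-based parsing of call content; the keyword set and
# function name are hypothetical stand-ins.
TRANSACTION_KEYWORDS = {"payment", "transfer", "wire", "credit card",
                        "bank account", "social security"}

def indicates_financial_transaction(transcript: str) -> bool:
    # Determine whether the extracted content indicates that a financial
    # transaction is associated with the call.
    text = transcript.lower()
    return any(keyword in text for keyword in TRANSACTION_KEYWORDS)

assert indicates_financial_transaction("Please wire the funds today.")
assert not indicates_financial_transaction("See you at lunch.")
```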
As per claims 5 and 17, Shahriar discloses:
determining, by the fraud prevention server, based on reception of the ongoing call on the first device, whether contact information of a caller of the ongoing call is absent in a contact list associated with the first user, wherein the content of the ongoing call is extracted upon the determination that the contact information of the caller is absent in the contact list. [0034], [0040] The security manager 102 may be configured to analyze the call data 116. For example, the security manager 102 may identify a call as having an unknown source, suspicious, or potentially fraudulent (e.g., by comparing call data 116 to screening data 118)… The screening data 118 may comprise a list and/or a database of data used for determining whether a call should be further processed using by the security manager 102… The security manager 102 may determine, based on the call data 116, that a communication is insecure, has an unknown source, includes suspicious activity, or is potentially fraudulent… The verification service 104 may be configured to verify an identity of one or more parties to the communication session. For example, the security manager 102 may send a request to the verification service 104 to verify the identity of an unknown source of a communication session (e.g., from a second user device 112). The verification service 104 may verify the identity of one or more parties to the communication session by using biometric authentication (e.g., fingerprint reading, facial recognition, eye scanning, etc.).
As per claim 10, Shahriar discloses:
communicating, by the fraud prevention server, a first notification to the first device based on the first response indicating the denial of the ongoing call being set-up by the second user, wherein the first notification indicates to the first user that the ongoing call is a fraudulent call. [0002], [0023], [0034] If a request for sensitive information is detected or a communication is identified as having an unknown source, a security escalation process may begin. A status of the communication or a source of the communication may be output (e.g., via a display) to the recipient of the communication. The status may indicate that the communication is unsecure or the source of the communication is unverified. Moreover, the communication may be paused and/or outgoing communication (e.g., audio or video) may be blocked so a user does not accidentally share sensitive information on an unsecured communication… If the gateway cannot verify that Joe is the caller, the call may be terminated or Susan may be presented with a warning that the caller cannot be verified… The security manager 102 may be configured to analyze the call data 116. For example, the security manager 102 may identify a call as having an unknown source, suspicious, or potentially fraudulent (e.g., by comparing call data 116 to screening data 118).
As per claim 11, Shahriar discloses:
wherein the content of the ongoing call corresponds to at least one of audio content and video content. [0025] Some disclosed methods and systems may prevent deep-fake-based scams by adding a “verified by” capability to the communication session (e.g., a voice call or video chat).
As per claim 12, Shahriar discloses:
wherein the first communication corresponds to one of a call, an email, an instant message, a text message, a short message service (SMS), a flash message, and a pop-up notification. [0027] To elevate her call to verified status, Alice may request another camera in the vicinity to verify her. The video call software may present her with a list of validators near her who may verify her video. These validators may be video cameras that are managed by services (e.g., service providers or equipment manufacturers) who are unlikely to be hacked or otherwise compromised. Alice may select a local camera as the validator and she may then be required to continue the call while standing in front of the validator. Alice may add that camera to her call as the validator and the validator may then compare the video being shared on the call from Alice's phone to what the validator sees. If the videos match, the validator may send an encrypted/signed certificate with the call verifying that the video Alice is sharing is the video the validator is seeing as well. Bob's phone may validate this certificate and mark the call as verified.
As per claims 13, 14, and 17, claims 13, 14, and 17 recite substantially similar limitations to those found in claims 1, 2, and 5, respectively. Therefore, claims 13, 14, and 17 are rejected under the same art and rationale as claims 1, 2, and 5. Furthermore, Shahriar discloses a system and server [0078].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3, 4, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Shahriar, et al. (US Patent Application Publication 20230319058), “Shahriar” in view of Liberman, et al. (US Patent Application Publication 20210406568), “Liberman”.
As per claim 3, Shahriar discloses the content as being an ongoing call ([0020], [0036]) but does not expressly disclose utilizing a set of models. Liberman, however, discloses the following:
executing upon extracting the content, by the fraud prevention server, a set of deepfake detection models associated with the fraud prevention server to analyze the content; and see fig. 1D and 1E, [0026-0031] For example, the detection system may receive the particular input content from the user device. The particular input content may include a video, one or more images, audio data, and/or the like. In some implementations, the detection system receives the particular input content as part of a request for a determination of whether the particular input content is a deepfake or is real… The first model result may include an indication of whether the particular input content is a deepfake or is real (e.g., not a deepfake)... The second model result may include an indication of whether the particular input content is a deepfake or is real (e.g., not a deepfake)… As shown in FIG. 1E, and by reference number 155, the detection system may process the first model result, the second model result, and the third model result, with an aggregation model, to generate a detection result indicating whether the particular input content is a deepfake or is real (e.g., not a deepfake). The aggregation model may include a logistic regression machine learning model. In some implementations, the detection result is a final probability that the particular input content is a deepfake.
determining, by the fraud prevention server, based on execution of the set of deepfake detection models, whether the ongoing call is a deepfake call to identify the identifier of the second user. [0031] As shown in FIG. 1E, and by reference number 155, the detection system may process the first model result, the second model result, and the third model result, with an aggregation model, to generate a detection result indicating whether the particular input content is a deepfake or is real (e.g., not a deepfake)… In some implementations, the detection result is a binary result with a first value indicating that the particular input content is a deepfake and a second value indicating that the particular input content is real… In some implementations, the one or more actions include the detection system preventing a financial transaction when the detection result indicates that the particular input content is a deepfake of a credential required for performing the financial transaction. For example, the particular input content may include a face image that is to be provided to verify a user for performance of the financial transaction.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to execute a set of deepfake detection models to detect whether content is a deepfake, as taught by Liberman; doing so allows different models with different detection methods to be utilized and aggregated, leading to improved detection of deepfake content [0015].
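For context only, the aggregation Liberman describes at [0026]-[0031] (several detection models whose results are combined by an aggregation model such as logistic regression) may be sketched as follows. The weights and bias are hypothetical placeholders for learned parameters; this is not Liberman's implementation.

```python
import math

# Combine per-model deepfake scores with a logistic (sigmoid) aggregation,
# in the manner of a logistic regression aggregation model.
WEIGHTS = [1.2, 0.8, 1.5]   # one weight per detection model (hypothetical)
BIAS = -1.0                 # hypothetical learned bias

def aggregate(model_scores: list) -> float:
    # Weighted sum of the individual model results, squashed to a final
    # probability that the input content is a deepfake.
    z = BIAS + sum(w * s for w, s in zip(WEIGHTS, model_scores))
    return 1.0 / (1.0 + math.exp(-z))

# Three models each emit a deepfake probability; the aggregate exceeds 0.5,
# so the content would be flagged as a deepfake.
assert aggregate([0.9, 0.7, 0.8]) > 0.5
```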
As per claims 4 and 16, Shahriar does not expressly disclose the following; Liberman, however, discloses:
training, by the fraud prevention server, the set of deepfake detection models when a second response to the first communication indicates confirmation of the ongoing call being initiated by the second user. [0036]-[0037], [0091] For example, the particular input content may include a face image that is to be provided to verify a user for performance of the financial transaction. When the detection result indicates that the face image is a deepfake, the detection system may prevent the financial transaction from occurring and may notify a law enforcement agency about the attempted financial fraud… In some implementations, the one or more actions include the detection system retraining one or more of the machine learning models based on the detection result. The detection system may utilize the detection result as additional training data for retraining the one or more of the machine learning models, thereby increasing the quantity of training data available for training the one or more of the machine learning models. Accordingly, the detection system may conserve computing resources associated with identifying, obtaining, and/or generating historical data for training the one or more of the machine learning models relative to other systems for identifying, obtaining, and/or generating historical data for training machine learning models… In a twelfth implementation, alone or in combination with one or more of the first through eleventh implementations, the detection result is a binary result with a first value indicating that the input content is a deepfake and a second value indicating that the input content is real.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to retrain a set of deepfake detection models based on the detection results, as taught by Liberman; doing so allows different models with different detection methods to be utilized and aggregated, leading to improved detection of deepfake content, and increases the amount of available training data [0015], [0037].
As per claim 15, Shahriar discloses:
wherein the fraud prevention server further comprises: [0078]
a processor configured to: [0080]
Shahriar discloses the content as being an ongoing call ([0020], [0036]) but does not expressly disclose utilizing a set of models. Liberman, however, discloses the following:
a memory configured to store a set of deepfake detection models; and [0067]
execute, the set of deepfake detection models to analyze the content upon extracting the content; and see fig. 1D and 1E, [0026-0031] For example, the detection system may receive the particular input content from the user device. The particular input content may include a video, one or more images, audio data, and/or the like. In some implementations, the detection system receives the particular input content as part of a request for a determination of whether the particular input content is a deepfake or is real… The first model result may include an indication of whether the particular input content is a deepfake or is real (e.g., not a deepfake)... The second model result may include an indication of whether the particular input content is a deepfake or is real (e.g., not a deepfake)… As shown in FIG. 1E, and by reference number 155, the detection system may process the first model result, the second model result, and the third model result, with an aggregation model, to generate a detection result indicating whether the particular input content is a deepfake or is real (e.g., not a deepfake). The aggregation model may include a logistic regression machine learning model. In some implementations, the detection result is a final probability that the particular input content is a deepfake.
determine based on execution of the set of deepfake detection models, whether the ongoing call is a deepfake call to identify the identifier of the second user. [0031] As shown in FIG. 1E, and by reference number 155, the detection system may process the first model result, the second model result, and the third model result, with an aggregation model, to generate a detection result indicating whether the particular input content is a deepfake or is real (e.g., not a deepfake)… In some implementations, the detection result is a binary result with a first value indicating that the particular input content is a deepfake and a second value indicating that the particular input content is real… In some implementations, the one or more actions include the detection system preventing a financial transaction when the detection result indicates that the particular input content is a deepfake of a credential required for performing the financial transaction. For example, the particular input content may include a face image that is to be provided to verify a user for performance of the financial transaction.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to execute a set of deepfake detection models to detect whether content is a deepfake, as taught by Liberman; doing so allows different models with different detection methods to be utilized and aggregated, leading to improved detection of deepfake content [0015].
As per claim 16, claim 16 recites substantially similar limitations to those found in claim 4. Therefore, claim 16 is rejected under the same art and rationale as claim 4. Furthermore, Shahriar discloses a system and server [0078].
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shahriar, et al. (US Patent Application Publication 20230319058), “Shahriar” in view of Maizels, et al. (International Publication Number WO 2024018400), “Maizels”.
As per claims 6 and 18, Shahriar discloses:
comprising retrieving, by the fraud prevention server, contact information of the second user based on the identifier of the second user from a contact list associated with the first user, wherein the first communication is initiated with the second device based on the contact information, and the contact information of the second user corresponds to at least one of a contact number, a social media username, and an email identifier of the second user. [0030], [0032], [0036], [0040] An identity of one or more parties to a communication (e.g., a source of the communication or a party requesting sensitive information) may be verified based on one or more forms of authentication. For example, a party requesting sensitive information may be authenticated via biometric authentication or audio/video confirmation from a trusted device. A gateway device (e.g., configured as a trusted device) located at a premises of the party to authenticate may receive a request to authenticate the party requesting the sensitive information. The gateway device may perform the authentication based on one or more monitored parameters (e.g., audio or video confirmation, pings from assigned devices on the gateway, television viewing habits for the caller, unlocked home security with assigned code, sounds produced in the caller's home environment, etc.) or data received from one or more trusted devices connected to the gateway device… The verification service 104 may be configured to verify an identity of one or more parties to the communication session…The first user device 110 or second user device 114 may each comprise and/or be associated with a user identifier. The user identifier may comprise a number, such as a phone number.
Shahriar does not expressly disclose the following; Maizels, however, discloses:
wherein the identifier of the second user is a name of the second user [0927], [0993] Consistent with some disclosed embodiments, verifying the identity includes verification of a name of the subject. Verification of the name of the subject may include correlating the identity of the subject of the communication with the name of the subject. For example, facial micromovements may be used to determine the identity of the subject. A data structure may be created based on historical data that correlates the identity of the subject using facial micromovements and the name of the subject. During the real-time transaction, a lookup in the data structure may retrieve the name of the subject and the second data stream may be generated including the name of the subject. The second data stream may be transmitted to the destination (i.e., entity) where the name may be used to verify the identity of the subject.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to identify a user based on their name, as taught by Maizels; doing so allows the user to be identified based on providing their name [0927], [0993].
As per claim 18, claim 18 recites substantially similar limitations to those found in claim 6. Therefore, claim 18 is rejected under the same art and rationale as claim 6. Furthermore, Shahriar discloses a system and server [0078].
Claims 7-9 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shahriar, et al. (US Patent Application Publication 20230319058), “Shahriar” in view of Singhal, et al. (US Patent Application Publication 20100229245), “Singhal”.
As per claims 7 and 19, Shahriar does not expressly disclose the following; Singhal, however, discloses:
setting by the fraud prevention server, a value of a first time period; and [0033] The contact by the transaction processing entity or the mobile authorization service provider via the owner's wireless mobile communication device may include a SMS text message that embeds a pre-placed security code and may include sending to the identity data owner, (i) name of the transaction initiating entity, date and time, and optionally an amount for a payment transaction. The authorization may include accept, decline or time out due to lack of response, where the time out is set based on the type of the transaction.
determining, by the fraud prevention server, whether the first response is received based on the initiation of the first communication with the second device in the first time period, wherein the payment application server is instructed to reject the financial transaction based on the absence of the first response to the first communication at an end of the first time period. [0033], [0087] The contact by the transaction processing entity or the mobile authorization service provider via the owner's wireless mobile communication device may include a SMS text message that embeds a pre-placed security code and may include sending to the identity data owner, (i) name of the transaction initiating entity, date and time, and optionally an amount for a payment transaction. The authorization may include accept, decline or time out due to lack of response, where the time out is set based on the type of the transaction… Further, the protocol in Internet type computer networks are based on state based transactions and can keep a transaction pending until authorization is obtained or not obtained and then issue an acceptance or rejection as appropriate. For that, a time out limit may be implemented by the ODFI and may be appropriately set.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to decline a transaction if no response is received before a time out, as taught by Singhal; doing so further prevents fraud by someone impersonating the user through remote authorizations [0052]-[0053].
As per claims 8 and 20, Shahriar does not expressly disclose the following; Singhal, however, discloses:
setting, by the fraud prevention server, a value of a second time period upon setting the value of the first time period, wherein the second time period is shorter than the first time period; and [0081] The RDFI 212 may, however, reject the ACH transaction and return it to the ODFI 208 if, for example, the account had insufficient funds or the account holder indicated that the transaction was unauthorized. An RDFI 212 has a prescribed amount of time in which to perform returns, ranging from 2 to 60 days from the receipt of the ACH transaction… [0084] Such a protocol as ACH 210 may optionally be enhanced to communicate a predefined time delay in acceptance or delayed acceptance, in addition to acceptance and rejection of the transaction immediately by the receiving bank, allowing the receiving bank to seek an authorization by the true identity data owner, the bank account owner. The protocol may indicate that the approval is delayed depending upon the type of the transaction for an authorization beyond checking sufficiency of funds or other issues such as stop payment. The protocol may be based on using the current rejection protocol by adding a time delay to resubmit the transaction. Similar protocols exist in ACH such as one that communicates a stop payment order or insufficient funds as part of the rejection… [0093] In the reactive mode, the enable/disable flag 79 would be left in the disable mode at all times. When a transaction is conducted, the identity data owner would get a real time transaction advisory message. The id data owner can review these transactions and could reject a transaction from final completion, if he/she sends a reject message before expiration of a certain time limit from the time of the transaction origination. The time limit could be in hours and could be up to 18 hours, as the ACH payment systems provide for an actual fund transfer in 24 hours after the payment authorization… [0095] After the transaction is completed, then, the id data owner could press another function key to enable the enable/disable flag 79. Alternatively, the enable/disable flag 79 could be automatically enabled after a time out of, let us say five minutes, without the id data owner have to press the second function key… The mobile authorization may be implemented as defined as three operational modes of a proactive mode, a reactive mode and a combined mode.
generating by the fraud prevention server, a hold request indicating the payment application server to place the financial transaction on hold, wherein the financial transaction is placed on hold by the payment application server based on the hold request, and wherein the hold request is generated at an end of the second time period and upon the absence of the first response within the second time period. [0070], [0073], [0150-0151] The receiving bank then either accepts or rejects the transaction by using the communication protocol. The protocol enables the rejected transaction to be resubmitted again two times… The payer's bank 18, the transaction processing entity, while processing this request for payment or payment authorization puts the request on hold for a brief period of time, and via a mobile authorization system 30, that has a mobile contact database 32 and IVR/SMS subsystem 34, sends a request for authorization of the transaction to the mobile device 36 of the identity data owner, or payer entity 12…At step 108, awaiting the response by the entity from the customer for a period of time, and processing the response, where on receiving (i) a yes response approving the request, (ii) on receiving a No response declining the request and (iii) for lack of response, advising the requesting entity to present the request at a later time… At step 110, selecting and setting the period of time of response threshold based on the type of the payment request, the identification of the requesting entity, and originating location of the request, to be between 30 seconds and 18 hours.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to decline a transaction if no response is received before a time out, and the ability to resubmit the transaction within a second predetermined amount of time, as taught by Singhal; doing so further prevents fraud by someone impersonating the user through remote authorizations [0052]-[0053].
As per claim 9, Shahriar discloses:
receiving, by the fraud prevention server, a second response to the first communication indicating confirmation of the ongoing call being set-up by the second user; and [0020], [0025] Some disclosed methods and systems may prevent deep-fake-based scams by adding a “verified by” capability to the communication session (e.g., a voice call or video chat). The verification may be based on video (e.g., or audio) recorded by a trusted video camera. An object recognition process (e.g., using a machine learning model trained to recognize specific users) may analyze the video (e.g., or audio) to determine that a person is detected and/or the person matches a specific person, such as the person for which verification is attempted… For example, if a security escalation is triggered during a communication session (e.g., a phone call or a video chat), a service provider may send a request to the gateway device to authenticate the caller (e.g., the source of the communication). The gateway may survey the caller's activities and send a signed token to the service provider.
Shahriar does not expressly disclose the following; Singhal, however, discloses:
transmitting based on the reception of the second response, by the fraud prevention server, a release notification to the payment application server to release the hold on the financial transaction, wherein when the second response is received after the end of the second time period and before the end of the first time period, the release notification is transmitted to the payment application server. [0029], [0073], [0150] In the system of the preferred embodiment, a transaction processing entity, after it receives an identity data driven transaction from a transaction initiating entity, puts on hold the processing of the transaction for a period of time and via the identity data owner's wireless mobile communication device, contacts the identity data owner for authorization of the transaction before the transaction processing is allowed to complete… The receiving bank, upon receiving a payment transaction authorization request record, first checks to see if it can approve the transaction. For example, the receiving bank can reject a transaction if there are insufficient funds to cover the request and also if there is a stop order that has been placed against a particular check. The receiving bank then either accepts or rejects the transaction by using the communication protocol. The protocol enables the rejected transaction to be resubmitted again two times… At step 108, awaiting the response by the entity from the customer for a period of time, and processing the response, where on receiving (i) a yes response approving the request, (ii) on receiving a No response declining the request and (iii) for lack of response, advising the requesting entity to present the request at a later time.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Shahriar with the ability to decline a transaction if no response is received before a time out, and the ability to resubmit the transaction within a second predetermined amount of time, as taught by Singhal; doing so further prevents fraud by someone impersonating the user through remote authorizations [0052]-[0053].
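For context only, the two-time-period logic mapped above for claims 7-9 (reject at the end of the first period absent a response, place a hold at the end of the shorter second period, and release the hold upon a confirmation received between the two period ends) may be sketched as follows. All names and period values are hypothetical; this is not Singhal's implementation.

```python
from typing import Optional

FIRST_PERIOD_S = 3600   # hypothetical first time period (reject at its end)
SECOND_PERIOD_S = 300   # hypothetical shorter second period (hold at its end)

def resolve(response: Optional[str], response_time_s: Optional[float]) -> str:
    # Denial of the call set-up -> instruct rejection of the transaction.
    if response == "deny":
        return "reject"
    if response == "confirm" and response_time_s is not None:
        # Confirmation received after the hold was placed but before the
        # end of the first period -> transmit a release notification.
        if SECOND_PERIOD_S < response_time_s <= FIRST_PERIOD_S:
            return "release_hold"
        # Confirmation received before any hold was placed -> proceed.
        if response_time_s <= SECOND_PERIOD_S:
            return "proceed"
    # Absence of the first response at the end of the first period -> reject.
    return "reject"

assert resolve(None, None) == "reject"
assert resolve("confirm", 1200.0) == "release_hold"
assert resolve("deny", 100.0) == "reject"
```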
As per claims 19 and 20, claims 19 and 20 recite substantially similar limitations to those found in claims 7 and 8, respectively. Therefore, claims 19 and 20 are rejected under the same art and rationale as claims 7 and 8. Furthermore, Shahriar discloses a system and server [0078].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Application Publication 20230230085 to Turgeman, et al. discloses “In another example, some embodiments may perform post-processing or real-time processing for deep-fake detection, to ensure that a malicious actor or attacker did not try to spoof the user's identify by generating a deep fake video image of the user using generative machine learning technology. For example, a deep-fake detection unit may search for, and may detect, imperfect transitions between: (i) frame-portions that are attributed to a first source (e.g., a photo or a video of the genuine user), and (ii) frame-portions that were added or modified by an attacker who created a deep-fake image or video; based on imperfect or abrupt “stitch lines” between image portions, or non-smooth or non-gradual transitions between two neighboring image-portions or frame-regions; or other techniques for detecting a deep fake image or video, which may then trigger a determination to block or reject a submitted transaction.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREGORY S CUNNINGHAM II whose telephone number is (313)446-6564. The examiner can normally be reached Mon-Fri 8:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond can be reached at 303-297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
GREGORY S. CUNNINGHAM II
Primary Examiner
Art Unit 3694
/GREGORY S CUNNINGHAM II/Primary Examiner, Art Unit 3694