Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The following Non-Final Office action is in response to the application filed on 12/20/2024.
Priority: The present application is a continuation of Application No. 18/609,691, filed 01/19/2024, which claims the benefit of a provisional application filed 05/02/2023.
Claim Status:
Canceled claims: 1-31
Pending claims: 32-55
Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 32-55 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
In particular, the claims are directed to a judicial exception (an abstract idea) without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, (Step-1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. (Step-2A) If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea), and if so, (Step-2B) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself.
Examples of the abstract idea groupings include: (a) mental processes; (b) certain methods of organizing human activity [i. fundamental economic practices; ii. commercial or legal interactions; iii. managing personal behavior or relationships between people]; and (c) mathematical relationships/formulas. Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014).
The analysis is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG); see MPEP § 2106.04(II), § 2106.04(d), and § 2106.05(a), (b), (c), and (e).
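For clarity, the three-part framework described above can be modeled as the following illustrative sketch. All identifiers are hypothetical labels chosen for readability only; they are not claim language, record evidence, or an official restatement of the guidance.

```python
# Illustrative sketch only: a hypothetical model of the Step-1 / Step-2A /
# Step-2B eligibility flowchart described in the 2019 PEG discussion above.
from dataclasses import dataclass

@dataclass
class ClaimAnalysis:
    statutory_category: bool      # Step 1: process, machine, manufacture, or composition
    recites_exception: bool       # Step 2A, Prong 1: abstract idea, law of nature, phenomenon
    practical_application: bool   # Step 2A, Prong 2: exception integrated into application
    inventive_concept: bool       # Step 2B: additional elements amount to "significantly more"

def eligible(c: ClaimAnalysis) -> bool:
    if not c.statutory_category:        # Step 1 fails: not eligible
        return False
    if not c.recites_exception:         # No judicial exception recited: eligible
        return True
    if c.practical_application:         # Exception integrated into a practical application: eligible
        return True
    return c.inventive_concept          # Otherwise Step 2B controls
```

On the analysis set forth in this action, the pending claims correspond to `eligible(ClaimAnalysis(True, True, False, False))`, which evaluates to `False`.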
[Step-1] The claims are directed to a method/system/machine, each of which falls within a statutory category of invention.
Claim 32 (exemplary) recites a series of steps for Monitoring E-Transaction Fraud.
[Step-2A]-Prong 1: Claim 32 is then analyzed to determine whether it is directed to a judicial exception:
The claim recites the limitations of:
obtaining transaction data from a transaction onramp, wherein the transaction data was published to the transaction onramp by a transaction source;
generating a combined standardized transaction data structure based on analysis of the transaction data by:
parsing the transaction data to extract details of the transaction data;
adding to the extracted details of the transaction data one or more supplemental data elements obtained from the transaction source to form a supplemental transaction data element; and
transforming the supplemental transaction data element into the combined standardized transaction data structure to comply with a standard-based data structure;
obtaining event data from an event hub, the event data including:
the transaction data; one or more business events, which had been published to the event hub from an event source by an event emitter; one or more previously generated composite risk signals; and one or more previously generated detection events generated by a decision engine;
identifying one or more risk signals based on the event data; creating or updating one or more composite risk signals based on the event data and the one or more risk signals by:
analyzing the event data using one or more internal logic rules to determine if the one or more composite risk signals need to be triggered or updated;
storing the event data for enrichment of at least one of a party, an account, a device, or an entity;
forming a plurality of discrete source signals based on the event data; and aggregating the plurality of discrete source signals to create or update the one or more composite risk signals;
generating one or more additional risk signals by: providing the one or more composite risk signals and the combined standardized transaction data to a machine learning model as input data; and
using the machine learning model to compare the one or more composite risk signals and the combined standardized transaction data to produce one or more additional risk signals;
transmitting the event data, the one or more risk signals, the one or more composite risk signals, and the one or more additional risk signals to a fraud application, the fraud application configured to make a judgement of the event data using one or more internalized logic rules by:
assigning a fraudulent transaction probability score, using a decision engine, wherein the decision engine assigns the fraudulent transaction probability score based on inputs including:
the combined standardized transaction data; the event data; the one or more risk signals; the one or more composite risk signals; and the one or more additional risk signals;
generating at least one of a detection event, a transaction alert, a case management message, or a regulatory filing message based on the fraudulent transaction probability score; and
providing at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message to the event hub as additional event data.
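The data flow recited in the steps above may be sketched, purely for illustration, as follows. Every identifier is a hypothetical label chosen by the examiner for readability; none is a limitation, a disclosure of record, or applicant's implementation.

```python
# Hypothetical sketch of the recited flow: standardize a transaction, aggregate
# discrete source signals into composite risk signals, and score the result.
from dataclasses import dataclass

@dataclass
class StandardizedTransaction:
    details: dict        # parsed transaction details
    supplemental: dict   # supplemental data elements from the transaction source

def standardize(raw: dict, supplement: dict) -> StandardizedTransaction:
    # parse -> supplement -> transform into a standard-based data structure
    return StandardizedTransaction(details=dict(raw), supplemental=dict(supplement))

def aggregate_composites(event_data: list, risk_signals: list) -> list:
    # form discrete source signals from event data, then aggregate them with
    # the identified risk signals into composite risk signals
    discrete = [e["signal"] for e in event_data if "signal" in e]
    return sorted(set(discrete) | set(risk_signals))

def probability_score(txn: StandardizedTransaction, composites: list) -> float:
    # decision engine stand-in: assign a fraudulent-transaction probability
    # score from its inputs (the weights here are arbitrary illustrations)
    return min(0.1 * len(composites) + 0.05 * len(txn.supplemental), 1.0)
```

The sketch is offered only to make the recited sequence of generic data-processing steps easier to follow; it does not supply or imply any particular technical implementation.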
The claimed method/system/machine simply describes a series of steps for Monitoring E-Transaction Fraud.
These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations through human commercial, business, or transactional activities/interactions, but for the recitation of generic computer components. That is, other than reciting one or more servers/processors, devices, and a computer network, nothing in the claim precludes the limitations from practically being performed by organizing human business activity. For example, absent the structural elements, the claim encompasses activities that can be performed manually between the users and a third party. These limitations are directed to an abstract idea because they are commercial or business interactions/sales activities that fall within the enumerated grouping of "certain methods of organizing human activity" in the 2019 PEG.
[Step-2A]-Prong 2:
Next, the claim is analyzed to determine whether the judicial exception is integrated into a practical application. The claim recites the additional limitation of using one or more servers/processors, devices, and a computer network to perform the steps. The processor in these steps is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
[Step-2B]
Next, the claim is analyzed to determine whether there are additional claim limitations that, individually or as an ordered combination, ensure that the claim amounts to significantly more than the abstract idea (i.e., whether the claim provides an inventive concept).
As discussed above, the recitation of the claimed limitations amounts to mere instructions to implement the abstract idea on a processor (using the processor as a tool to implement the abstract idea). Taking the additional elements individually and in combination, the processor at each step of the process performs purely generic computer functions. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. The same analysis applies here: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application or provide an inventive concept.
Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. When viewed either individually, or as an ordered combination, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea itself. Therefore, the claim does not amount to significantly more than the recited abstract idea, and the claim is not patent eligible.
The analysis above applies to all statutory categories of invention, including independent claims 40 and 48.
Furthermore, dependent claims 33-39, 41-47, and 49-55 do not resolve the deficiencies identified in the independent claims.
Dependent claims 33-39, 41-47, and 49-55 are directed towards:
generating a new composite risk signal based on the at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message provided to the event hub as additional event data;
updating the one or more composite risk signals based on the at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message provided to the event hub as additional event data;
wherein the event data further includes third party data published to the event hub by a third-party provider;
wherein the event data is collected in the event hub by configuring one or more upstream systems to publish the event data directly to the event hub;
wherein the event data is collected in the event hub by implementing consumers that emit event data to the event hub by leveraging an existing repository in which the event data is already stored;
wherein the one or more business events include at least one of: a login to an online banking account, a login to a mobile banking app, a call to an automated interactive voice response system, a call to a customer care center, a demographic or account data change, an account lifecycle event, a device lifecycle event, a card lock status change, a new contribution to a hotfile, a new contribution to a shared database, or a new contribution from a consortium; and
wherein the one or more composite risk signals include at least one of: a recent call, a recent login, a recent device enrollment, a recent demographic change, a recent email risk elevation, a recent device risk elevation, a recent confirmed fraud, a recent beneficiary change, a recent high value transaction, a presence on an internal hotfile, or a presence on a national shared database.
These limitations are also part of the abstract idea identified in claim 32 and are rejected under the same rationale.
Accordingly, the dependent claims 33-39, 41-47 and 49-55 are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis.
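The business events and composite risk signals enumerated in the dependent claims can be sketched, again purely as a hypothetical illustration, as a simple rule mapping. The mapping below is the examiner's illustrative assumption, not claim language or a disclosure of record.

```python
# Hypothetical "internal logic rules" mapping a subset of the recited business
# events to the composite risk signals they might trigger or update.
# All names are illustrative labels only.
COMPOSITE_SIGNAL_RULES = {
    "customer_care_call": "recent_call",
    "online_banking_login": "recent_login",
    "device_lifecycle_event": "recent_device_enrollment",
    "demographic_change": "recent_demographic_change",
    "hotfile_contribution": "presence_on_internal_hotfile",
}

def update_composite_signals(current: set, business_events: list) -> set:
    # create or update composite risk signals from incoming business events
    triggered = {COMPOSITE_SIGNAL_RULES[e]
                 for e in business_events if e in COMPOSITE_SIGNAL_RULES}
    return current | triggered
```

As the sketch suggests, the dependent limitations enumerate categories of routine business events and risk observations; they do not add technical elements beyond the generic components already addressed above.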
The instant claims are rejected under 35 U.S.C. § 101 in view of the decision in Alice Corporation Pty. Ltd. v. CLS Bank International, et al. ("Alice Corp."), in which the Supreme Court unanimously held that the patent claims at issue were not patent-eligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of AIA 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103, as set forth in Graham v. John Deere Co., 383 U.S. 1 (1966), are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 32-55 are rejected under 35 U.S.C. 103 as being unpatentable over McKenna et al. (US 2019/0236695-A1) in view of BIALICK et al. (US 2023/0169494-A1).
Claims 1-31 (Cancelled).
Ref claim 32 (New), McKenna discloses a computer-implemented method comprising:
obtaining transaction data from a transaction onramp, wherein the transaction data was published to the transaction onramp by a transaction source; generating a combined standardized transaction data structure based on analysis of the transaction data (para [0024]; via a Risk-based fraud identification and analysis system…[0035], Fig.1, via a distributed system for fraud detection Ex. 100, a distributed system/ a borrower user device 110, a second dealer user device 112, a third lender user device 116 and a fraud detection computer system 120 …[0036], via received from other computer systems… [0038], via computers or devices on a network including cloud-based software services …) by:
parsing the transaction data to extract details of the transaction data; adding to the extracted details of the transaction data one or more supplemental data elements obtained from the transaction source to form a supplemental transaction data element; and transforming the supplemental transaction data element into the combined standardized transaction data structure to comply with a standard-based data structure (para [0035], Fig.1, via a distributed system for fraud detection Ex. 100, a distributed system/ a borrower user device 110, a second dealer user device 112, a third lender user device 116 and a fraud detection computer system 120 …[0036], via received from other computer systems… [0038], via computers or devices on a network including cloud-based software services …);
obtaining event data from an event hub, the event data including: the transaction data; one or more business events, which had been published to the event hub from an event source by an event emitter; one or more previously generated composite risk signals; and one or more previously generated detection events generated by a decision engine (para [0032]; via System may be identified from a borrower user perspective…identifying a signal of fraud or potential risk with data…may prevent future transaction [data] for fraud from occurring…to fix the fraudulent behavior….);
identifying one or more risk signals based on the event data; creating or updating one or more composite risk signals based on the event data and the one or more risk signals (para [0027]; via The second ML model…may indicate signals of fraud and/or predict the type fraud…) by:
[[ analyzing the event data using one or more internal logic rules to determine if the one or more composite risk signals need to be triggered or updated; ]]
storing the event data for enrichment of at least one of a party, an account, a device, or an entity; forming a plurality of discrete source signals based on the event data; and aggregating the plurality of discrete source signals to create or update the one or more composite risk signals (para [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, … [0051]; via The segmentation module 117 may determine to an application data received and store the updated application data in the profile data store 150 for further reprocessing…);
generating one or more additional risk signals by:
providing the one or more composite risk signals and the combined standardized transaction data to a machine learning model as input data; and using the machine learning model to compare the one or more composite risk signals and the combined standardized transaction data to produce one or more additional risk signals (para [0026]; via The system may implement multiple machine learning [ML] models The system may correlate the application data to a training data set or may apply the application data to a trained, first ML model …[0027]; via The output from the second ML model may indicate signals of fraud from the first ML model …);
transmitting the event data, the one or more risk signals, the one or more composite risk signals, and the one or more additional risk signals to a fraud application, the fraud application configured to make a judgement of the event data using one or more internalized logic rules by:
assigning a fraudulent transaction probability score, using a decision engine, wherein the decision engine assigns the fraudulent transaction probability score based on inputs including: the combined standardized transaction data; the event data; the one or more risk signals; the one or more composite risk signals; and the one or more additional risk signals (para [0025]; via the application scores/reason codes/to mitigate risk or identify fraud/application data…receiving additional data from a third-party entity…);
generating at least one of a detection event, a transaction alert, a case management message, or a regulatory filing message based on the fraudulent transaction probability score; and providing at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message to the event hub as additional event data (para [0035-36]; Fig. 1, via a distributed system for fraud detection…identifying fraud…, and a fraud detection computer system 120…, each of devices may transmit electronic messages via communication network…. [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, … [0051]; via The segmentation module 117 may determine to an application data received and store the updated application data in the profile data store 150 for further reprocessing…).
McKenna does not explicitly disclose the step of: analyzing the event data using one or more internal logic rules to determine if the one or more composite risk signals need to be triggered or updated.
However, BIALICK, being in the same field of invention, discloses the step of: analyzing the event data using one or more internal logic rules to determine if the one or more composite risk signals need to be triggered or updated (para [0041], Fig. 2; via a "rules set-up stage" prior to a transaction, the account holder may define one or more rules… The API gateway 204/a rules engine 210 executed on server 200… [0067]; via improved rule processing in contrast to existing methods of applying logic to a data document).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features mentioned by McKenna to include the disclosures taught by BIALICK in order to facilitate Monitoring Risk Signals of E-Transaction Fraud.
Ref claim 33 (New), McKenna discloses the method of claim 32, further comprising generating a new composite risk signal based on the at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message provided to the event hub as additional event data (para [0035-36]; Fig. 1, via a distributed system for fraud detection…identifying fraud…, and a fraud detection computer system 120…, each of devices may transmit electronic messages via communication network…. [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, …).
Ref claim 34 (New), McKenna discloses the method of claim 32, further comprising updating the one or more composite risk signals based on the at least one of the detection event, the transaction alert, the case management message, or the regulatory filing message provided to the event hub as additional event data (para [0035-36]; Fig. 1, via a distributed system for fraud detection…identifying fraud…, and a fraud detection computer system 120…, each of devices may transmit electronic messages via communication network…. [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, …).
Ref claim 35 (New), McKenna discloses the method of claim 32, wherein the event data further includes third party data published to the event hub by a third-party provider (para [0025]; via the application scores/reason codes/to mitigate risk or identify fraud/application data…receiving additional data from a third-party entity…).
Ref claim 36 (New), McKenna discloses the method of claim 32, wherein the event data is collected in the event hub by configuring one or more upstream systems to publish the event data directly to the event hub (para [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, … [0051]; via The segmentation module 117 may determine to an application data received and store the updated application data in the profile data store 150 for further reprocessing…).
Ref claim 37 (New), McKenna discloses the method of claim 32, wherein the event data is collected in the event hub by implementing consumers that emit event data to the event hub by leveraging an existing repository in which the event data is already stored (para [0032]; via System may be identified from a borrower user perspective…identifying a signal of fraud or potential risk with data…may prevent future transaction [data] for fraud from occurring…to fix the fraudulent behavior….).
Ref claim 38 (New), McKenna discloses the method of claim 32, wherein the one or more business events include at least one of: a login to an online banking account, a login to a mobile banking app, a call to an automated interactive voice response system, a call to a customer care center, a demographic or account data change, an account lifecycle event, a device lifecycle event, a card lock status change, a new contribution to a hotfile, a new contribution to a shared database, or a new contribution from a consortium (para [0032]; via System may be identified from a borrower user perspective…identifying a signal of fraud or potential risk with data…may prevent future transaction [data] for fraud from occurring…to fix the fraudulent behavior….).
Ref claim 39 (New), McKenna discloses the method of claim 32, wherein the one or more composite risk signals include at least one of: a recent call, a recent login, a recent device enrollment, a recent demographic change, a recent email risk elevation, a recent device risk elevation, a recent confirmed fraud, a recent beneficiary change, a recent high value transaction, a presence on an internal hotfile, or a presence on a national shared database (para [0047]; via The application module 113 to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120 details with Fig. 3, … [0051]; via The segmentation module 117 may determine to an application data received and store the updated application data in the profile data store 150 for further reprocessing…).
Claim 40 recites limitations similar to those of claim 32 and is thus rejected using the same art and rationale as set forth above in the rejection of claim 32.
Claims 41-47 are rejected for the reasons set forth for claims 33-39, respectively.
Claim 48 recites limitations similar to those of claim 32 and is thus rejected using the same art and rationale as set forth above in the rejection of claim 32.
Claims 49-55 are rejected for the reasons set forth for claims 33-39, respectively.
CONCLUSION
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Abadi et al (US 2023/0098204-A1) discloses Supervised Machine Learning for distinguishing between Risky and Legitimate Actions in Transactions.
Albright et al (US 2020/0211021-A1) discloses System and Methods for Early Detection of Network Fraud Events.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HATEM M. ALI whose telephone number is (571) 270-3021, E-mail: Hatem.Ali@USPTO.Gov and FAX (571)270-4021. The examiner can normally be reached Monday-Friday from 8:00 AM to 6:00 PM ET.
Examiner interviews are available via telephone, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABHISHEK VYAS can be reached on (571) 270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HATEM M ALI/
Examiner, Art Unit 3691