Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The following Non-Final Office action is in response to the application filed on 12/20/2024.
Priority: This application is a continuation (CON) of Application No. 18/609,691, filed 09/19/2024, and claims priority (per the amended specification) to a provisional application filed 05/02/2023.
Claim Status:
Canceled claims: 1-31
Pending new claims: 32-55
Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 32-55 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
In particular, the claims are directed to a judicial exception (an abstract idea) without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, (Step-1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. (Step-2A) If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea), and if so, (Step-2B) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself.
Examples of the abstract idea groupings include: (a) mental processes; (b) certain methods of organizing human activity [i. fundamental economic practices; ii. commercial or legal interactions; iii. managing personal behavior or relationships between people]; and (c) mathematical relationships/formulas. Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014).
The analysis is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG); see MPEP §§ 2106.04(II), 2106.04(d), and 2106.05(a), (b), (c), and (e).
[Step-1] The claims are directed to a method/system/machine, which fall within the statutory categories of invention.
Claim 32 (exemplary) recites a series of steps for monitoring, identifying, and accelerating fraud risk signal analysis within a trading system.
[Step-2A]-Prong 1: Claim 32 is then analyzed to determine whether it is directed to a judicial exception.
The claim recites the limitations of:
collecting event data in an event hub with a plurality of microservices; and updating a composite risk signal by:
attempting to process the event data for an individual event with a first microservice of the plurality of microservices;
evaluating the health of the first microservice of the plurality of microservices;
determining that the data should not be distributed to the first microservice of the plurality of microservices;
processing the event data for the individual event with a second microservice of the plurality of microservices instead of the first microservice by:
analyzing the event data using one or more internal logic rules to determine if the composite risk signal needs to be updated;
forming a risk signal based on the event data; and
transmitting the risk signal to a composite risk profile;
generating one or more additional risk signals using machine learning based on the composite risk signal, and the event data by:
providing the risk signal and the composite risk signal to a machine learning model as input data;
training the machine learning model with training data, wherein the training data includes the event data, one or more risk factors, and one or more series of event data or risk factors;
finding patterns in the training data; and
generating the one or more additional risk signals based on the patterns in the training data and the input data and using the machine learning model; and
aggregating the risk signal, the one or more additional risk signals, and the composite risk signal to update the composite risk signal.
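For illustration only (not part of the claim record), the recited failover and aggregation steps could be sketched as follows; the class and function names, the boolean health check, and the averaging-based aggregation are all hypothetical assumptions chosen to make the sketch concrete:

```python
# Purely illustrative sketch of the recited steps; all names are hypothetical.

class Microservice:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def is_healthy(self):
        # Stand-in health evaluation; the claim leaves the mechanism open.
        return self.healthy

    def process(self, event):
        # Form a risk signal from the event data (the "internal logic
        # rules" are abstracted to a single score here).
        return {"source": self.name, "score": event.get("risk", 0.0)}

def update_composite(event, services, composite):
    # Attempt the first microservice; if it is unhealthy, route the event
    # to the second microservice instead (the recited failover step).
    first, second = services[0], services[1]
    worker = first if first.is_healthy() else second
    signal = worker.process(event)
    # Aggregate the new signal into the composite risk signal
    # (simple averaging chosen purely for illustration).
    composite["score"] = (composite["score"] + signal["score"]) / 2
    return composite

composite = {"score": 0.4}
services = [Microservice("svc-a", healthy=False), Microservice("svc-b")]
updated = update_composite({"risk": 0.8}, services, composite)
print(round(updated["score"], 2))  # prints 0.6
```

The sketch intentionally omits the machine-learning signal generation; it shows only how event routing and composite-signal updating could interrelate.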
The claimed method/system/machine simply describes a series of steps for monitoring, identifying, and accelerating fraud risk signal analysis within a trading system.
These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations as human commercial, business, or transactional activities/interactions, but for the recitation of generic computer components. That is, other than reciting one or more servers/processors, devices, and a computer network, nothing in the claim precludes the limitations from practically being performed as organized human business activity. For example, without the structural element language, the claim encompasses activities that can be performed manually between the users and a third party. These limitations are directed to an abstract idea because they describe business interaction/sales activity that falls within the enumerated "certain methods of organizing human activity" grouping in the 2019 PEG.
[Step-2A]-Prong 2:
Next, the claim is analyzed to determine whether the judicial exception is integrated into a practical application. The claim recites the additional limitation of using one or more servers/processors, devices, and a computer network to perform the steps. The processor in these steps is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing data. This generic processor limitation amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is therefore directed to the abstract idea.
[Step-2B]
Next, the claim is analyzed to determine whether there are additional claim limitations that, individually or as an ordered combination, ensure that the claim amounts to significantly more than the abstract idea (i.e., whether the claim provides an inventive concept).
As discussed above, the recitation of the claimed limitations amounts to mere instructions to implement the abstract idea on a processor (using the processor as a tool to implement the abstract idea). Taking the additional elements individually and in combination, the processor at each step of the process performs purely generic computer functions. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. The same analysis applies here: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application or provide an inventive concept.
Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. When viewed either individually, or as an ordered combination, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea itself. Therefore, the claim does not amount to significantly more than the recited abstract idea, and the claim is not patent eligible.
The analysis above applies to all statutory categories of invention, including independent claims 40 and 48.
Furthermore, the dependent claims 33-39, 41-47, and 49-55 do not resolve the issues raised in the independent claims.
The dependent claims 33-39, 41-47, and 49-55 are directed towards:
(Claims 33-39, 41-47, and 49-55) producing an alert that the first microservice may require maintenance; evaluating the health of the first microservice of the plurality of microservices by sending the first microservice of the plurality of microservices a communication and not receiving a response from the first microservice; sending the first microservice of the plurality of microservices a communication and not receiving a response for longer than a predetermined period of time; collecting the event data in the event hub by configuring one or more upstream systems to publish the event data directly to the event hub; collecting the event data in the event hub by implementing consumers that emit event data to the event hub by leveraging an existing repository in which the event data is already stored; wherein the event data includes one or more business events received from an event source; and wherein the one or more business events include at least one of: a login to an online banking account, a login to a mobile banking app, a call to an automated interactive voice response system, a call to a customer care center, a demographic or account data change, an account lifecycle event, a device lifecycle event, a card lock status change, a new contribution to a hotfile, a new contribution to a shared database, or a new contribution from a consortium.
These limitations are also part of the abstract idea identified in claim 32 and are similarly rejected under the same rationale.
Accordingly, the dependent claims 33-39, 41-47 and 49-55 are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis.
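By way of illustration only, the timeout-based health evaluation recited in dependent claims 34-35 above (sending a communication and not receiving a response within a predetermined period) could be sketched as follows; the simulated `ping` function and the one-second threshold are hypothetical assumptions, not taken from the record:

```python
# Hypothetical timeout-based health check: send a communication to the
# microservice and treat a missing or late response as unhealthy.

def ping(service_name, respond_after):
    """Simulated communication; returns the response delay in seconds,
    or None when no response is received at all."""
    return respond_after

def is_healthy(service_name, respond_after, timeout=1.0):
    # No response, or a response slower than the predetermined period,
    # marks the first microservice as a candidate for failover.
    delay = ping(service_name, respond_after)
    return delay is not None and delay <= timeout

print(is_healthy("svc-a", 0.2))   # prints True: response within the period
print(is_healthy("svc-a", 5.0))   # prints False: response too slow
print(is_healthy("svc-a", None))  # prints False: no response received
```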
The instant claims are rejected under 35 U.S.C. 101 in view of the decision in Alice Corp. Pty. Ltd. v. CLS Bank International, in which the Supreme Court unanimously held that the patent claims at issue were not patent-eligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of AIA 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 32-55 are rejected under 35 U.S.C. 103 as being unpatentable over McKenna et al. (US 2019/0236695 A1) in view of BIALICK et al. (US 2023/0169494 A1).
Claims 1–31 (Cancelled).
Ref claim 32 (New), McKenna discloses a computer-implemented method comprising:
collecting event data in an event hub with a plurality of microservices; and updating a composite risk signal (para [0035], Fig.1, via a distributed system for fraud detection…Ex. 100, a distributed system/ a borrower user device 110, a second dealer user device 112, a third lender user device 116 and a fraud detection computer system 120…[0036], via received from other computer systems…[0038], via …computers or devices on a network….including cloud-based software services …) by:
attempting to process the event data for an individual event with a first microservice of the plurality of microservices; evaluating the health of the first microservice of the plurality of microservices; determining that the data should not be distributed to the first microservice of the plurality of microservices (para [0182], via Security group/IP list 510…to ensure that the user device has permission to utilize services described herein…);
processing the event data for the individual event with a second microservice of the plurality of microservices instead of the first microservice by:
[[analyzing the event data using one or more internal logic rules to determine if the composite risk signal needs to be updated;]]
forming a risk signal based on the event data; and transmitting the risk signal to a composite risk profile (para [0051]; via The segmentation module 117 may determine…to an application data received…and store the updated application data in the profile data store 150 for further reprocessing…);
generating one or more additional risk signals using machine learning based on the composite risk signal, and the event data by:
providing the risk signal and the composite risk signal to a machine learning model as input data; training the machine learning model with training data, wherein the training data includes the event data, one or more risk factors, and one or more series of event data or risk factors; finding patterns in the training data (para [0026]; via The system may implement multiple machine learning [ML] models… The system may correlate the application data to a training data set or may apply the application data to a trained, first ML model…[0027]; via … The output from the second ML model may indicate signals of fraud… from the first ML model …); and
generating the one or more additional risk signals based on the patterns in the training data and the input data and using the machine learning model; and aggregating the risk signal, the one or more additional risk signals, and the composite risk signal to update the composite risk signal (para [0026]; via The system may implement multiple machine learning [ML] models… The system may correlate the application data to a training data set or may apply the application data to a trained, first ML model…[0027]; via … The output from the second ML model may indicate signals of fraud… from the first ML model …[0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …[0051]; via, The segmentation module 117 may determine…to an application data received…and store the updated application data in the profile data store 150 for further reprocessing…).
McKenna does not explicitly disclose the step of: analyzing the event data using one or more internal logic rules to determine if the composite risk signal needs to be updated.
However, BIALICK being in the same field of invention discloses the step of analyzing the event data using one or more internal logic rules to determine if the composite risk signal needs to be updated (para [0041], Fig. 2; via a “rules set-up stage” prior to a transaction, the account holder may define one or more rules…. The API gateway 204/a rules engine 210 executed on server 200 …[0067]; via …improve rule processing…in contrast to existing methods of applying logic to a data document…).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features taught by McKenna to include the disclosures taught by BIALICK in order to facilitate applying logic rules to update risk signals.
Ref claim 33 (New), McKenna discloses the method of claim 32, further comprising producing an alert that the first microservice may require maintenance (para [0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …[0051]; via, The segmentation module 117 may determine…to an application data received…and store the updated application data in the profile data store 150 for further reprocessing…).
Ref claim 34 (New), McKenna discloses the method of claim 32, wherein evaluating the health of the first microservice of the plurality of microservices includes sending the first microservice of the plurality of microservices a communication and not receiving a response from the first microservice (para [0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …[0051]; via, The segmentation module 117 may determine…to an application data received…and store the updated application data in the profile data store 150 for further reprocessing…).
Ref claim 35 (New), McKenna discloses the method of claim 32, wherein evaluating the health of the first microservice includes sending the first microservice of the plurality of microservices a communication and not receiving a response for longer than a predetermined period of time (para [0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …[0051]; via, The segmentation module 117 may determine…to an application data received…and store the updated application data in the profile data store 150 for further reprocessing…).
Ref claim 36 (New), McKenna discloses the method of claim 32, wherein the event data is collected in the event hub by configuring one or more upstream systems to publish the event data directly to the event hub (para [0026]; via The system may implement multiple machine learning [ML] models… The system may correlate the application data to a training data set or may apply the application data to a trained, first ML model…[0027]; via … The output from the second ML model may indicate signals of fraud… from the first ML model …[0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …).
Ref claim 37 (New), McKenna discloses the method of claim 32, wherein the event data is collected in the event hub by implementing consumers that emit event data to the event hub by leveraging an existing repository in which the event data is already stored (para [0026]; via The system may implement multiple machine learning [ML] models… The system may correlate the application data to a training data set or may apply the application data to a trained, first ML model…[0027]; via … The output from the second ML model may indicate signals of fraud… from the first ML model …[0047]; via The application module 113,… to transmit application data to the fraud detection computer system 120/encoded in electronic message and transmitted via a network to API with the fraud detection computer system 120. …details with Fig. 3 …).
Ref claim 38 (New), McKenna discloses the method of claim 32, wherein the event data includes one or more business events received from an event source (para [0035], Fig.1, via a distributed system for fraud detection…Ex. 100, a distributed system/ a borrower user device 110, a second dealer user device 112, a third lender user device 116 and a fraud detection computer system 120…[0036], via received from other computer systems…).
Ref claim 39 (New), McKenna discloses the method of claim 38, wherein the one or more business events include at least one of: a login to an online banking account, a login to a mobile banking app, a call to an automated interactive voice response system, a call to a customer care center, a demographic or account data change, an account lifecycle event, a device lifecycle event, a card lock status change, a new contribution to a hotfile, a new contribution to a shared database, or a new contribution from a consortium (para [0035], Fig.1, via a distributed system for fraud detection…Ex. 100, a distributed system/ a borrower user device 110, a second dealer user device 112, a third lender user device 116 and a fraud detection computer system 120…[0036], via received from other computer systems…).
Claim 40 recites similar limitations to claim 32 and is thus rejected using the same art and rationale set forth in the rejection of claim 32 above.
Claims 41-47 are rejected as per the reasons set forth in claims 32-39 respectively.
Claim 48 recites similar limitations to claim 32 and is thus rejected using the same art and rationale set forth in the rejection of claim 32 above.
Claims 49-55 are rejected as per the reasons set forth in claims 32-39 respectively.
CONCLUSION
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Abadi et al. (US 2023/0098204 A1) discloses supervised machine learning for distinguishing between risky and legitimate actions in transactions.
Albright et al. (US 2020/0211021 A1) discloses systems and methods for early detection of network fraud events.
AMBUKKARASU et al. (US 2019/0026716 A1) discloses systems and methods for providing services to smart devices connected in an IoT platform.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HATEM M. ALI whose telephone number is (571) 270-3021, E-mail: Hatem.Ali@USPTO.Gov and FAX (571)270-4021. The examiner can normally be reached Monday-Friday from 8:00 AM to 6:00 PM ET.
Examiner interviews are available via telephone, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABHISHEK VYAS can be reached on (571) 270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HATEM M ALI/
Examiner, Art Unit 3691