Prosecution Insights
Last updated: April 19, 2026
Application No. 18/989,434

SYSTEMS AND METHODS FOR CONTINUOUS EVENT MONITORING, IDENTIFICATION OF RISK SIGNALS, AND ACCELERATION OF FRAUD RISK ANALYSIS

Non-Final OA — §101, §103
Filed: Dec 20, 2024
Examiner: ALI, HATEM M
Art Unit: 3691
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The PNC Financial Services Group, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 44% (244 granted / 548 resolved; -7.5% vs TC avg)
Interview Lift: +25.9% (strong; measured on resolved cases with an interview)
Avg Prosecution: 4y 5m (55 applications currently pending)
Career History: 603 total applications across all art units

Statute-Specific Performance

§101: 29.7% (-10.3% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 548 resolved cases.
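As a quick sanity check on the table above, the Tech Center baseline implied by each delta can be recovered by subtraction (examiner rate minus delta). This is an illustrative sketch, not part of the report; the helper name is hypothetical.

```python
# Recover the implied Tech Center (TC) average for each statute from the
# examiner's allowance-after-rejection rate and the reported delta vs. TC avg.
# Figures are taken from the table above.

def implied_tc_average(examiner_rate: float, delta_vs_tc: float) -> float:
    """delta = examiner_rate - TC average, so TC average = rate - delta."""
    return round(examiner_rate - delta_vs_tc, 1)

stats = {
    "101": (29.7, -10.3),  # (examiner %, delta vs TC avg in points)
    "103": (48.5, +8.5),
    "102": (2.8, -37.2),
    "112": (9.3, -30.7),
}

for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC avg = {implied_tc_average(rate, delta)}%")
```

Every statute resolves to the same ~40% baseline, which suggests the dashboard measures all four deltas against a single Tech-Center-level reference rate rather than per-statute averages.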

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The following Non-Final office action is in response to the application filed on 12/20/2024.

Priority Date: CON of Appl. No. 18/609,691 (03/19/2024); claimed Prov. (03/02/2023).

Claim Status:
Canceled claims: 1-31
Pending claims: 32-52

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 32-52 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. In particular, the claims are directed to a judicial exception (an abstract idea) without significantly more.

When considering subject matter eligibility under 35 U.S.C. 101, (Step 1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. (Step 2A) If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea), and if so, (Step 2B) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself.

The enumerated groupings of abstract ideas include: (a) mental processes; (b) certain methods of organizing human activity [i. fundamental economic practices; ii. commercial or legal interactions; iii. managing personal behavior or relationships between people]; and (c) mathematical relationships/formulas. Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014). The analysis is based on the 2019 Revised Patent Eligibility Guidance (2019 PEG) (see MPEP § 2106.04(II), § 2106.04(d), and § 2106.05(a), (b), (c), (e)…).

[Step 1] The claims are directed to a method/system/machine, which are statutory categories of invention. Claim 32 (exemplary) recites a series of steps for monitoring risk signals associated with business transaction fraud.

[Step 2A] Prong 1: Claim 32 is then analyzed to determine whether it is directed to a judicial exception. Claim 32 recites the limitations of: collecting event data in an event hub with a plurality of microservices; interpreting the event data to acquire one or more risk signals by: processing the event data from at least one microservice of the plurality of microservices; and identifying the one or more risk signals based on the event data; transmitting the event data to a composite risk profile, the composite risk profile configured to: analyze the event data to create one or more risk signals; compile the one or more risk signals into a single composite risk signal; and generate a composite risk score based on the event data; obtaining one or more additional risk signals from a machine learning model by: obtaining historical transaction data from a historical transaction log; training the machine learning model using the historical transaction data; providing the risk signals, the single composite risk signal, and the event data as inputs to the machine learning model; and receiving the one or more additional risk signals generated by the machine learning model; and providing the composite risk score, the risk signals, the single composite risk signal, and the one or more additional risk signals to a decision engine, the decision engine configured to: evaluate the composite risk score using internalized business logic; and output a decision based on the evaluation of the composite risk score.

The claimed method/system/machine simply describes a series of steps for monitoring risk signals associated with business transaction fraud. These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations via human commercial, business, or transactional activities/interactions, but for the recitation of generic computer components. That is, other than reciting one or more servers/processors, devices, and a computer network, nothing in the claim precludes the limitations from practically being performed by organizing human business activity. For example, without the structural-element language, the claim encompasses activities that can be performed manually between the users and a third party. These limitations are directed to an abstract idea because they are business interaction/sales activity that falls within the enumerated grouping of "certain methods of organizing human activity" in the 2019 PEG.

[Step 2A] Prong 2: Next, the claim is analyzed to determine whether it is integrated into a practical application. The claim recites the additional limitation of using one or more servers/processors, devices, and a computer network to perform the steps. The processor in the steps is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

[Step 2B] Next, the claim is analyzed to determine if there are additional claim limitations that, individually or as an ordered combination, ensure that the claim amounts to significantly more than the abstract idea (whether the claim provides an inventive concept). As discussed above, the recitation of the claimed limitations amounts to mere instructions to implement the abstract idea on a processor (using the processor as a tool to implement the abstract idea). Taking the additional elements individually and in combination, the processor at each step of the process performs purely generic computer functions. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. The same analysis applies here, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application or provide an inventive concept. Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. When viewed either individually or as an ordered combination, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea itself. Therefore, the claim does not amount to significantly more than the recited abstract idea, and the claim is not patent eligible.

The analysis above applies to all statutory categories of invention, including independent claims 39 and 46. Furthermore, the dependent claims 33-38, 40-45, and 47-52 do not resolve the issues raised in the independent claims.
The dependent claims 33-38, 40-45, and 47-52 are directed towards: the composite risk profile being further configured to store the one or more risk signals for a specified party, account, device, or entity, and update the single composite risk signal based on analysis of further event data; the event data further including: a login to an online banking account, a login to a mobile banking app …; wherein the one or more risk signals include at least one of: a recent call, a recent login, a recent device enrollment, a recent demographic change, a recent email risk elevation, a recent device risk elevation, a recent confirmed fraud, a recent beneficiary change, a recent high value transaction, a presence on an internal hotfile, or a presence on a national shared database; wherein the decision engine is further configured to determine whether fraud is occurring; and wherein the historical transaction log includes: event data, one or more risk factors, one or more previous fraud detection determinations, and third-party data. These limitations are also part of the abstract idea identified in claim 32 and are similarly rejected under the same rationale. Accordingly, the dependent claims 33-38, 40-45, and 47-52 are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis. The instant claims are rejected under 35 U.S.C. 101 in view of the decision in Alice Corp. Pty. Ltd. v. CLS Bank International, in which the Supreme Court unanimously held that the patent claims at issue are not patent-eligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of AIA 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 32-52 are rejected under 35 U.S.C. 103 as being unpatentable over McKenna et al. (US 2019/0236695-A1) in view of Abadi et al. (US 2023/0098204-A1).

Claims 1-31 (Cancelled).

Ref claim 32 (New), McKenna discloses a computer-implemented method comprising: collecting event data in an event hub with a plurality of microservices; interpreting the event data to acquire one or more risk signals (para [0035], Fig. 1, via a distributed system for fraud detection Ex. 100, a distributed system / a borrower user device 110, a second dealer user device 112, a third lender user device 116, and a fraud detection computer system 120 … [0036], via received from other computer systems … [0038], via computers or devices on a network including cloud-based software services …) by: processing the event data from at least one microservice of the plurality of microservices; and identifying the one or more risk signals based on the event data; transmitting the event data to a composite risk profile, the composite risk profile configured to: [[analyze the event data to create one or more risk signals; compile the one or more risk signals into a single composite risk signal; and generate a composite risk score based on the event data]]; obtaining one or more additional risk signals from a machine learning model by: obtaining historical transaction data from a historical transaction log; training the machine learning model using the historical transaction data (para [0026]; via the system may implement multiple machine learning [ML] models…, the system may correlate the application data to a training data set or may apply the application data to a trained, first ML model… [0027]; via the output from the second ML model may indicate signals of fraud from the first ML model …); providing the risk signals, the single composite risk signal, and the event data as inputs to the machine learning model; and receiving the one or more additional risk signals generated by the machine learning model (para [0047]; via the application module 113 to transmit application data to the fraud detection computer system 120 / encoded in an electronic message and transmitted via a network to an API with the fraud detection computer system 120, details with Fig. 3 … [0051]; via the segmentation module 117 may determine to an application data… received and store the updated application data in the profile data store 150 for further reprocessing…); and providing the composite risk score, the risk signals, the single composite risk signal, and the one or more additional risk signals to a decision engine, the decision engine configured to: evaluate the composite risk score using internalized business logic; and output a decision based on the evaluation of the composite risk score (para [0051]; via the segmentation module 117 may determine to an application data… received and store the updated application data in the profile data store 150 for further reprocessing… [0062]; the application engine 136 to store historical application data for lender/borrower users / profile data store 150… [0114]; via the fraud detection computer system 120 may apply various ML models and embodiments of the disclosure. The models may correspond with linear or non-linear functions. For example, a ML model may comprise a supervised learning algorithm including a decision tree that accepts the one or more input features associated with the application to provide the score… [0182], via Security group/IP list 510 to ensure that the user device has permission to utilize services described herein…).

McKenna does not explicitly disclose the step to: analyze the event data to create one or more risk signals; compile the one or more risk signals into a single composite risk signal; and generate a composite risk score based on the event data.
However, Abadi, being in the same field of invention, discloses the step to: analyze the event data to create one or more risk signals; compile the one or more risk signals into a single composite risk signal; and generate a composite risk score based on the event data (Abst.; via risky actions versus non-risky actions in a transaction are identified and a fraud score associated with probabilities of the risky actions is updated accordingly for purposes of determining whether the transaction is likely or not likely to be associated with fraud…; … in the transaction to calculate probabilities of risky actions taken and output a risk or fraud score based thereon… [0011], Fig. 1; a system 100 for risky or legitimate transactions… [0017], via system 100… a fraud score is updated based on the calculated probabilities (again, the fraud score is higher for lower calculated probabilities and lower for higher calculated probabilities)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features mentioned by McKenna to include the disclosures as taught by Abadi to facilitate generating a risk score from event data.

Ref claim 33 (New), McKenna discloses the method of claim 32, wherein the composite risk profile is further configured to store the one or more risk signals for a specified party, account, device, or entity (para [0062]; the application engine 136 to store historical application data for lender/borrower users / profile data store 150… [0114]; via the fraud detection computer system 120 may apply various ML models and embodiments of the disclosure…).
Ref claim 34 (New), McKenna discloses the method of claim 33, wherein the composite risk profile is further configured to update the single composite risk signal based on analysis of further event data (para [0062]; the application engine 136 to store historical application data for lender/borrower users / profile data store 150… [0114]; via the fraud detection computer system 120 may apply various ML models and embodiments of the disclosure. The models may correspond with linear or non-linear functions. For example, a ML model may comprise a supervised learning algorithm including a decision tree that accepts the one or more input features associated with the application to provide the score…).

Ref claim 35 (New), McKenna discloses the method of claim 32, wherein the event data further includes: a login to an online banking account, a login to a mobile banking app, a call to an automated interactive voice response system, a call to a customer care center, a demographic or account data change, an account lifecycle event, a device lifecycle event, a card lock status change, a new contribution to a hotfile, a new contribution to a shared database, or a new contribution from a consortium (para [0047]; via the application module 113 to transmit application data to the fraud detection computer system 120 / encoded in an electronic message and transmitted via a network to an API with the fraud detection computer system 120, details with Fig. 3 … [0051]; via the segmentation module 117 may determine to an application data… received and store the updated application data in the profile data store 150 for further reprocessing…).
Ref claim 36 (New), McKenna discloses the method of claim 32, wherein the one or more risk signals include at least one of: a recent call, a recent login, a recent device enrollment, a recent demographic change, a recent email risk elevation, a recent device risk elevation, a recent confirmed fraud, a recent beneficiary change, a recent high value transaction, a presence on an internal hotfile, or a presence on a national shared database (para [0047]; via the application module 113 to transmit application data to the fraud detection computer system 120 / encoded in an electronic message and transmitted via a network to an API with the fraud detection computer system 120, details with Fig. 3 … [0051]; via the segmentation module 117 may determine to an application data… received and store the updated application data in the profile data store 150 for further reprocessing…).

Ref claim 37 (New), McKenna discloses the method of claim 32, wherein the decision engine is further configured to determine whether fraud is occurring (para [0182], via Security group/IP list 510 to ensure that the user device has permission to utilize services described herein…).

Ref claim 38 (New), McKenna discloses the method of claim 32, wherein the historical transaction log includes: event data, one or more risk factors, one or more previous fraud detection determinations, and third-party data (para [0062]; the application engine 136 to store historical application data for lender/borrower users / profile data store 150… [0114]; via the fraud detection computer system 120 may apply various ML models and embodiments of the disclosure. The models may correspond with linear or non-linear functions. For example, a ML model may comprise a supervised learning algorithm including a decision tree that accepts the one or more input features associated with the application to provide the score…).
Claim 39 recites similar limitations to claim 32 and is thus rejected using the same art and rationale set forth in the rejection of claim 32 above. Claims 40-45 are rejected for the reasons set forth in claims 33-38, respectively. Claim 46 recites similar limitations to claim 32 and is thus rejected using the same art and rationale set forth in the rejection of claim 32 above. Claims 47-52 are rejected for the reasons set forth in claims 33-38, respectively.

CONCLUSION

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: BIALICK et al. (US 2023/0169494-A1) discloses a System and Method for Application of Smart Rules to Data Transactions. Albright et al. (US 2020/0211021-A1) discloses Systems and Methods for Early Detection of Network Fraud Events. AMBUKKARASU et al. (US 2019/0026716-A1) discloses Systems and Methods for Providing Services to Smart Devices Connected in an IoT Platform.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HATEM M. ALI, whose telephone number is (571) 270-3021, e-mail Hatem.Ali@USPTO.Gov, and fax (571) 270-4021. The examiner can normally be reached Monday-Friday from 8:00 AM to 6:00 PM ET. Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ABHISHEK VYAS, can be reached at (571) 270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HATEM M ALI/
Examiner, Art Unit 3691
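For orientation, the pipeline recited in claim 32 (event hub, risk-signal interpretation, composite risk profile, decision engine) can be sketched in a few lines. This is purely an illustrative sketch of the claimed data flow; none of the names, thresholds, or logic below come from the application or the cited references.

```python
# Illustrative sketch of the claim 32 pipeline. All identifiers and
# thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CompositeRiskProfile:
    signals: list = field(default_factory=list)

    def analyze(self, event: dict) -> None:
        # "analyze the event data to create one or more risk signals"
        if event.get("recent_confirmed_fraud"):
            self.signals.append("recent_confirmed_fraud")
        if event.get("amount", 0) > 10_000:
            self.signals.append("recent_high_value_transaction")

    def composite_score(self) -> float:
        # "compile the one or more risk signals into a single composite
        # risk signal; and generate a composite risk score"
        return len(self.signals) / 10.0

def decision_engine(score: float, threshold: float = 0.2) -> str:
    # "evaluate the composite risk score using internalized business
    # logic; and output a decision"
    return "review" if score >= threshold else "approve"

profile = CompositeRiskProfile()
profile.analyze({"amount": 25_000, "recent_confirmed_fraud": False})
print(decision_engine(profile.composite_score()))  # one signal -> 0.1 -> approve
```

The ML-model step of the claim (training on a historical transaction log and emitting additional risk signals) is omitted here; the sketch only shows the signal-compilation and decisioning path that the §103 rejection maps to Abadi.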

Prosecution Timeline

Dec 20, 2024: Application Filed
Feb 04, 2026: Non-Final Rejection — §101, §103
Mar 26, 2026: Interview Requested
Apr 01, 2026: Examiner Interview Summary
Apr 01, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591869: BIFURCATED PROCESSING. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12518316: System, Method, and Computer Program Product for Network Anomaly Detection. Granted Jan 06, 2026 (2y 5m to grant).
Patent 12400259: SYSTEMS AND METHODS OF REPRESENTING AND EXECUTING GRID RULES AS DATA MODELS. Granted Aug 26, 2025 (2y 5m to grant).
Patent 12400195: SYSTEM AND METHOD FOR TRANSACTION SETTLEMENT. Granted Aug 26, 2025 (2y 5m to grant).
Patent 12380425: INCREASING ACCURACY OF RFID-TAG TRANSACTIONS. Granted Aug 05, 2025 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 70% (+25.9%)
Median Time to Grant: 4y 5m
PTA Risk: Low

Based on 548 resolved cases by this examiner. Grant probability derived from career allow rate.
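The headline projections follow directly from the career figures above. A minimal sketch of that arithmetic (the tool's exact model is not published, so this only reproduces the stated relationship between allow rate and interview lift):

```python
# Reproduce the dashboard's headline numbers from the career record above.
granted, resolved = 244, 548   # examiner's career record
interview_lift = 25.9          # percentage points, as reported

allow_rate = 100 * granted / resolved          # ~44.5; dashboard rounds to 44%
with_interview = allow_rate + interview_lift   # ~70.4; dashboard rounds to 70%

print(f"grant probability: {allow_rate:.1f}%")
print(f"with interview:    {with_interview:.1f}%")
```

244/548 works out to 44.5%, which the dashboard displays as 44%; adding the +25.9-point interview lift gives roughly the 70% with-interview figure shown.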
