DETAILED ACTION
Acknowledgements
This office action is in response to the claims filed 09/30/2025.
Claims 1-10 are cancelled.
Claims 11 and 21 are amended.
Claims 11-30 are pending.
Claims 11-30 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/30/2025 have been fully considered but they are not persuasive.
101
Applicant argues that the limitation “in response to the first determination being positive to indicate the fraud attack with respect to the one or more predetermined time periods, blocking the one or more real-time transactions”… provides for implementation analogous to the aforementioned Guidance affording integration… and requests that the rejection be withdrawn. Examiner disagrees.
Applicant’s limitation recites, in response to finding there is a fraud attack, blocking a transaction happening in real time. This is yet another automated abstract idea. A merchant has for years had the capability, for example, to block a transaction in real time in response to a fraud determination. The rejection is maintained.
112
Due to Applicant’s amendments, prior 112(a) rejections are withdrawn.
Applicant writes “As to assertion of indefiniteness regarding assertion of 112(f) interpretation, the disclosed modeling is supported at pages 6-7 via incorporation of all appropriate software and/or hardware, and whereas, at pages 6-7, residency on a server is provided. Accordingly, the rejection flowing from asserted non-structural interpretation, is traversed and respectfully requested to be withdrawn. As to further assertion of indefiniteness, the Office Action further provides the following statements below, which are addressed in turn.” This is not sufficient evidence of corresponding structure, material, or acts for performing the entire claimed function, nor does it clearly link the structure, material, or acts to the function. Applicant has claimed the functions of a machine learning model without reciting corresponding structure, material, or acts for performing the entire claimed function. The rejection is maintained.
Applicant argues “The rejection is assumed to be based on doubt as to performance by computing or by an entity. Such doubt is resolved by the recitation itself, with explicit determination for feedback by the entity being explicitly separated from all other aspects of the performed computing.” First, the lack of clarity arises from the claim reciting software, not a computer, performing some of the steps; from recitations of an “entity” performing some functions; and from the general lack of clarity in the recitation itself. Moreover, claims 11-20 do not actually claim any computing entities performing any functions; there is no ‘performed computing’. The 112(b) rejection has not been addressed and is maintained.
Applicant argues “though the claims do not explicitly recite "a prediction," but rather the indeed, recited step of "the predicting;" such "predicting" is in reference to the recited "predict fraud occurrence," when read as an orderly combination (i.e., predict fraud occurrence as part of the determining); thus, clarity is respectfully submitted to be present.” Applicant has admitted there was no “predicting” performed, and instead references iterations of the word “predict” recited in the claims. Again, the limitation recites “in response to predicting, providing a first determination…”; there is no recited “predicting” in response to which the limitation is capable of being performed, given that such a step does not exist in the claims. The claims remain rejected for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Applicant argues “The claims recite, in the preamble, a fraud attack "on an entity;" and the only other recitations for the entity are with respect to explicit determination with regard to feedback and notification thereto. Applicant thus submits that the recitations are properly accurately ascribed.” The rejection stated “The claims provide for the “machine learning model”, the actor performing the function of “providing a first determination of whether the one or more outliers for the aggregations indicate a fraud attack” and “the entity” all of which perform the same determination of a fraud attack. It is unclear whether these are separate entities or if Applicant has provided three different examples of the same character that performs a determination/prediction and provision of a determination of a fraud attack. The claims are unclear and indefinite. Dependent claims 12-20 and 22-30 are also rejected.” The rejection has not been addressed as to the lack of clarity about the multiple possible entities or actors performing the claimed steps in addition to the ‘machine learning model’. The rejection is maintained.
Applicant argues “The claim, with its recited computing system comprising processors, memories, and instructions, entails that system performing a process, as recited, encompassing all that which follows, including a machine learning model, as is adequately recited.” This is incorrect: Applicant’s first mention of a learning model recites “converting the outliers into input for a machine learning model; determining, by the machine learning model, whether the one or more outliers for the aggregations indicate a fraud attack,….” Nowhere in that limitation, or elsewhere in the claims, is there an adequate recitation that the ‘machine learning model’ is part of the computing system. The rejection is maintained.
103
Misra discloses in response to predicting, providing a first determination of whether the one or more outliers for the aggregations indicate a fraud attack with respect to the one or more predetermined time periods, and in response to the first determination being positive to indicate the fraud attack with respect to the one or more predetermined time periods, blocking the one or more real-time transactions (¶ 11, 49-68, 76-80);
Misra - the catastrophic event detector 330 continuously receives real-time catastrophic event data 360 from catastrophic event data extractor 320 and real-time payment transaction data 390 from payment transaction data provider 362 and makes a prediction about whether a catastrophic event is occurring or going to occur… In some embodiments, the catastrophic event threshold 371 is a threshold used to determine whether to decline or approve a payment transaction based on the catastrophic event score 370… if riskscore>risc_score_threshold and country_code!=issuer_country, then decline transaction… In some embodiments of the computer-implemented method, the start date of the catastrophic event and the location of the catastrophic event are used in combination with the catastrophic event score to determine, with at least one processor, whether to decline or approve the financial transaction. (¶ 55, 56, 58, 59, 78)
Claim Interpretation
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: claims 11 and 21 recite “determining, by the machine learning model…” and “receiving the input at the machine learning model,…”
According to the disclosure (¶31, 37, 38), “A “machine learning model” or “model” as used herein, refers to a construct that is trained using training data to make predictions or provide probabilities for new data items, whether or not the new data items were included in the training data…FAV 300 can further include a machine learning (ML) suite 320 providing a compiler and several ML models assigned dedicated functions.” The claims do not recite sufficient structure for the claimed functions.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Subject Matter Eligibility Standard
When considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (101 Analysis: Step 1). Even if the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception, i.e., a law of nature, natural phenomenon, or abstract idea (101 Analysis: Step 2a (Prong 1)), and if so, whether there are any additional elements recited in the claim beyond the judicial exception(s), and whether those additional elements integrate the exception into a practical application of the exception (101 Analysis: Step 2a (Prong 2)). If the additional elements do not integrate the exception into a practical application, the claim still requires an evaluation of whether it recites additional elements that amount to an inventive concept (aka “significantly more”) than the recited judicial exception. If the claim as a whole amounts to significantly more than the exception itself (there is an inventive concept in the claim), the claim is eligible. If the claim as a whole does not amount to significantly more (there is no inventive concept in the claim), the claim is ineligible (101 Analysis: Step 2b).
The 2019 PEG explains that the abstract idea exception includes the following groupings of subject matter: (a) mathematical concepts; (b) certain methods of organizing human activity; and (c) mental processes.
Analysis
In the instant case, claim 11 is directed to a method, and claim 21 is directed to an article of manufacture.
Step 2a.1– Identifying an Abstract Idea
The claims recite the steps of “receiving… aggregating… determining… converting… determining… receiving the input… providing… cataloguing… blocking… determining… and notifying the entity….” The recited limitations fall within the mental processes grouping, specifically, evaluating received information and inputting information to an ML tool. Accordingly, the claims recite an abstract idea.
See MPEP 2106.
Step 2a.2 – Identifying a Practical Application
The claim does recite a potential additional element, but that element does not integrate the judicial exception into a practical application.
The claims recite “determining, by the machine learning model….” According to the disclosure (pp. 11-12), “Then, the output from the model, i.e., predicted fraudulent occurrence(s) from the model, can be compared to actual feedback for the transaction(s) and, based on the comparison, the model can be modified, such as by changing weights between nodes of the neural network or parameters of the functions used at each node in the neural network (e.g., applying a loss function). After applying each of the pairings of the inputs (prior transaction data) and the desired outputs (fraud probability and/or bases for occurrences of fraud) in the training data and modifying the model in this manner, the model is trained to evaluate new instances of transaction data in order to determine fraud bases for transaction data.” The step of determining falls under the abstract idea, as part of the evaluation process. Based on the recited claims, the information is inputted to the machine learning model and then a result is obtained. This is not an additional element beyond the abstract idea but another recitation of it.
Accordingly, even in combination, this element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Mere instructions to apply the exception using generic computer components and limitations to a particular field of use or technological environment do not amount to practical applications. The claim is directed to an abstract idea.
Step 2b
The claim limitations reciting “receiving… aggregating… determining… converting… determining… receiving the input… providing… cataloguing… determining… and notifying the entity” are not additional elements, and they amount to no more than mere instructions to apply the exception using a generic computer component. For the same reason, these elements are not sufficient to provide an inventive concept. This is also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim, and thus the claim is not eligible.
Viewed as a whole, the instructions/method claims recite the concept of a mental process, specifically, evaluating. The claims do not currently recite any additional elements, or combination of additional elements, that amount to significantly more than the judicial exception.
Claims 12-19 and 22-29 provide descriptive language surrounding the abstract idea, for example, details of the information included in the transaction data. As such, these elements do not provide the significantly more to the underlying abstract idea necessary to render the invention patentable.
Claims 20 and 30 describe another evaluation of data. As such, these elements do not provide the significantly more to the underlying abstract idea necessary to render the invention patentable.
The claims do not, for example, purport to improve the functioning of the computer itself. Nor do they effect an improvement in any other technology or technical field. Therefore, based on case law precedent, the claims are claiming subject matter similar to concepts already identified by the courts as dealing with abstract ideas. See Alice Corp. Pty. Ltd., 573 U.S. 208 (citing Bilski v. Kappos, 561 U.S. 593, 611 (2010)).
The claims at issue amount to nothing significantly more than an instruction to apply the abstract idea using some unspecified, generic computer. See Alice Corp. Pty. Ltd., 573 U.S. 208. Mere instructions to apply the exception using a generic computer component and limitations to a particular field of use or technological environment cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. The use of a computer or processor to merely automate and/or implement the abstract idea cannot provide significantly more than the abstract idea itself (MPEP 2106.05(I)(A)(f) & (h)). Therefore, the claim is not patent eligible.
Conclusion
The claim as a whole does not amount to significantly more than the abstract idea itself. This is because the claim does not effect an improvement to another technology or technical field; the claim does not amount to an improvement to the functioning of a computer system itself; and the claim does not move beyond a general link of the use of an abstract idea to a particular technological environment.
Accordingly, the Examiner concludes that there are no meaningful limitations in the claim that transform the judicial exception into a patent eligible application such that the claim amounts to significantly more than the judicial exception itself.
Dependent claims do not resolve the deficiency of independent claims and accordingly stand rejected under 35 USC 101 based on the same rationale.
Dependent claims 12-20 and 22-30 are also rejected.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11-30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 11 and 21 recite “determining, by the machine learning model…” and “receiving the input at the machine learning model,…”. This invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
According to the disclosure (¶31, 37, 38), “A “machine learning model” or “model” as used herein, refers to a construct that is trained using training data to make predictions or provide probabilities for new data items, whether or not the new data items were included in the training data…FAV 300 can further include a machine learning (ML) suite 320 providing a compiler and several ML models assigned dedicated functions.” The claims do not recite sufficient structure for the claimed functions.
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Dependent claims 12-20 and 22-30 are also rejected.
Claim 11 recites “A method for identifying fraud attack on an entity and subsequently mitigating effect of the fraud attack, the method comprising:… determining, by the machine learning model… receiving the input at the machine learning model,….” The claim is unclear and indefinite: it is unclear whether some of the steps are performed by a computing entity or a human entity, or whether the machine learning software is performing the claimed steps. The claim language is unclear and indefinite as to what entities perform a majority of the claimed steps. Dependent claims 12-20 and 22-30 are also rejected.
Claims 11 and 21 recite “wherein the machine learning model is trained to predict fraud attack occurrence for the one or more outliers for the aggregations,… in response to predicting, providing a first determination”. The claim is unclear and indefinite. Given that a prediction does not occur in the claims, it is unclear what prediction the providing step is in reference to. The claims are unclear and indefinite. Dependent claims 12-20 and 22-30 are also rejected.
Claims 11 and 21 recite “wherein the machine learning model is trained to predict fraud attack occurrence,…in response to predicting, providing a first determination of whether the one or more outliers for the aggregations indicate a fraud attack with respect to the one or more predetermined time periods,… based on feedback received from the entity and corresponding to a determination, by the entity, of a presence or an absence of fraud”. The claims provide for the “machine learning model”, the actor performing the function of “providing a first determination of whether the one or more outliers for the aggregations indicate a fraud attack” and “the entity” all of which perform the same determination of a fraud attack. It is unclear whether these are separate entities or if Applicant has provided three different examples of the same character that performs a determination/prediction and provision of a determination of a fraud attack. The claims are unclear and indefinite. Dependent claims 12-20 and 22-30 are also rejected.
Claim 21 recites “A computer system…one or more processors…one or more memories… determining, by the machine learning model… receiving the input at the machine learning model”. The claim is unclear and indefinite. The recited “machine learning model” is not recited as part of the claimed computer system, but the functions of the “machine learning model” are claimed in the computer system. The claims are unclear and indefinite as to whether the “machine learning model” is part of the computer system. Dependent claims 22-30 are also rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-30 are rejected under 35 U.S.C. 103 as being unpatentable over Misra et al. (US 2023/0104208) (“Misra”), and further in view of Desai et al. (US 2023/0154222) (“Desai”).
Regarding claims 11 and 21, Misra discloses receiving transaction characteristics corresponding and limited to transaction data of one or more real-time transactions (Abstract; ¶ 12, 45, 50-78);
Misra - In some embodiments, the real-time catastrophic event data 360 and the real-time payment transaction data 390 are provided to catastrophic event detector 330 from catastrophic event data extractor 320 and payment transaction data provider 362, respectively…the catastrophic event detection machine learning model 331 is used as a prediction model to predict whether a catastrophic event is occurring or going to occur based on real-time catastrophic event data 360 and real-time payment transaction data 390 …(e.g., whether a catastrophic event is occurring or going to occur in the future)… (Abstract; ¶ 51, 52, 54)
aggregating the transaction characteristics according to one or more predetermined time periods into respective aggregations limited to correspond to only the transaction characteristics (¶ 49-78);
Misra - in some embodiments, in addition to being configured to extract historical catastrophic event data from social media resources 361 and news resources 362, and historical payment transaction data from payment transaction data provide 362, catastrophic event data extractor 320 and payment transaction data service provider 362 are configured to continuously extract and ingest real-time catastrophic event data from social media resources 361 and news resources 362 , and real-time payment transaction data from payment transaction data provide 362. (¶ 55)
determining one or more outliers for the aggregations (¶ 46, 48-50, 54);
Misra - , in addition to being configured to extract historical catastrophic event data from social media resources 361 and news resources 362, and historical payment transaction data from payment transaction data provide 362, catastrophic event data extractor 320 and payment transaction data service provider 362 are configured to continuously extract and ingest real-time catastrophic event data from social media resources 361 and news resources 362 , and real-time payment transaction data from payment transaction data provide 362. In some embodiments, the real-time catastrophic event data and the real-time payment transaction data are provided to catastrophic event detector 330 as catastrophic event data 360 and payment transaction data 390, respectively…. In addition, the catastrophic event score 370 is used in catastrophic-event-based fraud rule 392 to determine whether a payment transaction is fraudulent. In some embodiments, a catastrophic event is, for example, an event or force of nature that has catastrophic consequences, such as, an earthquake, a flood, a terrorist attack, a forest fire, a hurricane, a tornado, a tsunami, a volcanic eruption, a severe snowstorm, or some other catastrophic natural or man-made event. (¶ 46, 54)
converting the outliers into input for a machine learning model (¶ 48-78)
Misra - In some non-limiting embodiments, at step 410, catastrophic event data extractor 320 extracts catastrophic event data 115, which may be, for example, real-time catastrophic event data for prediction purposes …In some non-limiting embodiments, at step 415, catastrophic event data extractor 320 translates catastrophic event data 115 to machine learning feature form (e.g., catastrophic event data 360) (¶ 72,73)
determining, by the machine learning model, whether the one or more outliers for the aggregations indicate a fraud attack, by: receiving the input at the machine learning model, wherein the machine learning model is trained to predict fraud attack occurrence for the one or more outliers for the aggregations, based on training data comprising (a) prior transaction data defining training transaction characteristic aggregations paired with (b) occurrence of fraud attack due to one or more outliers being identified for the training transaction characteristic aggregations during one or more predetermined training time periods, and (Abstract; ¶ 12, 45, 50-78);
Claim Interpretation – “wherein the machine learning model is trained to predict fraud attack occurrence” recites intended use and therefore is not given patentable weight. See MPEP 2114.
Misra - after extracting the historical catastrophic event data 311 from social media resources 361 and the historical catastrophic event data 312 from news resource 362, catastrophic event detector 330 translates the historical catastrophic event data into machine learning feature form as catastrophic event data 360. For example, in some embodiments, catastrophic event data extractor 320 translates and categorizes the catastrophic event data 360 in such a manner that enables catastrophic event detector 330 to generate the catastrophic event score 370. In some embodiments, for example, catastrophic event data extractor 320 extracts and translates the catastrophic event data such that verbiage indicative of a catastrophic event, verbiage indicative of a start time of a catastrophic event, verbiage indicative of an end time of a catastrophic event, verbiage indicative of a geographic location of a catastrophic event, and verbiage indicative of a severity of the catastrophic event are included in the catastrophic event data 360…the catastrophic event detection machine learning model 331 is trained to generate the catastrophic event score 370, the start date of the catastrophic event, the end date of the catastrophic event,… the catastrophic event detection machine learning model 331 is used as a prediction model to predict whether a catastrophic event is occurring or going to occur based on real-time catastrophic event data 360 and real-time payment transaction data 390. (¶ 49, 53, 54)
in response to predicting, providing a first determination of whether the one or more outliers for the aggregations indicate a fraud attack with respect to the one or more predetermined time periods, and in response to the first determination being positive to indicate the fraud attack with respect to the one or more predetermined time periods, blocking the one or more real-time transactions (¶ 11, 49-73, 76-80);
Misra - the catastrophic event detector 330 continuously receives real-time catastrophic event data 360 from catastrophic event data extractor 320 and real-time payment transaction data 390 from payment transaction data provider 362 and makes a prediction about whether a catastrophic event is occurring or going to occur… In some embodiments, the catastrophic event threshold 371 is a threshold used to determine whether to decline or approve a payment transaction based on the catastrophic event score 370… if riskscore>risc_score_threshold and country_code!=issuer_country, then decline transaction… In some embodiments of the computer-implemented method, the start date of the catastrophic event and the location of the catastrophic event are used in combination with the catastrophic event score to determine, with at least one processor, whether to decline or approve the financial transaction. (¶ 55, 56, 58, 59, 78)
the training data of the machine learning model being modified, for a second determination of whether the one or more outliers for the aggregations indicate a fraud attack or another fraud attack with respect to the one or more predetermined time periods, based on feedback received from the entity and corresponding to a determination, by the entity, of a presence or an absence of fraud for a respective transaction corresponding to the first determination (¶ 46-57, 68, 73-75)
Misra - after the catastrophic event detection machine learning model 331 is trained to generate the catastrophic event score 370, the start date of the catastrophic event, the end date of the catastrophic event, the geographic location of the catastrophic event, and the severity of the catastrophic event, the catastrophic event detection machine learning model 331 is used as a prediction model to predict whether a catastrophic event is occurring or going to occur based on real-time catastrophic event data 360 and real-time payment transaction data… in order for the catastrophic event detector 330 to generate the catastrophic event score 370, the catastrophic event detection machine learning model 331 is trained by catastrophic-event-aware fraud detection system 300 to recognize catastrophic events by assessing historical catastrophic event data. In some embodiments, in order to train the catastrophic event detection machine learning model 331, catastrophic event data extractor 320 extracts historical catastrophic event data 311 from social media resources 361 and/or historical catastrophic event data 312 from news resources 362. In some embodiments, the historical catastrophic event data 311 and the historical catastrophic event data 312 include historical data related to catastrophic events that have occurred previously and that have been extracted from at least one of a social media resource 361 (such as, for example, social media applications) or news resource 362… in some embodiments, payment transaction data provider 362 extracts catastrophic event historical payment transaction data 313 from archives of historical customer payment transactions collected by a transaction service provider, such as, for example, Visa® or some other transaction service provider. 
In some embodiments, payment transaction data provider 362 translates historical payment transaction data 313 into machine learning feature form as payment transaction data 390 for use by catastrophic event detector 330… catastrophic event detector 330 receives the catastrophic event data 360 and payment transaction data 390 and uses a machine learning training application to train a machine learning algorithm to output the catastrophic event detection machine learning model 331. In some embodiments, the catastrophic event detection machine learning model 331 output by the machine learning training application is trained to generate the catastrophic event score 370… In addition to training the catastrophic event detection machine learning model 331 to generate the catastrophic event score 370, the catastrophic event detection machine learning model 331 is trained to generate a start date of the catastrophic event, an end date of the catastrophic event, a geographic location of the catastrophic event, and a severity of the catastrophic event… In some embodiments, the real-time catastrophic event data 360 and the real-time payment transaction data 390 are provided to catastrophic event detector 330 from catastrophic event data extractor 320 and payment transaction data provider 362, respectively… That is, in some embodiments, in addition to being configured to extract historical catastrophic event data from social media resources 361 and news resources 362, and historical payment transaction data from payment transaction data provider 362, catastrophic event data extractor 320 and payment transaction data service provider 362 are configured to continuously extract and ingest real-time catastrophic event data from social media resources 361 and news resources 362, and real-time payment transaction data from payment transaction data provider 362. (¶ 48, 51-54)
for any indicated fraud attack, cataloging one or more fraud patterns that respectively correspond to one or more of the aggregations (¶ 55-62, 68);
Misra - In some embodiments, the catastrophic-event-related-merchant category code 373 is a merchant category code that has been designated by catastrophic-event-aware fraud detection system 300 as essential to the catastrophic event. (¶ 56)
based on the cataloging, determining one or more rules being satisfied by the one or more fraud patterns for the corresponding one or more aggregations; and (¶ 55-62, 69, 75-79)
Misra - catastrophic event analyzer 340 receives the catastrophic event score 370, the start date of the catastrophic event, the end date of the catastrophic event, the geographical location of the catastrophic event, and the severity of the catastrophic event from catastrophic event detector 330 and determines whether to adjust a fraud rule 391 to a catastrophic-event-based fraud rule 392. In some embodiments, for example, when the catastrophic event score 370 is less than or equal to a fraud rule adjustment threshold 389, the catastrophic event analyzer 340 adjusts the fraud rule 391 to include catastrophic event parameters, … catastrophic event analyzer 340 adjusts fraud rule 391 by adding the catastrophic event parameters (e.g., the catastrophic event threshold 371, the catastrophic-event-related-merchant category code 373 (CMCC 373), and the catastrophic event location 372) to the fraud rule 391 to generate catastrophic-event-based fraud rule 392. (¶ 56, 62)
Misra does not disclose notifying the entity of any information with respect to an indicated fraud attack.
Desai teaches notifying the entity of any information with respect to an indicated fraud attack (¶ 21, 27, 33-37, 39).
Desai - At 506, the documents are further processed to identify duplicate documents to pre-empt duplicate payments wherever possible. In an example, the error and fraud probabilities can be used at step 506 in determining the duplicates. The output of step 506 is the validation worklist 130 wherein the documents, i.e., the invoices 152, 154, etc., are arranged in descending order of invalidity, i.e., descending order of the error/fraud and duplicate probabilities… The index of invoices is queried at 706 to obtain recommendations for similar invoices received in the document package 150 and/or prior invoices stored in the historical data sources 172. The similarity graph is generated at 708 from the recommendations output by the query executed against the index including the invoices… Thus, the AI-based document processing and validation system 100 affords an improvement against systems that need to process each document/invoice for error and/or fraud. (¶ 34, 37, 39)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Misra (¶ 24), which teaches “provided are improved systems, devices, products, apparatus, and/or methods for fraud detection,” and Desai (¶ 14), which teaches “when a document package including documents such as invoices is received, the invoices therein are processed to detect errors, fraud, and duplicates,” in order to minimize transaction processing errors (¶ 32).
Regarding claims 12 and 22, Misra discloses wherein: the transaction characteristics comprise one or more of (a) transaction data, (b) determined fraud probability for a transaction, (c) a reason code or reason codes corresponding to a determined fraud probability for a transaction, or (d) any combination thereof (¶ 47-56, 68).
Regarding claims 13 and 23, Misra discloses wherein: if the transaction characteristics comprise transaction request data, the transaction request data comprises one or more of (e) personally identifiable information, (f) email information, (g) phone or other device information, or (h) any combination thereof (¶ 13, 19, 30, 51, 71).
Claim Interpretation – “if the transaction characteristics comprise…” recites conditional and optional language, and therefore is not given patentable weight. See MPEP 2103(I)(C).
Regarding claims 14 and 24, Misra discloses wherein: the one or more predetermined time periods comprise respective inspection windows of time over which the transaction characteristics are received (¶ 49-56, 75).
Regarding claims 15 and 25, Misra discloses wherein: the aggregations comprise one or more of (i) skew, (j) volume, (k) occurrence of email data, (l) occurrence of reason codes, (m) occurrence of personally identifiable information, (n) occurrence of clustered data concatenated from either email, internet protocol, device or phone information, or (o) any combination thereof (¶ 49-56).
Regarding claims 16 and 26, Misra discloses wherein: an outlier comprises any aggregation exceeding a predetermined outlier threshold defined for a predetermined time-based inspection window (¶ 46, 51, 56, 75).
Regarding claims 17 and 27, Misra discloses wherein: a fraud attack comprises the occurrence of the outlier over the predetermined time-based inspection window (¶ 56, 75-79).
Regarding claims 18 and 28, Desai discloses wherein: the cataloging one or more fraud patterns for the corresponding one or more aggregations is in accordance with an isolation forest regime (¶ 15, 26, 36).
Regarding claims 19 and 29, Misra discloses wherein: the information with respect to an indicated fraud attack comprises one or more of (p) for any disapproved transaction, an indication of an occurrence of the indicated fraud attack, (q) a change in rules employed by the machine learning model to indicate a fraud attack, (r) an increase in scores corresponding to aggregations and employed by the machine learning model to indicate a fraud attack, (s) rescoring for the transaction characteristics comprising transaction data and determined fraud probability, (t) a request to resubmit the transaction characteristics comprising transaction data, or (u) any combination thereof (¶ 56-64, 75-79).
Regarding claims 20 and 30, Misra discloses further comprising: in response to an indication of a fraud attack, adjusting, based on a respective one or more of the aggregations causing the indication, the determining the one or more outliers for the aggregations and the cataloging one or more fraud patterns for the corresponding one or more aggregations (¶ 46, 53-69, 75-79).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Glokas (US 2023/0031123) teaches machine-learning-based fraud detection.
Hayman (US 2023/0154222) teaches a method and system for real-time automated identification of fraudulent invoices using machine learning models.
Warrick (US 2021/0142126) teaches subject matter potentially relevant under 35 U.S.C. 102.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ILSE I IMMANUEL whose telephone number is (469) 295-9094. The examiner can normally be reached Monday-Friday, 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEHA H PATEL can be reached at (571) 270-1492. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ILSE I IMMANUEL/
Primary Examiner, Art Unit 3699