Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgments
The application and preliminary amendment filed on 12/11/2024 and the IDSs filed on 06/03/2025, 11/05/2025, and 01/23/2026 are acknowledged.
Status of Claims
Claims 22-41 are pending.
In the preliminary amendment filed on 12/11/2024, claims 1-21 were cancelled and claims 22-41 were added.
Claims 22-41 are rejected.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) and/or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Specifically, Applicant claims priority under 35 U.S.C. 120 to U.S. application no. 18/189,952 filed on 03/24/2023, and under 35 U.S.C. 119(e) to provisional application no. 63/404,868 filed on 09/08/2022.
Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 and 35 U.S.C. 119(e) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or earlier-filed nonprovisional application or provisional application for which benefit is claimed). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed applications, U.S. Patent Application No. 18/189,952 and provisional application no. 63/404,868, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application.
Specifically, independent claims 22, 32 and 41 recite:
generating, by the machine learning model, an action indicator based on the generated risk indicator and the transmitted alert;
Note the language "transmitted alert" refers to the preceding recited operation, which reads:
transmitting, by the at least one processor to the transaction channel, an alert based on the generated risk indicator;
Note further the language "transaction channel" refers to the first recited operation, which reads:
receive, from a transaction channel, a processed action of a user
Accordingly, the language underlined above refers to the alert that was transmitted, based on the generated risk indicator, to the transaction channel from which the processed action of a user was received.
Adequate support or enablement for the language underlined above is not found in the prior-filed applications.
Note that, inasmuch as each of the dependent claims includes the subject matter of one of the independent claims listed above, the dependent claims, and hence all of the claims, lack the support requisite for the benefit of priority asserted by Applicant.
As for U.S. Patent Application No. 18/189,952, as best understood, the portions of the disclosure most closely related to the language underlined above are Figs. 10, 13 and 16 and their associated descriptions in the specification, specifically, 0056-0058 (Fig. 10), 0069 (Fig. 13), and 0082 (Fig. 16).
As for Fig. 10, per 0056, an alert is generated and transmitted to a transaction channel. Further, generated alerts may be queued in an order of probability/risk indicators. Per 0057-0058, a risk result refers to an outcome based on the risk indicator, such as allowing the processed action, flagging it for analyst review, or auto-holding/stopping it.
As for Fig. 13, 0069 builds on the above-indicated content of 0056-0058 to describe that a "hold can trigger a [further action] including a trigger variable" and a "[trigger variable] includes data … representing [returned transactions]" and "may provide information relating to unauthorized transactions."
As for Fig. 16, this figure is a flow chart containing content similar but not identical to Fig. 10. As per Fig. 16 and 0082, the flow chart includes step 1604 of generating an alert, step 1606 of queueing an ordered list of generated alerts, step 1608 of retrieving a processed action based on the order, and step 1610 of generating an indicator to cause stopping, reviewing, or allowing the transaction.
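For reference only, the alert-queueing flow described above (steps 1604-1610 of Fig. 16) can be sketched as follows. This sketch is purely illustrative, is not part of the record, and all function names, data shapes, and threshold values are hypothetical assumptions, not taken from the application.

```python
import heapq

def action_for(risk: float) -> str:
    """Map a risk indicator to stop/review/allow (thresholds are illustrative)."""
    if risk >= 0.9:
        return "stop"
    if risk >= 0.5:
        return "review"
    return "allow"

def process(actions):
    # Steps 1604/1606 (illustrative): generate alerts and queue them in
    # order of risk indicator, highest risk first (negated for a min-heap).
    queue = [(-risk, action_id) for action_id, risk in actions]
    heapq.heapify(queue)
    results = []
    # Steps 1608/1610 (illustrative): retrieve each processed action based
    # on the queue order and generate a stop/review/allow indicator for it.
    while queue:
        neg_risk, action_id = heapq.heappop(queue)
        results.append((action_id, action_for(-neg_risk)))
    return results

print(process([("a1", 0.2), ("a2", 0.95), ("a3", 0.6)]))
# Highest-risk alerts are retrieved and decided first.
```

Note that nothing in this sketch conditions the generated indicator on a previously transmitted alert, which mirrors the gap identified in the priority analysis above.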
As per the most closely related portions of the disclosure as described above, no subject matter is seen in U.S. Patent Application No. 18/189,952 suggesting that an action indicator is generated based on the transmitted alert.
As for provisional application no. 63/404,868, as best understood, the portions of the disclosure most closely related to the language underlined above are pages 0 and 8 of the figures.
Page 0 is reproduced below.
[media_image1.png, greyscale, 625 x 809: reproduction of page 0 of the provisional application figures]
As per page 0 shown above, "alerts are generated … and queued," "[t]he queues are either auto-decisioned … or sent for manual analyst review," and "[a]ctions involved [sic] putting account on [h]old."
Further, page 0 shows a workflow, including inter alia "score" followed by "alert" followed by "hold."
As per the written description above the workflow, the alerts are generated and queued, and queues may be sent for manual analyst review, but there is no suggestion that alerts are transmitted to the transaction channel from which the processed action was received.
Accordingly, even assuming for the sake of argument that "hold" in the workflow represents generation of an action indicator, the workflow does not show or suggest that the "hold" is generated based on an alert that has been transmitted to the transaction channel from which the processed action was received.
Page 8 of the figures is reproduced below.
[media_image2.png, greyscale, 624 x 811: reproduction of page 8 of the provisional application figures]
As per page 8 shown above, it contains content similar to that of page 0, but does not contain any content beyond that of page 0 that would show that the "hold" is generated based on an alert that has been transmitted to the transaction channel from which the processed action was received.
As explained above, no disclosure is seen in provisional application no. 63/404,868 suggesting that an action indicator is generated based on an alert that has been transmitted to the transaction channel from which the processed action was received.
Applicant states that this application is a continuation application of the prior-filed application. A continuation application cannot include new matter. Applicant is required to delete the benefit claim or change the relationship (continuation application) to continuation-in-part because this application contains subject matter not disclosed in the prior-filed application, as per independent claims 22, 32 and 41 (and accordingly in all the claims), as explained above.
Drawings/Specification
The specification is objected to and the drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because:
- In respect of Fig. 17A, specification paragraph 0083 refers to "graph 1703A" and "graph 1700," thus using two different reference numerals to refer to the same singular item. Likewise, Fig. 17A itself appears to use both reference numerals 1700 and 1703A for the graph.
- Specification paragraph 0085 refers to "y-axis 1703A," whereas specification paragraph 0083 refers to "y-axis 1702A," again using two different reference numerals to refer to the same singular item.
- Specification paragraph 0085 refers to "y-axis 1703A," whereas specification paragraph 0083 refers to "graph 1703A," thus using the same reference numeral to refer to two different items.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) and amendment to the specification, as applicable, are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 22-41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 22-41 are directed to a computer-implemented method, computing system, or non-transitory computer-readable medium, each of which is a statutory category of invention. (Step 1: YES)
Claims 22, 32 and 41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a computer-implemented method, computing system, and non-transitory computer-readable medium for determining a likelihood of fraud and preventing fraud.
For claims 22, 32 and 41 (claim 32 being deemed representative), the limitations (indicated below in bold) of:
at least one processor configured to:
receive, from a transaction channel, a processed action of a user;
generate, by a machine learning model, a risk indicator associated with unauthorized activity at the transaction channel for the processed action of the user;
transmit, by the at least one processor to the transaction channel, an alert based on the generated risk indicator;
generate, by the machine learning model, an action indicator based on the generated risk indicator and the transmitted alert; and
execute an outcome based on the generated action indicator.
as drafted, constitute a process that, under the broadest reasonable interpretation, covers "certain methods of organizing human activity," specifically, "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use. The Examiner notes that "fundamental economic practices" or "fundamental economic principles" describe concepts relating to the economy and commerce, including hedging, insurance, and mitigating risks, and "commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. MPEP 2106.04(a)(2)II.A.,B. If a claim limitation, under its broadest reasonable interpretation, covers "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claims 22, 32 and 41 recite an abstract idea. (Step 2A - Prong 1: YES. The claims recite an abstract idea.)
This judicial exception is not integrated into a practical application. Claims 22, 32 and 41 recite the additional elements of at least one processor, a transaction channel, a machine learning model (the foregoing recited in claims 22, 32 and 41), and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by the at least one processor of the computing system, cause the computing system to: [perform operations] (the foregoing recited in claim 41), that implement the abstract idea. These additional elements are not described by the applicant and they are recited at a high level of generality (i.e., one or more generic computer elements performing generic computer functions, or generally linking the use of a judicial exception to a particular technological environment or field of use), such that they amount to no more than mere instructions to apply the exception using generic computer elements (namely, at least one processor, a machine learning model, and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by the at least one processor of the computing system, cause the computing system to: [perform operations]), or such that they amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (namely, a transaction channel and a machine learning model). Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A - prong 2: NO. 
The additional elements do not integrate the abstract idea into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of at least one processor, a transaction channel, a machine learning model (the foregoing recited in claims 22, 32 and 41), and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by the at least one processor of the computing system, cause the computing system to: [perform operations] (the foregoing recited in claim 41), to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use. Mere instructions to apply an exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, claims 22, 32 and 41 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
Dependent claims 23-31 and 33-40 are similarly rejected because they further define/narrow the abstract idea of independent claims 22, 32 and 41 as discussed above, and/or do not integrate the abstract idea into a practical application or provide an inventive concept such as would render the claims eligible, whether each is considered individually or as an ordered combination.
As for further defining/narrowing the abstract idea:
Dependent claims 23 and 33 merely further describe wherein responsive to a determination that the risk indicator is below a first predetermined threshold, assigning a low risk indicator value; responsive to a determination that the risk indicator is above the first predetermined threshold and below a second predetermined threshold, assigning a medium risk indicator value; and responsive to a determination that the risk indicator is above the second predetermined threshold, assigning a high risk indicator value.
Dependent claims 24 and 34 merely further describe wherein responsive to an assignment of the low risk indicator value, allowing the processed action of the user.
Dependent claims 25 and 35 merely further describe wherein responsive to an assignment of the medium risk indicator value, flagging the processed action of the user.
Dependent claims 26 and 36 merely further describe wherein flagging the processed action of the user comprises reviewing the processed action.
Dependent claims 27 and 37 merely further describe wherein responsive to an assignment of the high risk indicator value, stopping the processed action of the user.
Dependent claims 28 and 38 merely further describe wherein stopping the processed action of the user comprises holding the processed action of the user for a predetermined time.
Dependent claim 29 merely further describes … to predict a likelihood of unauthorized activity for the processed action.
Dependent claims 30 and 39 merely further describe wherein the risk indicator is further generated based on a log-norm scaling of a user's profile against the user’s profile.
Dependent claims 31 and 40 merely further describe further comprising appending, to the processed action, customer characteristics and historical data associated with the processed action.
As for additional elements:
Dependent claim 29 recites "training the machine learning model." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claims 33-35, 37 and 40 recite "wherein the at least one processor is further configured to." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Dependent claims 23-28, 30, 31, 36, 38 and 39 do not recite any additional elements, and accordingly, for the reasons provided above with respect to the independent claims, are not patent eligible.
Therefore, dependent claims 23-31 and 33-40 are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 22, 32 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Selway et al. (U.S. Patent Application Publication No. 2013/0013491 A1), hereafter Selway.
Regarding Claims 22, 32 and 41
Selway teaches:
(claim 22) the method being performed by at least one processor (Fig. 6, 610) and comprising: (0040-0043, Fig. 6, 600, 610; computer system 600 instantiates duplicate transaction detection system 110 (see Fig. 1), which per 0020 (using processing system 130 of 110) identifies duplicative checks that are improper or fraudulent, e.g., by performing the processes of Figs. 3-5)
(claim 32) at least one processor configured to: (0040-0043, Fig. 6, 600, 610; computer system 600 instantiates duplicate transaction detection system 110 (see Fig. 1), which per 0020 (using processing system 130 of 110) identifies duplicative checks that are improper or fraudulent, e.g., by performing the processes of Figs. 3-5)
(claim 41) A non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity (0020) in a computing system including at least one processor, the set of instructions comprising: one or more instructions that, when executed by the at least one processor of the computing system, cause the computing system to: (0040-0043, Fig. 6, 600, 610; computer system 600 instantiates duplicate transaction detection system 110 (see Fig. 1), which per 0020 (using processing system 130 of 110) identifies duplicative checks that are improper or fraudulent, e.g., by performing the processes of Figs. 3-5)
receive, from a transaction channel (Fig. 1, any of 120a-d), a processed action of a user; (0012, 0018, 0023, Fig. 3, 310, 0026, Fig. 4, 410, 0039, Fig. 5, 522, 542, all teaching that system 110 receives transaction record from institution 120 (any of 120a-d))
generate, by a machine learning model (0031 neural model), a risk indicator (risk score) associated with unauthorized activity at the transaction channel for the processed action of the user; (0016, 0022, 0037, Fig. 4, 424)
transmitting, by the at least one processor to the transaction channel, an alert based on the generated risk indicator; (0020, 0022, 0024, Fig. 3, 316, 0037, Fig. 4, 428, 0039, Fig. 5, 544)
generating, by the machine learning model, an action indicator based on the generated risk indicator and the transmitted alert; and (0037, Fig. 4, 440 "It should be appreciated that in sending an alert, a risk score could also be sent to each affected institution, to permit the institution to better understand the risk and determine what kind of remedial action to take, step 440." -- "generating … an action indicator" is taught by "determine what kind of remedial action to take" (the resultant determination that is made is the action indicator that is generated); as per the language underlined above, the determining what kind of remedial action to take (generating … an action indicator) is based on the sent (transmitted) alert and the generated risk score (indicator); regarding by the machine learning model: it would be obvious to combine embodiments and use the neural models (taught in 0031 for risk assessment/prediction) for determining the remedial action (generating the action indicator) because Selway 0046 explicitly teaches such combinations of embodiments ("Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments may be described with (or without) certain features for ease of description and to illustrate exemplary features, the various components and/or features described herein with respect to a particular embodiment can be substituted, added, and/or subtracted to provide other embodiments, unless the context dictates otherwise.") and because the incorporation of neural models into such an operation would improve the operation by rendering its result more robust/accurate.)
executing an outcome based on the generated action indicator. (0037, Fig. 4, 440 "Remedial action could include various steps …, such as [1] freezing or suspending all activity on an account, …, [2] putting a hold on any future deposits into an account, or [3] notifying authorities …." -- the result of the specific remedial action (e.g., [1], [2], or [3]) performed is the outcome that is executed)
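The claimed sequence as mapped above (receive a processed action, generate a risk indicator, transmit an alert, generate an action indicator based on both the risk indicator and the transmitted alert, and execute an outcome) can be sketched as a single pipeline. This is a minimal illustration only: the stand-in "model," the dictionary shapes, and the 0.5 cutoff are the editor's assumptions and are not drawn from Selway or from the application.

```python
def risk_model(action: dict) -> float:
    # Hypothetical stand-in for the machine learning model's risk indicator;
    # not Selway's neural model.
    return 0.9 if action.get("amount", 0) > 1000 else 0.1

def handle(action: dict) -> str:
    risk = risk_model(action)                       # generate risk indicator
    alert = {"action": action["id"], "risk": risk}  # alert based on risk indicator
    # (the alert would be transmitted to the transaction channel here)
    # Action indicator generated based on BOTH the risk indicator and the
    # transmitted alert, per the claim language quoted above:
    action_indicator = "hold" if alert["risk"] >= 0.5 else "allow"
    return f"{action_indicator}:{action['id']}"     # execute outcome

print(handle({"id": "txn-1", "amount": 5000}))
```

The point of the sketch is structural: the action indicator is computed from the alert that was sent, which is the dependency the rejection maps onto Selway's step 440.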
Claims 23-28 and 33-38 are rejected under 35 U.S.C. 103 as being unpatentable over Selway et al. (U.S. Patent Application Publication No. 2013/0013491 A1), hereafter Selway, in view of Abifaker et al. (U.S. Patent Application Publication No. 2015/0278817 A1), hereafter Abifaker.
Regarding Claims 23 and 33
Selway teaches the limitations of base claims 22 and 32 as set forth above. Selway does not explicitly disclose but Abifaker teaches:
wherein the at least one processor is further configured to: (0077 processor for implementing subject matter taught by Abifaker, e.g., operations performed by fraud detector 130)
responsive to a determination that the risk indicator (overall risk score) is below a first predetermined threshold (Fig. 3, 48), assign a low risk indicator value ("legitimate," "pass"); (0067-0070 (see 0069), 0075, Fig. 2, 225, Fig. 3)
responsive to a determination that the risk indicator is above the first predetermined threshold (Fig. 3, 48) and below a second predetermined threshold (Fig. 3, 68), assign a medium risk indicator value ("human audit," "requiring manual review"); and (0067-0070 (see 0070), 0075, Fig. 2, 225, Fig. 3)
responsive to a determination that the risk indicator is above the second predetermined threshold (Fig. 3, 68), assign a high risk indicator value ("fail," "fraudulent"). (0067-0070 (see 0068), 0075, Fig. 2, 225, Fig. 3)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Selway's systems and methods for determining fraud risk and performing remedial actions by incorporating therein these teachings of Abifaker regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk, because doing so provides comprehensive, fine-tuned risk assessment/scoring and consequent responsive actions, so that transactions are more likely to be treated appropriately according to their actual risk level, leading to more effective and satisfactory outcomes (e.g., medium-risk transactions would be more likely to receive an intermediate level of responsive action, namely, a review, rather than being approved outright as if they were low risk or rejected outright as if they were high risk). (Note Selway teaches or suggests that remedial action should align with the risk score, but does not flesh out how this is implemented practically. Abifaker's framework provides implementation detail appropriate to implement this under-specified aspect of Selway's systems and methods in a way that improves them.)
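The three-tier threshold scheme recited in claims 23 and 33, as read onto Abifaker's thresholds of 48 and 68 (Fig. 3) above, reduces to a simple comparison. The sketch below is illustrative only; the boundary handling at exactly a threshold value is an assumption, since the claims recite only "above" and "below."

```python
def classify_risk(risk: float,
                  first_threshold: float = 48,
                  second_threshold: float = 68) -> str:
    """Assign a low/medium/high risk indicator value per the recited thresholds.

    The values 48 and 68 follow Abifaker Fig. 3 as cited above; treating a
    risk exactly at a threshold as the higher tier is an assumption."""
    if risk < first_threshold:
        return "low"       # below first threshold: allow (claims 24/34)
    if risk < second_threshold:
        return "medium"    # between thresholds: flag for review (claims 25/35)
    return "high"          # above second threshold: stop (claims 27/37)

print(classify_risk(30), classify_risk(55), classify_risk(90))
# low medium high
```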
Regarding Claims 24 and 34
Selway in view of Abifaker teaches the limitations of base claims 22 and 32 and intervening claims 23 and 33 as set forth above. Abifaker further teaches:
wherein responsive to an assignment of the low risk indicator value, the at least one processor is further configured to allow the processed action of the user. (0067-0070 (see 0069), 0075-0076, Fig. 2, 225, 250, Fig. 3, Fig. 5, 540)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Selway's systems and methods for determining fraud risk and performing remedial actions, as modified by Abifaker's teachings regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk, by incorporating therein these further teachings of Abifaker regarding approving low-risk transactions, because in any such system (e.g., for detecting and mitigating/preventing fraudulent transactions) it is appropriate and standard to approve low-risk transactions, which achieves what is considered a proper balance between risk and efficiency/productivity. (Note this is consistent with Selway, e.g., 0022, 0037, which teaches that remedial actions such as blocking transactions or the like are appropriate when there is a high likelihood of fraud.)
Regarding Claims 25 and 35
Selway in view of Abifaker teaches the limitations of base claims 22 and 32 and intervening claims 23 and 33 as set forth above. Abifaker further teaches:
wherein responsive to an assignment of the medium risk indicator value, the at least one processor is further configured to flag the processed action of the user. (0067-0070 (see 0070), 0075, Fig. 2, 225, 245, Fig. 3, "determine that manual review … is needed," "requir[e] manual review" teach "flag")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Selway's systems and methods for determining fraud risk and performing remedial actions, as modified by Abifaker's teachings regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk, by incorporating therein these further teachings of Abifaker regarding designating medium-risk transactions for manual review (human audit), because in any such system (e.g., for detecting and mitigating/preventing fraudulent transactions) it is appropriate and standard to perform manual/human review of medium-risk transactions, which achieves what is considered a proper balance between risk and efficiency/productivity. (Note this is consistent with Selway, e.g., 0022, 0037, which teaches that remedial actions such as blocking transactions or the like are appropriate when there is a high likelihood of fraud.)
Regarding Claims 26 and 36
Selway in view of Abifaker teaches the limitations of base claims 22 and 32 and intervening claims 23, 25, 33 and 35 as set forth above. Abifaker further teaches:
wherein the flag comprises a review of the processed action. (0070-0071, 0075, Fig. 2, 225, 245, Fig. 3, human audit)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Selway's systems and methods for determining fraud risk and performing remedial actions, as modified by Abifaker's teachings regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk and designating medium-risk transactions for manual review (human audit), by incorporating therein these further teachings of Abifaker regarding performing a manual review (human audit) for medium-risk transactions, because in any such system (e.g., for detecting and mitigating/preventing fraudulent transactions) it is appropriate and standard to perform manual/human review of medium-risk transactions, which achieves what is considered a proper balance between risk and efficiency/productivity. (Note this is consistent with Selway, e.g., 0022, 0037, which teaches that remedial actions such as blocking transactions or the like are appropriate when there is a high likelihood of fraud.)
Regarding Claims 27 and 37
Selway in view of Abifaker teaches the limitations of base claims 22 and 32 and intervening claims 23 and 33 as set forth above. Abifaker further teaches:
wherein responsive to an assignment of the high risk indicator value, the at least one processor is further configured to stop the processed action of the user. (0067-0070 (see 0068), 0075-0076, Fig. 2, 225, 230, Fig. 3, Fig. 5, 540)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Selway's systems and methods for determining fraud risk and performing remedial actions, as modified by Abifaker's teachings regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk, by incorporating therein these further teachings of Abifaker regarding denying high-risk transactions, because in any such system (e.g., for detecting and mitigating/preventing fraudulent transactions) it is appropriate and standard to deny high-risk transactions, and doing so achieves what is considered a proper balance between risk and efficiency/productivity. (Note this is consistent with Selway, e.g., 0022, 0037, which teaches that remedial actions such as blocking transactions or the like are appropriate when there is a high likelihood of fraud.)
Regarding Claims 28 and 38
Selway in view of Abifaker teaches the limitations of base claims 22 and 32 and intervening claims 23, 27, 33 and 37 as set forth above. Abifaker further teaches:
wherein the stop comprises a hold of the processed action of the user for a predetermined time. (0067-0070 (see 0068), 0075-0076, Fig. 2, 225, 230, Fig. 3, Fig. 5, 540; note that under the broadest reasonable interpretation, "hold of the processed action of the user for a predetermined time" is taught by "deny"/"reject" the transaction because, all other things being equal, denying/rejecting the transaction means permanently denying/rejecting it, i.e., denying/rejecting (or holding) it forever, in which case the time period for which the transaction is to be denied/rejected/held, namely, forever, is predetermined at the outset, i.e., at the time the denial/rejection is put into effect)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Selway's systems and methods for determining fraud risk and performing remedial actions, as modified by Abifaker's teachings regarding assigning a high, medium, or low risk classification to a transaction event based on the assessed level of risk, and denying high-risk transactions, by incorporating therein these further teachings of Abifaker regarding denying/rejecting high-risk transactions, because in any such system (e.g., for detecting and mitigating/preventing fraudulent transactions) it is appropriate and standard to deny/reject high-risk transactions, meaning permanent denial/rejection, and doing so achieves what is considered a proper balance between risk and efficiency/productivity. (Note this is consistent with Selway, e.g., 0022, 0037, which teaches that remedial actions such as blocking transactions (e.g., freezing, suspending, holding) or the like are appropriate when there is a high likelihood of fraud.)
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Selway et al. (U.S. Patent Application Publication No. 2013/0013491 A1), hereafter Selway, in view of Comeaux et al. (U.S. Patent No. 11,669,844 B1), hereafter Comeaux.
Regarding Claim 29
Selway teaches the limitations of base claim 22 as set forth above. Selway does not explicitly disclose but Comeaux teaches:
further comprising training the machine learning model to predict a likelihood of unauthorized activity for the processed action. (4:56-64 (read in light of 4:46-56), 6:59-67 (read in light of 6:56-59), 7:59-8:28 (read in light of 7:51-59, 8:29-32), 8:39-9:28)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Selway's systems and methods for determining fraud risk and performing remedial actions, by incorporating therein these teachings of Comeaux regarding training a machine learning model to determine the likelihood of fraud of a transaction, because Selway teaches using a machine learning model (neural model, 0031), and a machine learning model will not work or will not perform well if not trained. Training is a crucial part of employing a machine learning model, and therefore training will permit Selway's machine learning model to perform well, and accordingly will permit Selway to perform its intended function of assessing the risk of fraudulent transactions so as to allow/provide for appropriate remedial actions.
Claims 30 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Selway et al. (U.S. Patent Application Publication No. 2013/0013491 A1), hereafter Selway, in view of Pavlovic ("Log-normal Distribution - A simple explanation").
Regarding Claims 30 and 39
Selway teaches the limitations of base claims 22 and 32 as set forth above. Selway further teaches:
wherein the risk indicator (risk score) is further generated based on … a user's profile … the user's profile. (0030-0031, Table 1, 0037, Fig. 4, 424, risk score is generated based on assessment of risk factors, which include factors such as "History of account … e.g., prior incidents of fraud/suspicious activity," "Account aging (how long the account … has been in existence)," "Type of account," etc., which factors teach "user's profile" under the broadest reasonable interpretation)
Selway does not explicitly disclose but Pavlovic teaches:
… a log-norm scaling of … against …. (Note Applicant’s specification (0045) defines “log-norm scaling” thus: “Log-norm scaling may refer to applying a logarithmic transformation to values, which transforms the values onto a scale that approximates the normality”; Pavlovic, p. 3, teaches the same thing: “Let’s say your data [values] fits a log-normal distribution. If you then take the logarithm of [applying a logarithmic transformation to] all your data points [values], the newly transformed [logarithmically transformed] points [values] will now fit a normal distribution [a scale that approximates the normality].”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Selway's systems and methods for determining fraud risk and performing remedial actions, by incorporating therein these teachings of Pavlovic regarding log-norm scaling, because these teachings of Pavlovic are a known way to model various natural phenomena, see Pavlovic, p. 1, and they have particular advantages under certain circumstances applicable to Selway, e.g., where the data cannot be negative (e.g., data such as a probability of fraud / a risk score), where the data skews positive, with most values clustered near the low end and a long tail extending rightward to occasional high outliers (e.g., transaction data, which comprises mostly non-fraudulent transactions and a relatively small amount of fraudulent transactions), and where the data grows multiplicatively/cumulatively (e.g., a plurality of historical transaction data) -- by transforming such data with a logarithm, it can be normalized, allowing for easier analysis with powerful standard statistical techniques, such as calculating z-scores, performing linear regression, estimating parameters using Maximum Likelihood Estimation (MLE), etc.
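As an illustrative aside (hypothetical code, not drawn from the cited references), the log-norm scaling described by Pavlovic and Applicant's specification can be sketched as follows: values drawn from a skewed, non-negative, log-normal distribution (such as risk-score-like data) are transformed with a logarithm, and the transformed values approximate a normal distribution.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical skewed, non-negative data (e.g., risk-score-like values):
# a log-normal distribution clusters most values near the low end with a
# long right tail of occasional high outliers.
values = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]

# Log-norm scaling: apply a logarithmic transformation to each value.
log_values = [math.log(v) for v in values]

# The transformed values approximate a normal distribution whose mean and
# standard deviation track the underlying parameters (mu=0.0, sigma=1.0).
print(round(statistics.mean(log_values), 1))   # near 0.0
print(round(statistics.stdev(log_values), 1))  # near 1.0

# The raw values, by contrast, are right-skewed: the mean exceeds the median.
print(statistics.mean(values) > statistics.median(values))
```

Once normalized in this way, the data is amenable to the standard techniques noted above (z-scores, linear regression, MLE parameter estimation).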
Claims 31 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Selway et al. (U.S. Patent Application Publication No. 2013/0013491 A1), hereafter Selway, in view of Chisholm (U.S. Patent Application Publication No. 2014/0351137 A1).
Regarding Claims 31 and 40
Selway teaches the limitations of base claims 22 and 32 as set forth above. Selway does not explicitly disclose but Chisholm teaches:
wherein the at least one processor is further configured to append, to the processed action, customer characteristics and historical data associated with the processed action. (0101, claims 8 and 14; note, e.g., "account number" (claim 14) teaches "customer characteristics … associated with the processed action" (compare Applicant's specification 0043 teaching account number as example of customer characteristics); e.g., "historical transaction data" (0101, claim 8), "purchase data" (claim 14) teach "historical data associated with the processed action" (compare Applicant's specification 0043 teaching historical account information, e.g., transactions, as example of historical data); "enrich" teaches "append" (compare Applicant's specification 0043 teaching "enrich" may refer to "appending"))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Selway's systems and methods for determining fraud risk and performing remedial actions, by incorporating therein these teachings of Chisholm regarding enriching a transaction with historical data and cardholder data to facilitate a process of determining whether a transaction is fraudulent, based on the following reasoning:
Selway (see 0012, 0020, 0024, 0030-0031, Table 1, 0037, Fig. 4, 424) teaches using other data (risk factors) in assessing risk and generating a risk score, which other data (risk factors) include data comparable to Chisholm's historical and cardholder data, but Selway does not teach "appending" this other data (risk factors) to the transaction record that is being analyzed for risk of fraud. Thus, while Selway uses this other data (risk factors) in performing the risk assessment, Selway does not provide implementation detail as to how this other data (risk factors) is obtained so that it is available for use in performing the risk assessment. However, Chisholm's enrichment process provides implementation detail appropriate to implement this under-specified aspect of Selway's systems and methods, as Chisholm's enrichment constitutes a known way of having other supplementary data at hand together with the primary data under test, for use in analyzing the primary data, as Selway requires. In addition, the combination (incorporation of Chisholm's teaching of enrichment into Selway) would have predictable results, e.g., the enrichment can be incorporated into Selway in a mechanical-like manner without adversely affecting any other relevant aspects of Selway. Thus, combining Chisholm's teaching of enrichment with Selway provides implementation detail (namely, for having the risk factors at hand for use in the risk analysis) that permits Selway to actually perform its intended function of assessing risk of fraudulent transactions so as to allow/provide for appropriate remedial actions.
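As an illustrative aside (hypothetical code, not drawn from the references), the enrichment/appending concept discussed above can be sketched as follows: supplementary customer characteristics and historical data are appended to the primary transaction record so that all risk factors are at hand when the risk assessment runs. All field and function names here (enrich_transaction, account_number, etc.) are hypothetical, chosen only to illustrate the concept.

```python
def enrich_transaction(transaction: dict, profiles: dict, history: dict) -> dict:
    """Append customer characteristics and historical data to a transaction.

    The primary record is left intact; a new, enriched copy is returned so
    that downstream risk assessment has the supplementary data at hand.
    """
    account = transaction["account_number"]
    enriched = dict(transaction)  # copy the primary record
    enriched["customer_profile"] = profiles.get(account, {})
    enriched["historical_transactions"] = history.get(account, [])
    return enriched

# Usage: the enriched record carries customer characteristics and
# historical data alongside the primary transaction data.
txn = {"account_number": "1234", "amount": 250.00}
profiles = {"1234": {"account_age_days": 30, "prior_fraud_incidents": 1}}
history = {"1234": [{"amount": 20.00}, {"amount": 35.00}]}

enriched = enrich_transaction(txn, profiles, history)
print(sorted(enriched))
# ['account_number', 'amount', 'customer_profile', 'historical_transactions']
```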
Conclusion
The prior art made of record and not relied upon, as set forth in the accompanying Notice of References Cited (PTO-892), is considered pertinent to applicant's disclosure.
Comeaux (US-11669844-B1) teaches evaluating a transaction for fraud, including generating an alert probability score (fraud risk score), based on a wide variety of risk factors (e.g., behavioral profile including user personal data, financial data, and user social network data; historical user financial data), generating an alert, and taking action to prevent processing of a fraudulent transaction, including using and training a machine learning model.
Comeaux (US-10567402-B1) and Comeaux (US-11722502-B1) teach fraud detection/prevention similar to Comeaux (US-11669844-B1) but to greater depth in certain aspects.
Phatak (2022/0006899) and Anderson (US-12136096-B1) teach a fraud alert queue that prioritizes fraud alerts based on fraud importance.
Vaswani (2022/0377090) teaches fraud detection/prevention (including risk scores and alerts) similar to Comeaux (US-11669844-B1).
Karpovsky (2022/0191173) teaches determining fraud risk based on VPN and/or proprietary knowledge and periodic monitoring.
Pavlovic ("Log-normal Distribution - A simple explanation") teaches content about the log-normal distribution similar to that of Applicant's disclosure (specification paragraph 0045).
Vimal (US-2023/0186311-A1) (qualifying as prior art based on Indian priority date) teaches, inter alia, benchmarking a machine learning model based on precision, recall, F1, and/or F2 scores, see 0097.
Thomas (US-10997596-B1) teaches appending a fraud accuracy tag to a declined transaction, where the fraud accuracy tag is indicative of whether the decline of the transaction is a true positive decline or a false positive decline, whereby the fraud accuracy tag is suitable to provide insight into accuracy of a fraud strategy implemented in connection with the declined transaction.
Beckman (US-10872341-B1) teaches secondary fraud detection during transaction verification, where the transaction verification process with the user itself is evaluated and scored for likelihood of fraud, using machine learning, and including assigning designations of high risk, medium risk, and low risk, such that Beckman teaches content that, for purposes of the instant claims, is comparable to that of Selway (US-2013/0013491-A1) and Comeaux (US-11669844-B1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS W PINSKY whose telephone number is (571) 272-4131. The examiner can normally be reached 8:30 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached at 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DWP/
Examiner, Art Unit 3626
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626
1 Pages 0 and 8 refer to the page numbers as shown on the figures in question. These pages 0 and 8 correspond to pages 1 and 9 of the PDF of the figures as filed. Note the figures as filed do not include figure numbers.