DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. The applicant's submissions, the “AMENDMENT & RESPONSE” filed on 23 October 2025 (hereinafter referred to as the “Amendment/Response”), and the “SUPPLEMENTAL AMENDMENT & RESPONSE” filed on 29 December 2025 (hereinafter referred to as the “Supplement”), have been entered.
Status of the Claims
The pending claims in the present application are claims 1-20, as presented in the Supplement.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The paragraphs below provide rationales for the rejection. The rationales are based on the multi-step subject matter eligibility test outlined in MPEP 2106.
Step 1 of the eligibility analysis involves determining whether a claim falls within one of the four enumerated categories of patentable subject matter recited in 35 USC 101. (See MPEP 2106.03(I).) That is, Step 1 asks whether a claim is to a process, machine, manufacture, or composition of matter. (See MPEP 2106.03(II).) Referring to the pending claims, the “method” of claims 1-10 constitutes a process under 35 USC 101, the “method” of claims 11-18 also constitutes a process under the statute, and the “system” of claims 19 and 20 constitutes a machine under the statute. Accordingly, claims 1-20 meet the criteria of Step 1 of the eligibility analysis. The claims, however, fail to meet the criteria of subsequent steps of the eligibility analysis, as explained in the paragraphs below.
The next step of the eligibility analysis, Step 2A, involves determining whether a claim is directed to a judicial exception. (See MPEP 2106.04(II).) This step asks whether a claim is directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea. (See id.) Step 2A is a two-prong inquiry. (See MPEP 2106.04(II)(A).) Prong One and Prong Two are addressed below.
In the context of Step 2A of the eligibility analysis, Prong One asks whether a claim recites an abstract idea, law of nature, or natural phenomenon. (See MPEP 2106.04(II)(A)(1).) Using independent claim 1 as an example, the claim recites the following abstract idea limitations:
“A method, comprising: ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... obtaining customer data and enterprise data for a financial institution (FI) over a first interval of time; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... forecasting predicted future transactions and churn likelihood over a second interval of time for each customer of the FI based on the customer data and the enterprise data; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... predicting a customer lifetime value (CLV) for each customer over the second interval of time based on a savings account balance rating value assigned by the FI for each customer, wherein savings account balance rating value include low value, medium value, or high value; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... using outputted predicted CLVs as input ... along with the predicted future transactions and the churn likelihood to determine and calculate a final adjusted CLV per customer over the second interval of time; and ...” - See below regarding MPEP 2106.04(a), mathematical concepts, certain methods of organizing human activity, and mental processes
“... integrating the predicted future transactions, the churn likelihood, and the final adjusted CLV for each customer ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
Similarly, independent claim 11 recites the following abstract idea limitations:
“A method, comprising: ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... generate predicted credit transactions and a first churn likelihood per customer of a financial institution (FI) over a given interval of time; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... generate predicted debit transactions and a second churn likelihood per customer of the FI over the given interval of time; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... predicting a customer lifetime value (CLV) per customer of the FI over the given interval of time based on the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood and the CLV based on a savings account balance rating value assigned by the FI for each customer, wherein the savings account balance rating value includes low value, medium value, or high value; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... using outputted predicted CLVs as input ... along with the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood to determine and calculate a final adjusted CLV per customer over the given interval of time; ...” - See below regarding MPEP 2106.04(a), mathematical concepts, certain methods of organizing human activity, and mental processes
“... generating records per customer, each record includes a corresponding customer's predicted credit transactions, first churn likelihood, predicted debit transactions, second churn likelihood, and final adjusted CLV over the given interval of time; and ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... delivering the records ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
Similarly, independent claim 19 recites the following abstract idea limitations:
“... perform operations, comprising: ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... obtaining customer data and enterprise data from ... a FI; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... forecasting predicted credit transactions and a first churn likelihood of each customer over a given interval of time based on the customer data and the enterprise data; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... forecasting predicted debit transactions and a second churn likelihood of each customer over the given interval of time based on the customer data and the enterprise data; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... predicting a customer lifetime value (CLV) of each customer over the given interval of time based on the customer data, the enterprise data, the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood based on a savings account balance rating value assigned by the FI for each customer, wherein the savings account balance rating value includes low value, medium value, or high value; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... using outputted predicted CLVs as input ... along with the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood to determine and calculate a final adjusted CLV per customer over the given interval of time; ...” - See below regarding MPEP 2106.04(a), mathematical concepts, certain methods of organizing human activity, and mental processes
“... generating records per customer, each record includes a corresponding customer's predicted credit transactions, first churn likelihood, predicted debit transaction, second churn likelihood, and final adjusted CLV for the given interval of time; and ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... integrating the records ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
The above-listed limitations of independent claims 1, 11, and 19, when applying their broadest reasonable interpretations in light of their context in the claim as a whole, fall under enumerated groupings of abstract ideas outlined in MPEP 2106.04(a). For example, limitations of the claims can be characterized as: fundamental economic principles or practices, including forecasting financial transactions and predicting customer lifetime value; commercial interactions, including business relations between customers and financial institutions; and managing relationships or interactions between people, and in particular, between financial institutions and their customers, which fall under the certain methods of organizing human activity grouping of abstract ideas (see MPEP 2106.04(a)). Limitations of the claims also can be characterized as: concepts performed in the human mind, including observation (e.g., the recited “obtaining” step), and evaluation, judgment, and/or opinion (e.g., the recited “forecasting,” “predicting,” “using,” and “integrating” steps), which fall under the mental processes grouping of abstract ideas (see MPEP 2106.04(a)). Accordingly, for at least these reasons, claims 1, 11, and 19 fail to meet the criteria of Step 2A, Prong One of the eligibility analysis.
In the context of Step 2A of the eligibility analysis, Prong Two asks if the claim recites additional elements that integrate the judicial exception into a practical application. (See MPEP 2106.04(II)(A)(2).) Continuing to use independent claim 1 as an example, the claim recites the following additional element limitations:
“... training machine-learning models based on labeled parameter data that identifies factors relevant to predicting future transactions for a given customer's transaction history, wherein the labeled parameter data includes customer identifier, transaction date, transaction time, transaction location, transaction device, insufficient fund violations, interchange fees, costs of a given transaction, profit of a given transaction, loan terms, and loan interest, and wherein the training includes frequency and recency of transactions for each customer during the first interval of time to identify factors relevant to predicting the transactions or transaction rate in a given future period or interval of time; ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “input” is “to a CLV manager” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “integrating” is “into an interface or a system of the FI” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
Similarly, independent claim 11 recites the following additional element limitations:
“... training a first machine learning model (MLM) and a second MLM on labeled parameter data that identifies factors relevant to predicting future transactions for a given customer's transaction history, wherein the labeled parameter data includes customer identifier, transaction date, transaction time, transaction location, transaction device, insufficient fund violations, interchange fees, costs of a given transaction, profit of a given transaction, loan terms, and loan interest, and wherein the training includes frequency and recency of transactions for each customer to identify factors relevant to predicting the transactions or transaction rate in a given future period or interval of time; ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
“... training the first MLM to ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
“... training the second MLM to ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “input” is “to a CLV manager” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “delivering” is “to a system or an interface of the FI” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
Similarly, independent claim 19 recites the following additional element limitations:
“A system, comprising: at least one server comprising at least one processor and a non-transitory computer-readable storage medium; the non-transitory computer-readable storage medium comprising executable instructions; and the executable instructions when executed by at least one processor cause the at least one processor to” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “obtaining” is from “a financial institution (FI) server” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
“... training machine-learning models on labeled parameter data that identifies factors relevant to predicting future transactions for a given customer's transaction history, wherein the labeled parameter data includes customer identifier, transaction date, transaction time, transaction location, transaction device, insufficient fund violations, interchange fees, costs of a given transaction, profit of a given transaction, loan terms, and loan interest, and wherein the training includes frequency and recency of transactions for each customer to identify factors relevant to predicting the transactions or transaction rate in a given future period or interval of time; ...” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “input” is “to a CLV manager” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “integrating” is “into a system or an interface of the FI using an application programming interface” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The above-listed additional element limitations of independent claims 1, 11, and 19, when applying their broadest reasonable interpretations in light of their context in the claims as a whole, are analogous to: mere automation of manual processes, instructions to display two sets of information on a computer display in a non-interfering manner, without any limitations specifying how to achieve the desired result, and arranging transactional information on a graphical user interface in a manner that assists traders in processing information more quickly, which courts have indicated may not be sufficient to show an improvement in computer functionality (see MPEP 2106.05(a)(I)); a commonplace business method being applied on a general purpose computer, gathering and analyzing information using conventional techniques and displaying the result, and selecting a particular generic function for computer hardware to perform from within a range of fundamental or commonplace functions performed by the hardware, which courts have indicated may not be sufficient to show an improvement to technology (see MPEP 2106.05(a)(II)); a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions, and merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions, which do not qualify as a particular machine or use thereof (see MPEP 2106.05(b)(I)); a machine that is merely an object on which the method operates, which does not integrate the exception into a practical application (see MPEP 2106.05(b)(II)); use of a machine that contributes only nominally or insignificantly to the execution of the claimed method, which does not integrate a judicial exception (see MPEP 2106.05(b)(III)); transformation of an intangible concept such as a contractual obligation or mental judgment, which is not likely to provide significantly more (see MPEP 2106.05(c)); recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, which courts have found to be mere instructions to apply an exception, because they recite no more than an idea of a solution or outcome (see MPEP 2106.05(f)); use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea, a commonplace business method or mathematical algorithm being applied on a general purpose computer, and requiring the use of software to tailor information and provide it to the user on a generic computer, which courts have found to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process (see MPEP 2106.05(f)); mere data gathering in the form of obtaining information about transactions using the Internet to verify transactions and consulting and updating an activity log, and selecting a particular data source or type of data to be manipulated in the form of selecting information, based on types of information and availability of information in an environment, for collection, analysis, and display, which courts have found to be insignificant extra-solution activity (see MPEP 2106.05(g)); and specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer, which courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).
For at least these reasons, claims 1, 11, and 19 fail to meet the criteria of Step 2A, Prong Two of the eligibility analysis.
The next step of the eligibility analysis, Step 2B, asks whether a claim recites additional elements that amount to significantly more than the judicial exception. (See MPEP 2106.05(II).) The step involves identifying whether there are any additional elements in the claim beyond the judicial exceptions, and evaluating those additional elements individually and in combination to determine whether they contribute an inventive concept. (See id.) The ineligibility rationales applied at Step 2A, Prong Two, also apply to Step 2B. (See id.) For all of the reasons covered in the analysis performed at Step 2A, Prong Two, independent claims 1, 11, and 19 fail to meet the criteria of Step 2B. Further, claims 1, 11, and 19 also fail to meet the criteria of Step 2B because at least some of the additional elements are analogous to: receiving or transmitting data over a network, e.g., using the Internet to gather data, performing repetitive calculations, electronic recordkeeping, and storing and retrieving information in memory, which courts have recognized as well-understood, routine, conventional activity, and as insignificant extra-solution activity (see MPEP 2106.05(d)(II)). As a result, claims 1, 11, and 19 are rejected under 35 USC 101 as ineligible for patenting.
Regarding claims 2-10, 12-18, and 20, the claims depend from independent claims 1, 11, and 19, and expand upon limitations introduced by claims 1, 11, and 19. The dependent claims are rejected at least for the same reasons as claims 1, 11, and 19. For example, the dependent claims recite abstract idea elements similar to the abstract idea elements of claims 1, 11, and 19 that fall under the same abstract idea groupings as the abstract idea elements of claims 1, 11, and 19 (e.g., the “iterating to the obtaining at a preconfigured period of time to update each of the predicted future transactions, churn likelihood, and CLV for each customer of the FI” of claim 2, the “forecasting predicted debit transactions and predicted credit transactions for each customer separately” of claim 3, the “forecasting a first churn likelihood for the predicted debit transactions and forecasting a second churn likelihood for the predicted credit transactions” of claim 4, the “obtaining the predicted debit transactions with the first churn likelihood from a trained debit ... model” of claim 5, the “obtaining the predicted credit transactions with the second churn likelihood from a trained credit ... model” of claim 6, the “obtaining the CLV from a trained CLV ... model by providing as input the predicted debit transactions, the predicted credit transactions, the first churn likelihood, and the second churn likelihood” of claim 7, the “processing a statistical and heuristic algorithm using the predicted debit transactions, the predicted credit transactions, the first churn likelihood, and the second churn likelihood to obtain the CLV” of claim 8, the “providing the predicted future transactions, the churn likelihood, and the CLV for each customer” of claim 9, the “providing the predicted future transactions, the churn likelihood, and the CLV for each customer” of claim 10, the “updating each of the predicted credit transactions, the first churn likelihood, the predicted credit transactions, the second churn likelihood, and the CLV at predefined intervals of time based on actual observed transactions of each customer” of claim 12, the “training a third ...M to generate each CLV for each customer using as input corresponding predicted credit transactions, a corresponding first churn likelihood, a corresponding predicted debit transactions, and a corresponding second churn likelihood” of claim 13, the “processing a statistical and heuristic algorithm on each of the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood to obtain a corresponding CLV for a particular customer” of claim 14, the “adding a total predicted profit for the given interval of time for each record based on corresponding predicted credit transactions” of claim 15 (which also is a mathematical concept), the “adding a total predicted cost for the given interval of time for each record based on corresponding predicted debit transactions” of claim 16 (which also is a mathematical concept), the “providing the records” of claim 17, the “providing the records” of claim 18, and the “predicting the CLV of each customer based on a savings account balance associated with a corresponding customer” of claim 20). The dependent claims recite further additional elements that are similar to the additional elements of claims 1, 11, and 19 that fail to warrant eligibility for the same reasons as the additional elements of claims 1, 11, and 19 (e.g., the “machine learning” of claims 5-7, the “via an application programming interface (API) to the interface or the system” of claim 9, the “to a dashboard interface of the system via the API” of claim 10, the “ML” of claim 13, the “to the system or the interface via an application programming interface” of claim 17, the “to a dashboard interface associated with a system of the FI” of claim 18, and the “system” of claim 20). Accordingly, claims 2-10, 12-18, and 20 also are rejected as ineligible under 35 USC 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. App. Pub. No. 2019/0066130 A1 to Shen et al. (hereinafter referred to as “Shen”), in view of U.S. Pat. App. Pub. No. 2021/0350464 A1 to Martinez et al. (hereinafter referred to as “Martinez”), further in view of Li, Chun-Qing, Marcel Weverbergh, and Ting Gao, "Does A Long-Term and Active Customer of A Current Savings Account Have A High Lifetime Value?," Contemporary Management Research 8.2 (2012) (hereinafter referred to as “Li”), and further in view of U.S. Pat. App. Pub. No. 2022/0405775 A1 to Siebel et al. (hereinafter referred to as “Siebel”).
Regarding independent claim 1, Shen discloses the following limitations:
“A method, comprising: ...” - Shen discloses, “methods” (para. [0001]).
“... obtaining customer data and enterprise data for a financial institution (FI) over a first interval of time; ...” - Shen discloses, “Using historical data such as past transaction data, a customer's previous CV can be determined (e.g., from existing data, the CV for a customer over a previous period of time such as the last 6 or 12 months can be calculated).” (para. [0020]), “A user account may have a variety of associated funding mechanisms (e.g. a linked bank account, a credit card, etc.) and may also maintain a currency balance in the electronic payment account. A number of possible different funding sources can be used to provide a source of funds (credit, checking, balance, etc.)” (para. [0030]), “Events database (DB) 130 includes records of various actions taken by users of transaction system 160. These records can include any number of details, such as any information related to a transaction or to an action taken by a user on a web page or an application installed on a computing device (e.g., the PayPal app on a smartphone). Many or all of the records in events database 130 are transaction records including details of a user sending or receiving currency (or some other quantity, such as credit card award points, cryptocurrency, etc.)” (para. [0031]), “Input layer 305 may provide a variety of inputs for the model. These inputs can include past transactions for an electronic payment transaction service such as that provided by PayPal™ or other service providers” (para. [0045]), “Input data 405 for neural network 400, for example, can include not just transaction history, but also various profile data about a user” (para. [0054]), and “predicting, by the computer system, total customer values for each of a plurality of different users of an electronic transaction payment service over a particular period of time” (claim 8). 
Receiving inputs including data about user accounts with payment services and banks, records of actions taken by users, and the like, from particular periods of time, in Shen, reads on the recited limitation.
“... training machine-learning models based on labeled parameter data that identifies factors relevant to predicting future transactions for a given customer's transaction history, wherein the labeled parameter data includes customer identifier, transaction date, transaction time, transaction location, transaction device, insufficient fund violations, interchange fees, costs of a given transaction, profit of a given transaction, loan terms, and loan interest, and wherein the training includes frequency and recency of transactions for each customer during the first interval of time to identify factors relevant to predicting the transactions or transaction rate in a given future period or interval of time; ...” - See the aspects of Shen that have been cited above. Shen also discloses, “to accurately predict future user behavior and the outcomes of this behavior” (para. [0002]), “a user makes a $100 ACH transaction but the user's bank later denies the ACH for insufficient funds” (para. [0018]), “revenue paid to the service provider for a transaction” (para. [0019]), “Turning to FIG. 2, a block diagram is shown of one embodiment of sample records 200. This diagram is just one example of some of the types of data that can be maintained regarding electronic payment transactions engaged in by a user, and these records may be contained in events database 130” (para. [0033]), “interchange fees” (para. [0038]), “Also, many additional pieces of information may be present in events database 130 in various embodiments. An email address associated with an account (e.g. which can be used by users to direct an electronic payment to an account using only that account's associated email address) can be listed. Home address, phone number, and any number of other personal details can be listed. A transaction timestamp (e.g. date, time, hour, minute, second) is provided in various embodiments” (para. [0041]), “Turning to FIG. 4, a diagram of one embodiment of a densely connected neural network 400 is shown. The architecture for this densely connected neural network is broadly applicable, and need not be used only to calculate CV. In various embodiments, other quantities can be calculated for the model—indeed, this architecture can be used for any appropriate modeling task” (para. [0047]), “Neural network 400 can be optimized on an output task, such as CV, using historical data in various embodiments. For example, neural network 400 (which is densely connected) can be trained using past transaction data for users and/or other data (e.g. cost data, fraud loss data, revenues data, etc.)” (para. [0052]), “Once neural network 400 is trained, it can then be used to predict CV (or another quantity) for various input data” and “Input data 405 for neural network 400, for example, can include not just transaction history, but also various profile data about a user. Someone who has only recently joined PayPal™, for example, may still provide many pieces of information about themselves, such as mailing address, country of residence, linked funding sources (debit or credit card, checking account, etc.), an email address, and device information (e.g. what model of computer or smartphone the user has, whether they have connected to PayPal.com from different cities, network information such as IP addresses used to login, additional hardware device information like screen size and other fixed and/or changeable aspects of the device, etc.)” (para. [0054]), “Turning to FIG. 5, a diagram is shown of one embodiment of a unified model 500 for predicting customer value (CV) and component pieces of customer value such as cost, loss, revenue derived from a user sending money (Rev_S), and revenue derived from a user receiving money (Rev_R). This unified model allows simultaneous calculation from the same model for not only an overall objective (CV) but also calculations for related sub-variables (cost, loss, Rev_S, Rev_R) that are related components of the overall objective” (para. [0063]), “Once the first series of neural network modules (e.g. 510, 515, 520, 525) has finished its calculations, output from that series is then distributed to a plurality of variable sub-task neural network modules. In this example, the sub-task neural network modules comprise a first sub-task module including dense layers 530 and 531, a second sub-task module including dense layers 535 and 536, a third sub-task module including dense layers 540 and 541, and a fourth sub-task module including dense layers 545 and 546. These modules are each respectively designed to generate outputs for the sub-variables loss, cost, Rev_S, and Rev_R, as indicated by tasks 532, 537, 542, and 547” (para. [0063]). Shen also discloses “Account ID,” “Country,” “IP Address,” “Fee Costs,” (FIG. 2).
Training the neural network modules and variable sub-task neural network modules based on pieces of information in the events database related to calculated predictions of transactional loss, cost, and revenue, wherein the information includes the account ID, personal details, the timestamp (date, time, hour, minute, and second), the country, the IP address of the device, the insufficient funds, the fee costs, the revenues, and wherein the training information includes one or more transactions being performed during periods of time by customers to identify sub-variables relevant to predicting the transactional sub-variables in the future, in Shen, reads on the recited “training machine-learning models based on labeled parameter data that identifies factors relevant to predicting future transactions for a given customer's transaction history, wherein the labeled parameter data includes customer identifier, transaction date, transaction time, transaction location, transaction device, insufficient fund violations, interchange fees, costs of a given transaction, profit of a given transaction, ... and wherein the training includes frequency and recency of transactions for each customer during the first interval of time to identify factors relevant to predicting the transactions or transaction rate in a given future period or interval of time” limitation.
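For illustration only (no code appears in Shen or in the claims), the unified multi-task arrangement quoted above, a shared first series of dense layers whose output feeds separate sub-task heads for loss, cost, Rev_S, and Rev_R that combine into an overall CV, can be sketched as follows; every layer size and identifier here is an assumption:

```python
import random

random.seed(1)

def dense(n_in, n_out):
    # One fully connected layer: an n_out x n_in weight matrix plus biases.
    w = [[random.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def forward(layer, x, relu=True):
    w, b = layer
    out = [sum(wi * xi for wi, xi in zip(row, x)) + bi
           for row, bi in zip(w, b)]
    return [max(v, 0.0) for v in out] if relu else out

# Shared "first series" of dense layers feeding every sub-task head.
shared = [dense(8, 16), dense(16, 16)]

# One sub-task head per component variable, as in Shen's FIG. 5.
heads = {name: (dense(16, 8), dense(8, 1))
         for name in ("loss", "cost", "rev_s", "rev_r")}

def predict(features):
    # Features would be built from labeled transaction/profile data
    # (timestamps, fees, device information, etc.).
    h = features
    for layer in shared:
        h = forward(layer, h)
    out = {name: forward(final, forward(hidden, h), relu=False)[0]
           for name, (hidden, final) in heads.items()}
    # The overall objective (CV) combines the related sub-variables.
    out["cv"] = out["rev_s"] + out["rev_r"] - out["cost"] - out["loss"]
    return out
```

The sketch mirrors the cited structure (shared series, then per-sub-variable heads) without asserting anything about Shen's actual implementation.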
“... forecasting predicted future transactions ... over a second interval of time for each customer of the FI based on the customer data and the enterprise data; ...” - See the aspects of Shen that have been cited above. Shen also discloses, “A unified model architecture (which may or may not make use of a densely connected neural network) can also be used to predict not only CV, but also sub-components of CV” (para. [0023]), and “a diagram is shown of one embodiment of a unified model 500 for predicting customer value (CV) and component pieces of customer value such as cost, loss, revenue derived from a user sending money (Rev_S), and revenue derived from a user receiving money (Rev_R). This unified model allows simultaneous calculation from the same model for not only an overall objective (CV) but also calculations for related sub-variables (cost, loss, Rev_S, Rev_R) that are related components of the overall objective” (para. [0063]). The predicting of sub-components of CV, including the sending of money and the receiving of money, based on the input data, in Shen, reads on the recited limitation.
The combination of Shen and Martinez (hereinafter referred to as “Shen/Martinez”) teaches the following limitations of independent claim 1, which do not appear to be disclosed in their entirety by Shen:
The claimed “labeled parameter data” includes “loan terms, and loan interest” - Martinez discloses, “the financial product is at least one of a credit card, a line of credit, or a mortgage” (para. [0025]), “a term product, such as a mortgage” (para. [0044]), and “The remaining term value is the value of the term product over the remaining term between the current date 234 and the term date 236. For example, the remaining term value for a mortgage may include the remaining net interest income expected between the current date 234 and the term date 236” (para. [0061]). Information about mortgage term and interest, in Martinez, reads on the recited limitation.
The claimed “forecasting” also applies to “churn likelihood” - Martinez discloses, “the remaining lifetime of the customer is based on a remaining lifetime of a cohort. In an embodiment, the cohort is the first cohort. In an embodiment, the remaining lifetime of the cohort is based on an attrition driver for the cohort. In an embodiment, the attrition driver is at least one of a risk score, a usage rate, a default rate, and a delinquency rate. In an embodiment, the attrition driver is a customer exit rate based on historical customer data for the cohort. In an embodiment, the remaining lifetime is indicative of a point in time wherein 50% of or less of the customers originally in the cohort are no longer expected to remain in one of the plurality of cohorts” (para. [0030]), and “a computer-implemented method for determining a customer lifetime value (CLV) for a customer is disclosed, the method including segmenting a plurality of customers into a plurality of cohorts based on a performance driver indicative of future customer performance, wherein a first cohort includes the customer; generating a plurality of cohort forecasts corresponding to the plurality of cohorts, each cohort forecast based on the performance driver of each customer belonging to a corresponding cohort, wherein the plurality of cohort forecasts are generated for a remaining lifetime of the customer; and, calculating the CLV metric based on the plurality of cohort forecasts and a set of transition probabilities indicative of a likelihood that the customer remains in the first cohort, or transitions to a different cohort” (para. [0034]). Forecasting the remaining lifetime of the customer in terms of attrition and expected customer exit, in Martinez, reads on the recited limitation.
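For illustration only (Martinez provides no code), one hedged reading of the quoted attrition-driver approach, treating the remaining lifetime as the point at which 50% or less of a cohort is expected to remain given a constant per-period customer exit rate, can be sketched as:

```python
import math

def remaining_lifetime(exit_rate):
    """Periods until 50% or less of a cohort is expected to remain,
    given a constant per-period customer exit rate (an attrition
    driver in Martinez's terms)."""
    if not 0.0 < exit_rate < 1.0:
        raise ValueError("exit rate must be between 0 and 1")
    # Survival after n periods is (1 - exit_rate) ** n; find the first
    # n at which survival drops to 0.5 or below.
    return math.ceil(math.log(0.5) / math.log(1.0 - exit_rate))

# A cohort losing 5% of its customers per period falls to half
# strength after 14 periods.
print(remaining_lifetime(0.05))  # 14
```

The constant-exit-rate assumption is the sketch's, not Martinez's; Martinez also contemplates risk scores, usage rates, default rates, and delinquency rates as drivers.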
“... predicting a customer lifetime value (CLV) for each customer over the second interval of time; ...” - See the aspects of Shen and Martinez that have been cited above. Predicting the customer values, in Shen, in the form of expected or future customer lifetime values, in Martinez, reads on the recited limitation. Additionally or alternatively, Li is cited below to address the recited limitation.
“... using outputted predicted CLVs as input to a CLV manager along with the predicted future transactions and the churn likelihood to determine and calculate a final adjusted CLV per customer over the second interval of time; and ...” - See the aspects of Shen that have been cited above. Shen also discloses, “Neural network 400 can be optimized on an output task, such as CV, using historical data in various embodiments. For example, neural network 400 (which is densely connected) can be trained using past transaction data for users and/or other data (e.g. cost data, fraud loss data, revenues data, etc.). Training the model can be performed, in the case of CV, by taking known user-related data and running it through neural network 400. Predictions from the neural network 400 can then be compared to actual data,” “Adjustments can then be made to the different components of the neural network (e.g., to neurons within dense layer 410, 425, etc.) to see if tweaking the model results in better accuracy,” and “This process can be repeated many times, for different customers (with potentially millions or more trials) to tweak the model to produce good results for a large population. Neuron adjustments can include changing weighting and/or mathematical functions at those neurons to produce different results” (para. [0052]). Using the transactional loss, cost, and revenue predictions as inputs to the unified neural network, and using the predicted CLV as input for comparison purposes to the unified neural network to make adjustments that result in improved CLV, in Shen, reads on the recited “using outputted predicted CLVs as input to a CLV manager along with the predicted future transactions ... to determine and calculate a final adjusted CLV per customer over the second interval of time” limitation.
Using the forecasting of remaining lifetime of the customer in terms of attrition and expected customer exit, in Martinez, reads on the recited “and the churn likelihood” limitation.
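For illustration only (not code from Shen), the compare-and-adjust training described in Shen's para. [0052], running known data through the model, comparing predictions to actual data, and keeping only adjustments that improve accuracy, can be sketched as a simple perturb-and-keep loop; real systems would use gradient-based optimization, and the toy model below is an assumption:

```python
import random

random.seed(0)

def tweak_and_train(model, weights, examples, rounds=200):
    """Perturb the model's weights and keep only changes that improve
    accuracy against actual data (per the process Shen describes)."""
    def error(w):
        return sum((model(w, x) - y) ** 2 for x, y in examples)
    best = error(weights)
    for _ in range(rounds):
        candidate = [w + random.gauss(0.0, 0.1) for w in weights]
        e = error(candidate)
        if e < best:  # tweaking improved accuracy; keep the adjustment
            weights, best = candidate, e
    return weights, best

# Toy stand-in model: predicted CV as a weighted sum of two features.
model = lambda w, x: w[0] * x[0] + w[1] * x[1]
# Known user-related data paired with actual outcomes (true weights 2, 3).
examples = [((1.0, 2.0), 8.0), ((2.0, 1.0), 7.0)]
weights, err = tweak_and_train(model, [0.0, 0.0], examples)
```

The loop's "keep it if it helps" rule is one hedged reading of "to see if tweaking the model results in better accuracy," repeated many times.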
Martinez discloses “quantitative customer analysis” (abstract), similar to the claimed invention and to Shen. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the determining of the customer values and their component pieces, in Shen, to include consideration of financial products like loans, and using forecasts about the remaining lifetimes of customers, the likelihood of customers exiting, and attrition drivers, as in Martinez, because such values can be used for determining useful customer lifetime values, as taught by Martinez (see para. [0027]).
The combination of Shen, Martinez, and Li (hereinafter referred to as “Shen/Martinez/Li”) teaches the following limitations of independent claim 1, which do not appear to be taught in their entirety by Shen/Martinez:
“... predicting a customer lifetime value (CLV) for each customer over the second interval of time ...” - See above for the additional or alternative approach to addressing the recited limitation based on aspects disclosed by Martinez. Li discloses, “The current savings account CLV model has been introduced before. We used the average monthly balance of current savings account multiplied by the net interest rate at that time to arrive at the NIR; balance change times multiplied by the service cost allocation ratio to arrive at IE; the difference between NIR and IE is the customer profitability. In addition, the CLV is the customer’s profitability of all lifetime discounted values,” “In the calculation of CLV, we assume that customers’ transaction behavior in 2009.12 and after would be the same as in 2009.11; that is, the average monthly account balance and the balance change times remain unchanged. Then we calculate the customer’s present value of its profits in the infinite period.” and “a method to calculate CLV” (p. 142), and “CLV calculations are divided into three phases, the first phase of the training period, 1-12; 13-16 for the second stage of the forecast period, the third stage, 17-N for the infinite life-cycle of the CLV” (p. 151). Forecasting the CLVs of customers according to various date ranges and/or phases, in Li, reads on the recited limitation.
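For illustration only, Li's described calculation, NIR as average monthly balance times net interest rate, IE as balance change times multiplied by a service cost allocation ratio, profitability as NIR minus IE, and CLV as the discounted value of that constant profit over an infinite period, reduces to a perpetuity; the numeric inputs below are assumptions, not figures from Li:

```python
def monthly_profit(avg_balance, net_interest_rate, change_times, cost_ratio):
    """Customer profitability: net interest revenue (NIR) minus
    allocated service expense (IE), per Li's description."""
    nir = avg_balance * net_interest_rate          # NIR
    ie = change_times * cost_ratio                 # IE
    return nir - ie

def clv_perpetuity(profit, discount_rate):
    """Present value of a constant profit stream over an infinite
    horizon (a perpetuity): profit / discount_rate."""
    return profit / discount_rate

# Assumed inputs for illustration: 10,000 average balance at a 0.2%
# monthly net interest rate, 4 balance changes at 1.5 cost each.
profit = monthly_profit(10_000.0, 0.002, 4, 1.5)   # 20.0 - 6.0 = 14.0
clv = clv_perpetuity(profit, 0.005)                # 14.0 / 0.005 = 2800.0
```

The perpetuity form follows from Li's assumption that transaction behavior after 2009.11 remains unchanged; Li's phased (training/forecast/infinite life-cycle) treatment is not reproduced here.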
“... based on a savings account balance rating value assigned by the FI for each customer, wherein savings account balance rating value include low value, medium value, or high value; and ...” - See the aspects of Li that have been cited above. Li also discloses, “In personal bank service scenarios, the most valuable customers are those who are less active and have high account balances” (p. 158). Calculating customer CLVs based on customer savings account balances, including those identified as being high account balances, in Li, reads on the recited limitation.
Li discloses, “relationship marketing” (p. 141), similar to the claimed invention and to the combination of Shen and Martinez. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the determining of customer value, in the combination of Shen and Martinez, to include the savings account balance level and CLV aspects, of Li, at least because “three reasons encourage us to begin with the customers of current savings accounts,” as taught by Li (p. 142).
The combination of Shen, Martinez, Li, and Siebel (hereinafter referred to as “Shen/Martinez/Li/Siebel”) teaches the following limitations of independent claim 1, which do not appear to be taught in their entirety by Shen/Martinez/Li:
“... integrating the predicted future transactions, the churn likelihood, and the final adjusted CLV for each customer into an interface or a system of the FI.” - See the aspects of Shen, Martinez, and Li that have been cited above. Shen also discloses, “The peripherals 820 may include user interface devices such as a display screen” (para. [0104]). Shen does not appear to provide details about the specific type of content that is displayed on the display screen. Siebel teaches, “a user interface 1700 represents an executive dashboard interface that can be used to summarize information” (para. [0362]), and “The user interface 1700 also includes a forecast categories section 1706, which identifies various overall forecasts” (para. [0363]). Presenting information about the sending and receiving of money, the remaining lifetime of the customer in terms of attrition and expected exit, customer (lifetime) value, and adjusted customer (lifetime) value, in Shen/Martinez/Li, as part of the summarized information, reports, and forecasts, in Siebel, reads on the recited limitation.
Siebel teaches, “the use case here involves financial services, meaning the services provided to customers are financial in nature (such as banking or investment services)” (para. [0424]), similar to the claimed invention and to Shen/Martinez/Li. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have displayed the information and forecasts, of Shen/Martinez/Li, using the display screen of Shen/Martinez/Li, per the dashboard interface aspects, of Siebel, for its ability to visually convey elements of information, as taught by Siebel (see para. [0362]).
Regarding claim 2, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 1 further comprising: iterating to the obtaining at a preconfigured period of time to update each of the predicted future transactions, churn likelihood, and CLV for each customer of the FI.” - See the aspects of Shen, Martinez, and Li that have been cited above. Continuous operation of the method of Shen/Martinez/Li/Siebel teaches the recited limitations. For example, every instance of performance of the method of the combination (other than the first instance) reads on the recited “iterating to the obtaining at a preconfigured period of time” limitation. Results of performance of the method of the combination, to make predictions about the sending and receiving of money (of Shen), the attrition and CLV (of Martinez), and the CLV and savings accounts (of Li), read on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 2.
Regarding claim 3, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 1, wherein forecasting further includes forecasting predicted debit transactions and predicted credit transactions for each customer separately.” - See the aspects of Shen and Li that have been cited above. Shen also discloses, “Turning to FIG. 5, a diagram is shown of one embodiment of a unified model 500 for predicting customer value (CV) and component pieces of customer value such as cost, loss, revenue derived from a user sending money (Rev_S), and revenue derived from a user receiving money (Rev_R). This unified model allows simultaneous calculation from the same model for not only an overall objective (CV) but also calculations for related sub-variables (cost, loss, Rev_S, Rev_R) that are related components of the overall objective” (para. [0063]), “each of the variable sub-task neural network modules is configured to calculate a separate one of a plurality of component variables for predicted customer value. As noted above, for example, a loss variable sub-task neural network module including dense layers 530 and 531 is connected to the output of DBD module 525, as are other sub-task neural network modules for cost, Rev_S, and Rev_R” (para. [0071]). Making predictions about sending money and receiving money, by customers, using different sub-task neural network modules, in Shen, reads on the recited “wherein forecasting further includes forecasting predicted ... transactions for each customer separately” limitation. Li discloses, “Therefore, if we use average account balance as measuring unit, the number of transactions is the number of both debit and credit transactions and the transactions times linked with the average account balance changes. Of note, we assume that each transaction changes the balance” (p. 152).
Using the neural network modules for calculations about sub-variables, in Shen, when the sub-variables include the debit and credit transactions, in Li, reads on the recited “forecasting predicted debit transactions and predicted credit transactions” limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 3.
Regarding claim 4, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 3, wherein forecasting further includes forecasting a first churn likelihood for the predicted debit transactions and forecasting a second churn likelihood for the predicted credit transactions.” - See the aspects of Shen, Martinez, and Li that have been cited above. Making predictions about attrition, customer exits, and remaining lifetimes of customers, as in Martinez, based on forecasted debit transactions, and based on forecasted credit transactions, using the sub-task neural network modules, in Shen, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 4.
Regarding claim 5, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 4, wherein forecasting further includes obtaining the predicted debit transactions with the first churn likelihood from a trained debit machine learning model.” - See the aspects of Shen, Martinez, and Li that have been cited above. Receiving forecasted debit transactions (of Li), with predicted attrition/customer exits/remaining customer lifetimes (of Martinez), using the sub-task neural network modules (of Shen), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 5.
Regarding claim 6, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 5, wherein forecasting further includes obtaining the predicted credit transactions with the second churn likelihood from a trained credit machine learning model.” - See the aspects of Shen, Martinez, and Li that have been cited above. Receiving forecasted credit transactions (of Li), with predicted attrition/customer exits/remaining customer lifetimes (of Martinez), using the sub-task neural network modules (of Shen), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 6.
Regarding claim 7, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 6, wherein predicting further includes obtaining the CLV from a trained CLV machine learning model by providing as input the predicted debit transactions, the predicted credit transactions, the first churn likelihood, and the second churn likelihood.” - See the aspects of Shen, Martinez, and Li that have been cited above. Shen also discloses, “In operation 490, the neural network is trained by analysis system 120, in some embodiments. Training may include using historical data to optimize the neural network to predict a particular quantity, such as CV” (para. [0060]). Receiving the customer value (of Shen), from the trained neural network (of Shen), wherein the customer value includes the CLV (of Martinez and Li), based on the forecasted debit transactions, the forecasted credit transactions, and predicted attrition/customer exits/remaining customer lifetimes (of Martinez), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of claim 1, also apply to this rejection of claim 7.
Regarding claim 8, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 6, wherein predicting further includes processing a statistical and heuristic algorithm using the predicted debit transactions, the predicted credit transactions, the first churn likelihood, and the second churn likelihood to obtain the CLV.” - See the aspects of Shen, Martinez, and Li that have been cited above. Shen also discloses, “these adaptations represent unique ways of structuring and processing data using advanced technical algorithms and machine learning techniques” (para. [0079]), “a densely connected neural network (e.g. as in FIG. 4) and a structured convolutional neural network (e.g. as in FIG. 6) can be used in combined joint model that may achieve even greater accuracy for predicting CV (or another quantity)” and “Such a joint combined model can also operate as in FIG. 5 to predict multiple sub-variables for CV such as loss, cost, Rev_S, and Rev_R” (para. [0098]). Instituting joint models involving advanced technical algorithms and machine learning techniques, associated with various types and forms of neural networks, to predict CVs (in Shen), based on the forecasted debit and credit transactions (in Li), and the predicted attrition/customer exits/remaining customer lifetimes (in Martinez), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 8.
Regarding claim 9, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 1, wherein integrating further includes providing the predicted future transactions, the churn likelihood, and the CLV for each customer via an application programming interface (API) to the interface or the system.” - See the aspects of Shen, Martinez, Li, and Siebel that have been cited above. Siebel also teaches, “The outputs can also be presented in various ways depending on the AI-based CRM functions 226-238 being performed and the use case, such as when the outputs are used in graphical user interfaces, reports, or marketing campaigns. The outputs can further be provided in any suitable manner, such as via electronic communications/transmissions like via an application program interface (API), a real-time stream, or a dynamic graphical reporting interface” (para. [0123]). Displaying the predictions about sending and receiving money and customer value (see Shen); attrition, customer lifetimes, customer exits, and CLV (see Martinez); and CLV and savings accounts (see Li), using the interface and API, in Siebel, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 9.
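For illustration only, a per-customer record of the kind the claim recites (predicted future transactions, churn likelihood, CLV) might be serialized for delivery over an API as JSON; every field name and value here is a hypothetical assumption, not a schema from the claims, Shen, or Siebel:

```python
import json

# Hypothetical per-customer payload; all field names/values are assumptions.
record = {
    "customer_id": "C-1001",
    "interval": {"start": "2026-01-01", "end": "2026-12-31"},
    "predicted_transactions": {"debit": 42, "credit": 17},
    "churn_likelihood": {"debit": 0.12, "credit": 0.08},
    "final_adjusted_clv": 2800.0,
}

# Serialize for transmission to a dashboard interface or other system.
payload = json.dumps(record)
```

Siebel's para. [0123] contemplates delivery via an API, a real-time stream, or a dynamic graphical reporting interface; JSON is merely one common transport format.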
Regarding claim 10, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 9, wherein providing further includes providing the predicted future transactions, the churn likelihood, and the CLV for each customer to a dashboard interface of the system via the API.” - See the aspects of Shen, Martinez, Li, and Siebel that have been mentioned above. Displaying the predictions about sending and receiving money and customer value (see Shen); attrition, customer lifetimes, customer exits, and CLV (see Martinez); and CLV and savings accounts (see Li), using the dashboard and API, in Siebel, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 1, also apply to this rejection of claim 10.
Regarding independent claim 11, while the claim is of different scope relative to claims 1 and 4-7, the claims recite limitations similar to those recited by claims 1 and 4-7. As such, the rationales applied in the rejection of claims 1 and 4-7 also apply for purposes of rejecting claim 11. Claim 11 is, therefore, also rejected under 35 USC 103 as obvious in view of Shen/Martinez/Li/Siebel.
Regarding claim 12, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 11 further comprising: updating each of the predicted credit transactions, the first churn likelihood, the predicted debit transactions, the second churn likelihood, and the CLV at predefined intervals of time based on actual observed transactions of each customer.” - See the aspects of Shen, Martinez, Li, and Siebel that have been cited above. Shen also discloses, “if a user had an average annual CV of $135 for the last 5 years, he may be likely to have a similar CV for the next 12 month period of the future” (para. [0022]). Continual operation of the method and system of Shen/Martinez/Li/Siebel, involving using newly received information to determine the transactional losses, costs, revenues, and customer values (in Shen), and the forecasted CLVs and attrition (in Martinez), for 5 year periods, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 11, also apply for purposes of rejecting claim 12.
Regarding claim 13, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 11, wherein predicting further includes training a third MLM to generate each CLV for each customer using as input corresponding predicted credit transactions, a corresponding first churn likelihood, a corresponding predicted debit transactions, and a corresponding second churn likelihood.” - See the aspects of Shen, Martinez, and Li that have been cited above. The predicting involving training multiple neural network modules and variable sub-task neural network modules (per Shen), to generate customer values (per Shen) including customer lifetime values (per Martinez and Li), using as inputs transactions involving receipt or disbursement of various amounts (per Shen) and forecasts of attrition (per Martinez), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 11, also apply for purposes of rejecting claim 13.
Regarding claim 14, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 11, wherein predicting further includes processing a statistical and heuristic algorithm on each of the predicted credit transactions, the first churn likelihood, the predicted debit transactions, and the second churn likelihood to obtain a corresponding CLV for a particular customer.” - See the aspects of Shen, Martinez, and Li that have been cited above. Shen also discloses, “mathematical functions” (para. [0052]), and “using advanced technical algorithms and machine learning techniques” (para. [0079]). The predicting involving using the mathematical functions, advanced technical algorithms, and machine learning techniques on information including predictions on transaction losses, costs, and revenues associated with receiving and sending amounts (per Shen), and also including forecasts about attrition (per Martinez), to determine customer values (per Shen), in the form of customer lifetime values (per Martinez and Li), reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 11, also apply for purposes of rejecting claim 14.
Regarding claim 15, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 11, wherein generating further includes adding a total predicted profit for the given interval of time for each record based on corresponding predicted credit transactions.” - See the aspects of Shen that have been cited above. Predicting the revenue derived from the user receiving money in the future, in Shen, reads on the recited limitation.
Regarding claim 16, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 15, wherein generating further includes adding a total predicted cost for the given interval of time for each record based on corresponding predicted debit transactions.” - See the aspects of Shen that have been cited above. Determining the revenue derived from the user sending money, in Shen, reads on the recited limitation.
Regarding claim 17, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 11, wherein delivering further includes providing the records to the system or the interface via an application programming interface.” - Siebel discloses, “The outputs can further be provided in any suitable manner, such as via electronic communications/transmissions like via an application program interface (API), a real-time stream, or a dynamic graphical reporting interface” (para. [0123]). The rationales for combining the teachings of the cited references, from the rejection of independent claim 11, also apply to this rejection of claim 17.
Regarding claim 18, Shen/Martinez/Li/Siebel teaches the following limitations:
“The method of claim 17, wherein providing further includes providing the records to a dashboard interface associated with a system of the FI.” - See the aspects of Siebel that have been cited above. Siebel also discloses, “a user interface 1700 represents an executive dashboard interface that can be used to summarize information associated with a company or a portion thereof” (para. [0362]). Providing summarized information to the dashboard interface associated with the company, in Siebel, reads on the recited limitation. The rationales for combining the teachings of the cited references, from the rejection of independent claim 11, also apply to this rejection of claim 18.
Regarding claims 19 and 20, while the claims are of different scope relative to independent claims 1 and 11, the claims recite limitations similar to those recited by claims 1 and 11. As such, the rationales applied to reject claims 1 and 11 also apply for purposes of rejecting claims 19 and 20. Limitations recited by claims 19 and 20 that do not appear to have a counterpart in claims 1 and 11, such as the recited “system, comprising: at least one server comprising at least one processor and a non-transitory computer-readable storage medium; the non-transitory computer-readable storage medium comprising executable instructions; and the executable instructions when executed by at least one processor cause the at least one processor to perform operations” limitation of claim 19, are taught by Shen/Martinez/Li/Siebel (see, e.g., paras. [0099]-[0104], describing elements of the “Computer-Readable Medium” and “Computer System” of Shen). Claims 19 and 20 are, therefore, also rejected under 35 USC 103 as obvious in view of Shen/Martinez/Li/Siebel.
Response to Arguments
The applicant’s remarks and arguments from the Amendment/Response are addressed first below. The applicant’s remarks and arguments from the Supplement are addressed thereafter.
On pp. 8-10 of the Amendment/Response, the applicant argues for reconsideration and withdrawal of the claim rejection under 35 USC 101. More specifically, the applicant argues that the amended claims solve a specific technical problem. (See Amendment/Response, p. 8.) The asserted technical problem is that conventional machine learning systems fail to account for banking-specific phenomena. (See Amendment/Response, p. 9.) The applicant also argues that the claims offer a technical solution addressing the problem through training machine learning models on specific labeled parameter data. (See Amendment/Response, p. 9.) The examiner finds the arguments unpersuasive. Selecting specific training datasets for machine learning is conventional. While an unconventional technical solution to a technological problem may lead to eligibility (see, e.g., MPEP 2106(II)), a conventional technical solution does not.
The applicant also argues that the amended claims strongly align with eligible claim 3 of the Office’s Example 47, in that both use specific labeled training data, solve technological problems, and provide specific technological improvements. (See Amendment/Response, p. 9.) The examiner finds the arguments unpersuasive. Claim 3 of Example 47 detects network anomalies to improve network security, which is clearly technological. The applicant’s claims, on the other hand, specify a non-technological improvement. The applicant characterizes the claims as improving machine learning accuracy. (See Amendment/Response, p. 9.) The examiner takes another view. The claims improve the accuracy of banking predictions by using conventional machine learning modeling to process information, instead of performing the information processing manually. Further, while network intrusion detection is a technical field, making banking predictions is not. The applicant’s claims are more like ineligible claim 2 of Example 47 than eligible claim 3 of the example.
The applicant also argues that the amended claims integrate any alleged judicial exception into a practical application by improving machine learning training technology itself. (See Amendment/Response, p. 10.) The applicant highlights that the training uses labeled parameter data including specific banking parameters combined with frequency and recency of transactions, which, according to the applicant, provides a technological solution that enables models to account for banking-specific phenomena. (See Amendment/Response, p. 10.) The applicant also argues that the claims recite a practical application by solving the technological problem that conventional systems fail to address banking-specific requirements where customer behavior patterns require specialized parameter identification for accurate prediction. (See Amendment/Response, p. 10.) The examiner finds the arguments unpersuasive. Selecting specific types of training data is part of conventional machine learning. Virtually all machine learning models are trained in such a way. The use of specific data does not improve the machine learning itself. Note that element (b) in claim 2 of Example 47 did not lead to eligibility. Also, addressing banking-specific requirements where customer behavior patterns require specialized parameter identification for accurate prediction is not a technological problem. It is a banking prediction problem.
The applicant also argues that the specific training methodology using comprehensive labeled parameter data combined with frequency and recency analysis represents an unconventional technical solution, not well-understood, routine, or conventional activity. (See Amendment/Response, p. 10.) The examiner finds the arguments unpersuasive. The cited references in the 35 USC 103 rejection establish that training machine learning models on specific forms of data is well-understood, routine, and conventional. It is an act performed in virtually all forms of conventional machine learning. In any event, there are multiple other ineligibility rationales presented in the 35 USC 101 section above that act as a bar to eligibility.
On pp. 10-12 of the Amendment/Response, the applicant requests reconsideration and withdrawal of the claim rejection under 35 USC 103. More specifically, the applicant argues that the cited references fail to disclose, teach, or suggest the claimed training of machine learning models on various labeled parameter data. The examiner finds the arguments unpersuasive. The 35 USC 103 section above cites new passages and figures from the cited references that read on said claim limitations.
The applicant also argues that the combination of the cited references is improper because there is no suggestion or motivation to modify their teachings to arrive at the specific comprehensive training methodology claimed. (See Amendment/Response, pp. 11 and 12.) The examiner finds the arguments unpersuasive. The 35 USC 103 section above outlines suggestions or motivations for the combinations, which together disclose, teach, or suggest the claimed training methodology.
On pp. 8-14 of the Supplement, the applicant argues for reconsideration and withdrawal of the claim rejection under 35 USC 101. More specifically, the applicant argues that the amended claims now more explicitly recite the specific technical problem and solution (conventional machine learning systems fail to account for banking-specific phenomena where customers generate value through multiple disparate mechanisms while generating costs through various channels). (See Supplement, p. 8.) The examiner finds the arguments unpersuasive. The problem solved by the claims is not technological. The problem being solved is achieving more accurate determinations of CLV. The solution involves using separate models. The separate models process data using conventional, generic machine learning.
The applicant also argues that the amended claims recite technical improvements similar to those in Desjardins. (See Supplement, p. 9.) According to the applicant, the claimed training methodology represents a concrete improvement to how machine learning models are trained and operate in the banking domain, analogous to how Desjardins improved machine learning. (See Supplement, p. 9.) The examiner finds the arguments unpersuasive. Desjardins involves a claim for training a machine learning model, with specific steps about how the training is performed so that the model can learn new tasks in succession while protecting knowledge about previous tasks. While Desjardins describes how training should be performed in order to improve machine learning, the applicant’s claims merely describe the types of training data that should be used in conventional, generic machine learning training. The machine learning training process is not improved by the applicant’s claims. Rather, the applicant’s claims read like applying conventional, generic machine learning training processes on banking data.
The applicant also argues that the claims offer a technical solution via specialized training methodology, two-stage prediction architecture, and iterative refinement. (See Supplement, p. 10.) The examiner finds the arguments unpersuasive. The claimed training methodology is not specialized. It is generic, conventional machine learning training. The data used in training might be specialized, but that can be said of every machine learning model training process. The alleged two-stage prediction architecture reads like merely using two generic, conventional machine learning models, often referred to as ensemble learning. Iterative refinement is inherent in virtually all forms of conventional, generic machine learning.
The applicant also argues that the claims integrate any judicial exception into a practical application through elements that improve machine learning training and prediction technology, including specialized training on banking-specific parameters, frequency and recency-based factor identification, and two-stage CLV determination with a CLV manager. (See Supplement, pp. 10-13.) The examiner finds the arguments unpersuasive, for the reasons explained in the immediately preceding paragraph. Further, the applicant’s claims involve using conventional machine learning processes to predict CLVs, unlike the claims of Desjardins, which involve actual improvements to how machine learning training is performed. The examiner contends that neither Desjardins nor any other eligibility rationale set forth by the Office establishes that a machine learning model is improved or becomes unconventional merely based on the type of content supplied as training data. If that were the case, every machine learning model would be eligible before the Office just based on using machine learning in a specific context. Whether a machine learning model is trained on banking data, ridesharing data, or appointment scheduling data makes no difference to the issue of eligibility. Perhaps if the claims and specification establish that the machine learning is being trained on data that it could not have been trained on conventionally due to some improvement to how the training is conducted, that might lead to eligibility. But the content or subject matter of the training data is of little weight for eligibility determinations. The applicant’s remaining arguments in the Supplement have already been addressed by various paragraphs above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Such prior art includes the following:
U.S. Pat. App. Pub. No. 2015/0073954 A1 to Braff discloses, “providing anonymized, filtered data from a financial institution having cardholders to a business client. The method may include the steps of storing data in at least one database. The data may include credit card transaction data and debit card transaction data maintained by the financial institution, cardholder demographic data maintained by the financial institution, and other data maintained by the business client. The method may also implement a heuristic process to clean the data. Further, the method may comprise providing an interface for the business client to allow the business client to filter and display the data. The interface can anonymize the data to safeguard the privacy of the cardholders, receive input from the business client as to desired filtering criteria, wherein the filtering criteria include time period, geographic region, type of merchant, and cardholder demographic data, and present the anonymized, filtered data to the business client. The invention can thus enable improved decision making by the business client in various promotions, investments and other transactions.” (Abstract.)
U.S. Pat. App. Pub. No. 2019/0019213 A1 to Silberman et al. discloses, “a computing device may determine, from multiple data sources, multiple event timelines, with each event timeline associated with a customer. Each event in an event timeline represents an interaction between the customer and a vendor of goods and/or services. For N (N>1) marketing campaigns, N augmented timelines may be created for each timeline by augmenting each event timeline with the individual marketing campaigns. Thus, for M (M>1) customers, M×N augmented event timelines may be created. A trained machine learning model may perform an analysis of each augmented event timeline to predict results of executing each marketing campaign. The results may include total predicted revenue and total predicted cost resulting from executing each marketing campaign. A particular marketing campaign from the N marketing campaigns may be selected and execution of one or more marketing events may be initiated.” (Abstract.)
U.S. Pat. App. Pub. No. 2023/0032429 A1 to Raj Susairaju et al. discloses, “A system and method that provides predictions about the propensity of customers to answer a communication, pay an outstanding bill, and remain a customer. Furthermore, the system and method use propensity predictions to optimize the order of tasks that are carried out by integrated business systems. During a specific time-block, an optimization may reconfigure an auto-dialer to contact only the most likely customers who are both willing to pay and who are most likely to answer, as one example. Another example may be the reordering of tasks provided to a call agent's computing device. The system uses machine learning for the predictions and optimizations, and continuously and automatically updates the machine learning models over time.” (Abstract.)
U.S. Pat. App. Pub. No. 2023/0140020 A1 to Parameswar discloses, “facilitating privacy conscious market collaboration. A first-party computing system can access a first-party user information attribute from a first-party identified user profile associated with a first-party identified user and generate a first-party hashed user information attribute including indecipherable text by applying a predetermined hash function. The system can transmit to a third-party computing system a communication including the first-party hashed user information attribute and a payload including customer insight data. The third-party computing system can determine that a third-party hashed user information attribute associated with a third-party identified user profile includes indecipherable text that matches the indecipherable text of the first-party hashed user information attribute. The third-party computing system can provide to a third-party identified user associated with the third-party identified user profile an advertisement that is based on the customer insight data in the payload of the communication.” (Abstract.)
CN Pat. App. Pub. No. 112700286 A to Huang et al. discloses, “a deep learning integrated model for client classification and multi-entity matching policy. in a novel aspect, based on the client life cycle value (CLV) of the deep learning model (DNN) using data mining and recurrent neural network (RNN) - convolutional neural network (CNN) aggregation to identify potential foreground from potential customers, prediction loss/reservation; predicting the next purchase; recommending strategy to maintain and enhance the existing client relationship; and providing a potential client/client, an agent, an n-element matching between the product and the delivery strategy. In one embodiment, the CLV system obtains the CLV profile of the client; using the DNN model to generate the output based on CLV for the client; selecting n-element matching for the client according to the output based on the CLV; and collecting the feedback for n-element matching to update the n-element matching until satisfying one or more exit conditions.” (English-language abstract.)
WIPO Int’l Pub. No. 2022/140839 A1 to Stanevich et al. discloses, “computer-implemented apparatuses and methods that predict occurrences of temporally separated events using adaptively trained artificial intelligence processes. For example, an apparatus may generate an input dataset based on first interaction data that characterizes an occurrence of a first event, and may apply a trained artificial intelligence process to the input dataset. Based on the application of the trained artificial intelligence process to the input dataset, the apparatus may generate output data representative of a predicted likelihood of an occurrence of a second event within a predetermined time period subsequent to the occurrence of the first event, and may transmit the output data to a computing system. The computing system may generate second interaction data specifying an operation associated with the occurrence of the first event based on the output data, and perform the operation in accordance with the second interaction data.” (Abstract.)
Sun, Yuechi, Haiyan Liu, and Yu Gao. "Research on customer lifetime value based on machine learning algorithms and customer relationship management analysis model." Heliyon 9.2 (2023).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS Y. HO, whose telephone number is (571)270-7918. The examiner can normally be reached Monday through Friday, 9:30 AM to 5:30 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS YIH HO/Primary Examiner, Art Unit 3624