DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 9, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,599,939 (“Patent ’939”). Although the claims at issue are not identical, they are not patentably distinct from each other because both claim 1 of the ’939 patent and claim 1 of the instant application describe a method involving: modifying or varying weights applied to account for changes in business conditions; performing an action based on receiving an indication of a loan application and associated data; updating or configuring a machine learning model based on weights applied to the variables of the objective function; determining, based on a localized linearity process being applied to the machine learning model, one or more categories of reasons for rejection mapped to adverse action notices; and sending an indication of an adverse action notice based on the mapping of categories of reasons for rejection to adverse action notices, or an indication of the reason for the loan decision. Claim 1 of Patent ’939 recites using an ensemble machine learning model. Claim 1 of the application also recites using an ensemble machine learning model.
With regard to the differences between claim 1 of Patent ’939 and claim 1 of the instant application, claim 1 of the instant application recites determining a loan decision associated with the loan application and sending an indication of the loan decision, whereas claim 1 of Patent ’939 recites determining categories of reasons for rejection mapped to adverse action notices and sending, to a device associated with the loan application, an indication of an adverse action notice based on the mapping of categories. However, an adverse action notice sent to a device associated with a loan application based on a mapping of categories of reasons for rejection is itself a type of loan decision, and sending a notice indicating a reason for the loan decision (which is equivalent to mapping categories of rejection to an adverse action notice) would have been obvious to implement in view of claim 1 of Patent ’939 because a rejection of a loan application is an adverse action. Independent claims 9 and 15 of the instant application recite substantially the same concepts and are also rejected over claim 1 of the ’939 patent.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-11, 13-17, and 19-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites configuring an ensemble machine learning model based on modifying weights applied to one or more variables of an objective function to account for changes in business conditions; determining data associated with a loan application; determining a loan decision based on providing the data associated with processing the loan application; sending an indication of the loan decision; and training the plurality of machine learning models based on the objective function.
This is a fundamental economic practice and commercial interaction (agreement and business relations) because it is directed to loan applications and loan application decisions. It is also a mental process (a concept performed in the human mind, including an observation, evaluation, judgment, or opinion) because it can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper. See MPEP 2106.04(a)(2), subsection III.
This judicial exception is not integrated into a practical application because the claim recites no additional elements. The claim does not require a computer to carry out the method. Even if a computer could be inferred, it is recited at a high level of generality and amounts to no more than mere instructions to apply the abstract idea using a generic computer, or amounts to merely using a computer as a tool to perform the abstract idea. Accordingly, any such additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
The claim does not include additional elements, considered individually and in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer (if inferred), or merely using a computer as a tool to perform the abstract idea, amount to no more than mere instructions to apply the exception using generic computer and/or computer network components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Thus, the claim is not patent-eligible.
Independent claims 9 and 15 recite substantially the same concepts and also recite only the machine learning model as an additional element, and are rejected for the same reason.
Claims 2, 10, and 16 add determining a reason for a loan decision based on analysis of output associated with the loan decision and indicating the reason for the loan decision, which is a mental process (evaluation/judgment/opinion) and a commercial interaction (loan decision) and does not integrate the abstract idea into a practical application or provide significantly more.
Claims 3, 11, and 17 add that determining the reason comprises determining, based on a localized interpretation process being applied to the output associated with the loan decision, one or more categories of reasons for rejection mapped to adverse action notices. This is an abstract idea that falls into the category of mental process or mathematical concept and does not integrate the abstract idea into a practical application or provide significantly more. Providing reasons for rejection of a loan application also falls under the abstract idea grouping of commercial or legal interactions (legal obligations) because the Specification at ¶[0008] explains that at least two federal laws require that consumers and businesses applying for credit receive notice of the reasons a creditor took adverse action on the application or credit account.
Claims 5, 13, and 19 recite updating the objective function based on causing the one or more weights to be varied over time to account for the changes in the one or more business conditions, and one or more of updating or generating the machine learning model based on the updated objective function. This is a mental process and does not integrate the abstract idea into a practical application or provide significantly more.
Claims 6, 14, and 20 recite which business variables determine the objective function, which is a mental process and does not integrate the abstract idea into a practical application or provide significantly more.
Claim 7 recites determining one or more business objectives using a computational model and generating the objective function based on the one or more business objectives, which is a mental process and does not integrate the abstract idea into a practical application or provide significantly more.
Claim 8 only further specifies the source of the data and thus is insignificant extra-solution activity. See MPEP § 2106.05(g). This does not integrate the abstract idea into a practical application or provide significantly more.
Independent claim 21 recites causing a machine learning model to be configured based on modifying one or more weights applied to one or more variables of an objective function to account for changes in one or more business conditions; determining, based on receiving an indication of a loan application, data associated with the loan application; determining, based on providing the data associated with processing the loan application to the machine learning model, a loan decision associated with the loan application; determining, based on analysis of the machine learning model, a reason for the loan decision; and sending an indication of the loan decision and the reason for the loan decision.
This is a fundamental economic practice and commercial interaction (agreement and business relations) because it is directed to loan applications and loan application decisions, and a mental process (a concept performed in the human mind, including an observation, evaluation, judgment, or opinion) because it can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper. See MPEP 2106.04(a)(2), subsection III. In addition, providing reasons for rejection of a loan application also falls under the abstract idea grouping of commercial or legal interactions (legal obligations) because the Specification at ¶[0008] explains that at least two federal laws require that consumers and businesses applying for credit receive notice of the reasons a creditor took adverse action on the application or credit account.
This judicial exception is not integrated into a practical application because the claim recites no additional elements. The claim does not require a computer to carry out the method. Even if a computer could be inferred, it is recited at a high level of generality and amounts to no more than mere instructions to apply the abstract idea using a generic computer, or amounts to merely using a computer as a tool to perform the abstract idea. Accordingly, any such additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
The claim does not include additional elements, considered individually and in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer (if inferred), or merely using a computer as a tool to perform the abstract idea, amount to no more than mere instructions to apply the exception using generic computer and/or computer network components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Thus, the claim is not patent-eligible.
Claim 22 adds that determining the reason for a loan decision includes using a localized linearity interpretation process applied to the output associated with the loan decision and mapping categories of reasons for rejection to adverse action notices. This is an abstract idea that falls into the category of mental process or mathematical concept and does not integrate the abstract idea into a practical application or provide significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-11, 13-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over NIKANKIN (US 20130138554 A1 to Nikankin, Andrey N. et al.) in view of MERRILL (US 20190378210 A1 to Merrill, Douglas C. et al.) and in further view of HARGRAS (US 20220036221 A1 to Hargras, Hani et al.).
Regarding claim(s) 1, 9, and 15,
NIKANKIN discloses:
A method comprising: causing a machine learning model to be configured (NIKANKIN: [0041]: the risk assessment component 202 can also employ intelligent determinations or inferences in connection with determining a risk of lending.[…] the approval component 206 can intelligently determine or infer the user's 104 eligibility for one or more loans.[…] the foregoing inferences can potentially be based upon, e.g., Bayesian probabilities or confidence measures or based upon machine learning techniques related to historical analysis, feedback, and/or other determinations or inferences.)
based on modifying one or more weights applied to one or more variables of an objective function to account for changes in one or more business conditions (NIKANKIN: ¶[0031]: The risk assessment component 202 can dynamically classify, rate, or otherwise determine a risk of lending to the user 104 based on a set of risk assessment criterion. The set of risk assessment criterion can include, but is not limited to, risk of loss, expense of lending, or economic conditions. For example, if the user 104 has a high risk of loss, or a high costs of lending, the risk assessment component can classify the user 104 as high risk; ¶[0026]: the scoring calculation component 103 can assess, grade, or otherwise weight one or more of the characteristics based on a set of scoring criterion, and determine the credit score as a function of the respective weights of the factors; NIKANKIN: [0041]: inferences can potentially be based upon machine learning techniques related to historical analysis, feedback, and/or other determinations or inferences.; [0035] the risk assessment component 202 can dynamically determine a risk of lending to the user 104 based in part on a set of risk assessment criterion. the forecasting component 302 can include an economic conditions component 308. the economic conditions component 308 can forecast a set of economic conditions, including, but not limited to, local economic conditions (e.g., for the user 104), or overall economic conditions.
For example, the economic conditions component 308 can forecast a potential risk of job loss for the user 104 (e.g., reduced income, likelihood of default).; [0036] The forecasting component 302 can obtain, acquire, or otherwise receive information or data for use in forecasting the subset of the risk assessment criterion from virtually any source, including, but not limited to, the set of data sources 108 or the set of profiles 106 maintained in the data store 112; the subset of the risk assessment criterion can include a virtually infinite number of criterion, and the forecasting component 302 can include a virtually infinite number of components to facilitate forecasting the subset of the risk assessment criterion.);
determining, based on receiving an indication of a loan application, data associated with the loan application (NIKANKIN: ¶[0026]: The system 100 includes a consumer information aggregator component 102 (aggregator component 102), a scoring calculation component 103, and a credit standards component 105. The consumer information aggregator component 102 can obtain, locate, or otherwise acquire data relating to a user 104, and generate a profile of candidate characteristics 106 (profile 106) based at least in part on the data ; ¶[0028]: The scoring calculation component 103 can examine, inspect or otherwise analyze the profile 106, and generate a credit score (e.g., grade, rank, etc.) for the user 104. The scoring calculation component 103 analyzes the profile 106 to determine whether various loan eligibility determination factors (factors) are included in the profile 106. For example, the factors can include, but are not limited to, the user's 104 employment, education, demographics, hobbies, residency, internet usage, and so forth. The credit score can be generated as a function of the factors included in the profile 106. For example, the scoring calculation component 103 can assess, grade, or otherwise weight one or more of the characteristics based on a set of scoring criterion, and determine the credit score as a function of the respective weights of the factors.);
determining, based on providing the data associated with processing the loan application to the machine learning model, a loan decision associated with the loan application (NIKANKIN: ¶[0002]: when a consumer needs to obtain a quick credit decision; ¶[0007]: In another non-limiting embodiment, an exemplary method is provided that includes determining a risk of lending to a user, and classifying the user as a function of the risk, generating a set of lending criteria for the determined classification, and determining an eligibility of the user for one or more loans based on a comparison of a profile associated with the user and the set of lending criteria.);
NIKANKIN does not expressly disclose the following limitations, which HARGRAS, however, teaches:
and sending an indication of the loan decision. (HARGRAS: ¶[0044]: FIG. 11 shows a decision presented by the local AI model in a comprehensive format.; [0045] FIG. 12 shows the decision presented in compact view format (“This instance was classified as Uncreditworthy”); [0252] for instance, in the context of the lending business, this approach would allow an explanation as to how a given customer can switch from being a “Creditworthy” to an “Uncreditworthy” customer, to understand the edge cases and any risks. When a user wants to query, the user simply clicks on the given decision and the system queries the generated local model and presents the user with the reasoning behind the given decision in a linguistic format which has pros and cons where the contributing rules are weighted according to their importance in the given decision. The decision can be presented in a comprehensive format as shown in FIG. 11 or compact view format as shown in FIG. 12.)
It would have been obvious to one of ordinary skill in the art to combine the electronic systems and techniques for implementing dynamic risk assessment and credit standards generation of NIKANKIN with the system of HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]) in order to allow users to easily understand credit decisions and permit gaining insights about the differences between classes in the context of the lending business (HARGRAS ¶[0252]).
NIKANKIN does not expressly disclose the following limitations, which MERRILL, however, teaches:
wherein the machine learning model comprises an ensemble machine learning model based on a plurality of machine learning models, and wherein causing the machine learning model to be configured comprises causing the plurality of machine learning models to be trained based on the objective function (MERRILL: ¶[0016]: embodiments herein include a method of using machine learning model interpretations to determine whether a heterogeneous, ensembled model has disparate impact, which of the variables used in the model are driving the disparity, the degree to which they are driving the disparity, and their relationship to business objectives such as profitability.; ¶[0019]: Machine learning models are often ensembles of heterogeneous sub-models. For example, a neural network may be combined with a tree-based model such as a random forest, or gradient-boosted tree by averaging the values of each of the sub-models to produce an ensemble score. Other computable functions can be used to ensemble heterogeneous sub model scores.; ¶[0139]: metadata includes the model or ensemble and metadata, including training data, ¶[0033]: In some embodiments, the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set. the score explanation information is used to generate Adverse Action information. the score explanation information is used to generate an Adverse Action letter in order to allow lenders to comply with 15 U.S.C. § 1681 et. seq.; [0034]: the model evaluation system evaluates the model (or ensemble) by generating information that allows the operator to determine whether the disparate impact has adequate business justification.).
It would have been obvious to one of ordinary skill in the art to modify the combination of NIKANKIN, which discloses electronic systems and techniques for implementing dynamic risk assessment and credit standards generation, and HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]), with the system and method of MERRILL, which provides machine learning model explainability information to comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act (MERRILL ¶[0013]), in order to take advantage of more complex models when making lending decisions (MERRILL ¶[0018]) and because machine learning models are often ensembles of heterogeneous sub-models (MERRILL ¶[0019]).
Regarding claim(s) 2, 10, and 16,
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claims 1, 9, and 15, as shown above.
NIKANKIN does not expressly disclose the following limitations, which MERRILL, however, teaches:
further comprising determining, based on analysis of output of the machine learning model associated with the loan decision, a reason for the loan decision and sending an indication of the reason for the loan decision (MERRILL: ¶[0030]: In some embodiments, the model evaluation and explanation system (e.g., 120 of FIGS. 1A and 1B) uses a non-differentiable model decomposition module (e.g., 121) to decompose scores generated by a model by computing at least one SHAP (SHapley Additive exPlanation) value; ¶[0032]: In some embodiments, model evaluation and explanation system 120 uses score decompositions to determine important features of a model (or ensemble) that impact scores generated by the model (or ensemble).; ¶[0033]: the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set. In some embodiments, the score explanation information is used to generate Adverse Action information. In some embodiments, the score explanation information is used to generate an Adverse Action letter in order to allow lenders to comply with 15 U.S.C. § 1681 et seq.; ¶[0018]: in the United States, under the Fair Credit Reporting Act 15 U.S.C. § 1681 et seq., when generating a decision to deny a consumer credit application, lenders are required to provide to each consumer the reasons why the credit application was denied; Figure 1B: score passed to decomposition module)
It would have been obvious to one of ordinary skill in the art to modify the combination of NIKANKIN, which discloses electronic systems and techniques for implementing dynamic risk assessment and credit standards generation, and HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]), with the system and method of MERRILL, which provides machine learning model explainability information to comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act (MERRILL ¶[0013]), in order to take advantage of more complex models when making lending decisions (MERRILL ¶[0018]) and because machine learning models are often ensembles of heterogeneous sub-models (MERRILL ¶[0019]).
Regarding claim(s) 3, 11 and 17,
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claims 1, 2, 9, 10, 15, and 16, as shown above.
NIKANKIN does not expressly disclose the following limitations, which HARGRAS, however, teaches:
wherein determining the reason for the loan decision comprises determining, based on a localized interpretation process being applied to the output associated with the loan decision (HARGRAS: ¶[0017]: In [12] a method is presented to explain a prediction by sampling the input feature space around the instance to be explained. The sampled points are “close” to the original one, in order to capture and maintain local fidelity and meaning. In addition, the contribution of each point is weighted according to some distance metric capturing “how far away” the samples are from the explanation point.; ¶[0027]: feeding the input into the Type-2 FLM to provide an explanation of the output from the opaque model from a local point of view; ¶[0236]: The high-level description of the workflow is captured in FIG. 9 on the Explainable AI local component which employs type-2 fuzzy logic to generate human understandable models and explanations which can explain the opaque AI model and decision in a given input/output local vicinity; ¶[0008]: In 2010, public concerns about racial and other bias in the use of AI for […] findings of creditworthiness may have led to increased demand for transparent artificial intelligence),
It would have been obvious to one of ordinary skill in the art to combine the electronic systems and techniques for implementing dynamic risk assessment and credit standards generation of NIKANKIN with the system of HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]) in order to allow users to easily understand credit decisions and permit gaining insights about the differences between classes in the context of the lending business (HARGRAS ¶[0252]).
NIKANKIN does not expressly disclose the following limitations, which MERRILL, however, teaches:
one or more categories of reasons for rejection mapped to adverse action notices (MERRILL: ¶[0018]: in the United States, under the Fair Credit Reporting Act 15 U.S.C. § 1681 et seq., when generating a decision to deny a consumer credit application, lenders are required to provide to each consumer the reasons why the credit application was denied; Figure 1B: score passed to decomposition module; ¶[0033]: In some embodiments, the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set. In some embodiments, the score explanation information is used to generate Adverse Action information. In some embodiments, the score explanation information is used to generate an Adverse Action letter in order to allow lenders to comply with 15 U.S.C. § 1681 et seq.; ¶[0137]: In some embodiments model metadata includes a mapping between decompositions and adverse action reason codes. In some embodiments the adverse action mapping is a computable function based on a decomposition.)
It would have been obvious to one of ordinary skill in the art to modify the combination of NIKANKIN, which discloses electronic systems and techniques for implementing dynamic risk assessment and credit standards generation, and HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]), with the system and method of MERRILL, which provides machine learning model explainability information to comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act (MERRILL ¶[0013]), in order to take advantage of more complex models when making lending decisions (MERRILL ¶[0018]) and because machine learning models are often ensembles of heterogeneous sub-models (MERRILL ¶[0019]).
Regarding claim(s) 5, 13, and 19,
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claims 1, 9, and 15, as shown above.
NIKANKIN further discloses:
wherein causing the machine learning model to be configured comprises: updating the objective function based on causing the one or more weights to be varied over time to account for the changes in the one or more business conditions, and one or more of updating or generating the machine learning model based on the updated objective function (NIKANKIN: ¶[0031]: The risk assessment component 202 can dynamically classify, rate, or otherwise determine a risk of lending to the user 104 based on a set of risk assessment criterion. The set of risk assessment criterion can include, but is not limited to, risk of loss, expense of lending, or economic conditions. ¶[0039]: the adjustment component 404 can adjust a set of lending criterion for a classification of potential borrowers based on virtually any of the update data.; ¶[0047]: a risk of lending can be determined for a potential borrower (e.g., user 104) based on a set of risk assessment criterion. As discussed, the set of risk assessment criterion can include economic conditions; ¶[0055]: As an additional example, the set of lending criterion can be adjusted to reduce the difficulty of obtaining a loan (e.g., lower a lending threshold) for a set of potential borrowers previously classified as high risk. It is to be appreciated that such adjustments enable a lender to dynamically adjust lending requirements in a relatively short period of time in response to actual results obtained by the lender).
Regarding claim(s) 6, 14, and 20
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claims 1, 9, and 15, as shown above.
NIKANKIN further discloses:
wherein the one or more variables of the objective function comprises one or more of: a first payment default recovered variable, a return on capital variable, a cost of customer acquisition variable, a cost of maintaining a customer variable, or a customer lifetime value variable (NIKANKIN: ¶[0032]: The standards generation component 204 can dynamically set, determine, or otherwise generate a set of lending criterion (e.g., standards) for respective classifications of potential borrowers based at least in part on a set of predetermined criterion or a set of previous lending outcomes. For example, the predetermined criterion can include a desired return on investment (ROI), and the standards generation component 204 can analyze the set of previous lending outcomes for a classification of potential borrowers, and adjust or generate the set of lending criterion for the classification of potential borrowers to achieve the desired return on investment; ¶[0050]: Where the potential borrower is eligible for one or more loans, at 610, a set of terms are generated for the one or more loans as a function of the classification and a desired return on investment (ROI).)
Regarding claim(s) 7,
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claim 1.
NIKANKIN further discloses:
further comprising determining, based on a computational model for maximizing valuation, one or more business objectives and generating the objective function based on the one or more business objectives (NIKANKIN: ¶[0003]: Typically, credit lending organizations regularly reevaluate and update their credit lending requirements. ¶[0006]: Various embodiments for dynamic risk assessment and credit standards generation are contained herein. ¶[0032]: The standards generation component 204 can dynamically set, determine, or otherwise generate a set of lending criterion (e.g., standards) for respective classifications of potential borrowers based at least in part on a set of predetermined criterion. The predetermined criterion can include a desired return on investment (ROI), and the standards generation component 204 can analyze the set of previous lending outcomes for a classification of potential borrowers, and adjust or generate the set of lending criterion for the classification of potential borrowers to achieve the desired return on investment. The set of lending criterion can include satisfaction of a credit score threshold, an income requirement, a residency requirement, an age requirement, an employment requirement, an education requirement, a banking requirement (e.g., checking account, savings account, etc.), a personal information requirement (e.g., marriage, hobbies, vacation, etc.), or satisfaction of virtually any requirement regarding virtually any information relating to the user 104. ¶[0034]: The terms component 208 can select, generate, or otherwise determine a set of terms for a loan, where the user 104 is eligible for a loan. The terms component 208 can dynamically generate the set of terms. The terms component 208 can include a return on investment (ROI) component 210 that determines a desired ROI for the one or more loans, and the terms component 208 can select or generate the terms for the one or more loans as a function of the desired ROI.
For example, the desired ROI can be higher for a high risk loan than for a low risk loan, and the terms component 208 can generate a set of terms for high risk loans having a higher interest rate than for low risk loans, in order to achieve the desired ROI. ¶[0035]: The risk assessment component 202 can dynamically determine a risk of lending to the user 104 based in part on a set of risk assessment criterion. The risk assessment component 202 can include a forecasting component 302 that can predict, determine, or otherwise forecast a subset of the risk assessment criterion. For example, the forecasting component 302 can include a loss component 304, an expense component 306, and an economic conditions component 308. The loss component 304 can forecast a risk of loss associated with lending to the user. ¶[0055]: The set of lending criterion can be adjusted to reduce the difficulty of obtaining a loan (e.g., lower a lending threshold) for a set of potential borrowers previously classified as high risk, wherein the data regarding previously granted loans indicates that there have been a large quantity of loans that satisfy a desired return on investment. It is to be appreciated that such adjustments enable a lender to dynamically adjust lending requirements in a relatively short period of time in response to actual results obtained by the lender.).
Regarding claim(s) 8,
NIKANKIN, HARGRAS, and MERRILL teach the limitations of claim 1.
NIKANKIN further discloses:
wherein the data associated with the loan application comprises one or more of data provided by an applicant of the loan application or external data determined from one or more sources different than a source of the loan application. (NIKANKIN: ¶[0026]: The system 100 includes a consumer information aggregator component 102 (aggregator component 102), a scoring calculation component 103, and a credit standards component 105. The consumer information aggregator component 102 can obtain, locate, or otherwise acquire data relating to a user 104, and generate a profile of candidate characteristics 106 (profile 106) based at least in part on the data. The aggregator component 102 obtains, acquires, or otherwise receives one or more identifiers associated with the user. For example, the identifiers can include but are not limited to the user's 104 name, date of birth, email address, home address, phone number, and so forth. The aggregator component 102 can acquire data relating to the user 104 by searching a set of data sources 108 using the identifiers, and collecting a set of search results. The data sources 108 can include virtually any open source or publicly available sources of information, including but not limited to websites, search engine results, social networking websites, online resume databases, job boards, government records, online groups, payment processing services, online subscriptions, and so forth. In addition, the data sources 108 can include private databases, such as credit reports, loan applications, and so forth. The aggregator component 102 can connect to the data sources 108 via a communication link 110 (e.g., comm link, network connection, etc.). For example, the aggregator component 102 can obtain a set of data relating to the user 104 by querying one or more internet search engines based on the identifiers.).
Claim 21 is rejected under 35 U.S.C. § 103 as being unpatentable over NIKANKIN (US 20130138554 A1 to Nikankin; Andrey N. et al.) in view of MERRILL (US 20190378210 A1 to Merrill; Douglas C. et al.).
Regarding claim(s) 21,
NIKANKIN discloses:
A method comprising: causing a machine learning model to be configured (NIKANKIN: ¶[0041]: the risk assessment component 202 can also employ intelligent determinations or inferences in connection with determining a risk of lending. […] the approval component 206 can intelligently determine or infer the user's 104 eligibility for one or more loans. […] the foregoing inferences can potentially be based upon, e.g., Bayesian probabilities or confidence measures or based upon machine learning techniques related to historical analysis, feedback, and/or other determinations or inferences.)
based on modifying one or more weights applied to one or more variables of an objective function to account for changes in one or more business conditions (NIKANKIN: ¶[0031]: The risk assessment component 202 can dynamically classify, rate, or otherwise determine a risk of lending to the user 104 based on a set of risk assessment criterion. The set of risk assessment criterion can include, but is not limited to, risk of loss, expense of lending, or economic conditions. For example, if the user 104 has a high risk of loss, or a high cost of lending, the risk assessment component can classify the user 104 as high risk. ¶[0026]: the scoring calculation component 103 can assess, grade, or otherwise weight one or more of the characteristics based on a set of scoring criterion, and determine the credit score as a function of the respective weights of the factors. ¶[0041]: inferences can potentially be based upon machine learning techniques related to historical analysis, feedback, and/or other determinations or inferences. ¶[0035]: the risk assessment component 202 can dynamically determine a risk of lending to the user 104 based in part on a set of risk assessment criterion. The forecasting component 302 can include an economic conditions component 308. The economic conditions component 308 can forecast a set of economic conditions, including, but not limited to, local economic conditions (e.g., for the user 104), or overall economic conditions.
For example, the economic conditions component 308 can forecast a potential risk of job loss for the user 104 (e.g., reduced income, likelihood of default). ¶[0036]: The forecasting component 302 can obtain, acquire, or otherwise receive information or data for use in forecasting the subset of the risk assessment criterion from virtually any source, including, but not limited to, the set of data sources 108 or the set of profiles 106 maintained in the data store 112; the subset of the risk assessment criterion can include a virtually infinite number of criterion, and the forecasting component 302 can include a virtually infinite number of components to facilitate forecasting the subset of the risk assessment criterion.);
determining, based on receiving an indication of a loan application, data associated with the loan application (NIKANKIN: ¶[0026]: The system 100 includes a consumer information aggregator component 102 (aggregator component 102), a scoring calculation component 103, and a credit standards component 105. The consumer information aggregator component 102 can obtain, locate, or otherwise acquire data relating to a user 104, and generate a profile of candidate characteristics 106 (profile 106) based at least in part on the data; ¶[0028]: The scoring calculation component 103 can examine, inspect or otherwise analyze the profile 106, and generate a credit score (e.g., grade, rank, etc.) for the user 104. The scoring calculation component 103 analyzes the profile 106 to determine whether various loan eligibility determination factors (factors) are included in the profile 106. For example, the factors can include, but are not limited to, the user's 104 employment, education, demographics, hobbies, residency, internet usage, and so forth. The credit score can be generated as a function of the factors included in the profile 106. For example, the scoring calculation component 103 can assess, grade, or otherwise weight one or more of the characteristics based on a set of scoring criterion, and determine the credit score as a function of the respective weights of the factors.);
determining, based on providing the data associated with processing the loan application to the machine learning model, a loan decision associated with the loan application (NIKANKIN: ¶[0002]: when a consumer needs to obtain a quick credit decision; ¶[0007]: In another non-limiting embodiment, an exemplary method is provided that includes determining a risk of lending to a user, and classifying the user as a function of the risk, generating a set of lending criteria for the determined classification, and determining an eligibility of the user for one or more loans based on a comparison of a profile associated with the user and the set of lending criteria.);
NIKANKIN does not expressly disclose the following limitations, which MERRILL, however, teaches:
determining, based on analysis of output of the machine learning model associated with the loan decision, a reason for the loan decision; and sending an indication of the loan decision and the reason for the loan decision (MERRILL: ¶[0030]: In some embodiments, the model evaluation and explanation system (e.g., 120 of FIGS. 1A and 1B) uses a non-differentiable model decomposition module (e.g., 121) to decompose scores generated by a model by computing at least one SHAP (SHapley Additive exPlanation) value; ¶[0032]: In some embodiments, model evaluation and explanation system 120 uses score decompositions to determine important features of a model (or ensemble) that impact scores generated by the model (or ensemble); ¶[0033]: the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set. In some embodiments, the score explanation information is used to generate Adverse Action information. In some embodiments, the score explanation information is used to generate an Adverse Action letter in order to allow lenders to comply with 15 U.S.C. § 1681 et seq.; ¶[0018]: in the United States, under the Fair Credit Reporting Act 15 U.S.C. § 1681 et seq., when generating a decision to deny a consumer credit application, lenders are required to provide to each consumer the reasons why the credit application was denied; figure 1B: score passed to decomposition module).
It would have been obvious to one of ordinary skill in the art to modify NIKANKIN, which discloses electronic systems and techniques for implementing dynamic risk assessment and credit standards generation, with the system and method of MERRILL, which provides machine learning model explainability information to comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act (MERRILL ¶[0013]), in order to take advantage of more complex models when making lending decisions (MERRILL ¶[0018]) and because machine learning models are often ensembles of heterogeneous sub-models (MERRILL ¶[0019]).
Claim 22 is rejected under 35 U.S.C. § 103 as being unpatentable over NIKANKIN (US 20130138554 A1 to Nikankin; Andrey N. et al.) in view of MERRILL (US 20190378210 A1 to Merrill; Douglas C. et al.), and further in view of HARGRAS (US 20220036221 A1 to Hargras; Hani et al.).
Regarding claim(s) 22,
NIKANKIN and MERRILL teach the method of claim 21, as shown above.
NIKANKIN and MERRILL do not expressly disclose the following limitations, which HARGRAS, however, teaches:
wherein determining the reason for the loan decision comprises determining, based on a localized interpretation process being applied to the output associated with the loan decision (HARGRAS: ¶[0017]: In [12] a method is presented to explain a prediction by sampling the input feature space around the instance to be explained. The sampled points are "close" to the original one, in order to capture and maintain local fidelity and meaning. In addition, the contribution of each point is weighted according to some distance metric capturing "how far away" the samples are from the explanation point; ¶[0027]: feeding the input into the Type-2 FLM to provide an explanation of the output from the opaque model from a local point of view; ¶[0236]: The high-level description of the workflow is captured in FIG. 9 on the Explainable AI local component which employs type-2 fuzzy logic to generate human understandable models and explanations which can explain the opaque AI model and decision in a given input/output local vicinity; ¶[0008]: In 2010, public concerns about racial and other bias in the use of AI for […] findings of creditworthiness may have led to increased demand for transparent artificial intelligence).
It would have been obvious to one of ordinary skill in the art to combine the electronic systems and techniques for implementing dynamic risk assessment and credit standards generation of NIKANKIN with the system of HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]), in order to allow users to easily understand credit decisions and permit gaining insights about the differences between classes in the context of the lending business (HARGRAS ¶[0252]).
NIKANKIN does not expressly disclose the following limitations, which MERRILL, however, teaches:
one or more categories of reasons for rejection mapped to action notices (MERRILL: ¶[0018]: in the United States, under the Fair Credit Reporting Act 15 U.S.C. § 1681 et seq., when generating a decision to deny a consumer credit application, lenders are required to provide to each consumer the reasons why the credit application was denied; figure 1B: score passed to decomposition module; ¶[0033]: In some embodiments, the model evaluation system evaluates and explains the model (or ensemble) by generating score explanation information for a specific score generated by the ensemble model for a particular input data set. In some embodiments, the score explanation information is used to generate Adverse Action information. In some embodiments, the score explanation information is used to generate an Adverse Action letter in order to allow lenders to comply with 15 U.S.C. § 1681 et seq.; ¶[0137]: In some embodiments model metadata includes a mapping between decompositions and adverse action reason codes. In some embodiments the adverse action mapping is a computable function based on a decomposition.).
It would have been obvious to one of ordinary skill in the art to modify the combination of NIKANKIN, which discloses electronic systems and techniques for implementing dynamic risk assessment and credit standards generation, and HARGRAS, which discloses applications of the invention to the lending business and findings of creditworthiness (HARGRAS ¶¶[0008] and [0252]), with the system and method of MERRILL, which provides machine learning model explainability information to comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act (MERRILL ¶[0013]), in order to take advantage of more complex models when making lending decisions (MERRILL ¶[0018]) and because machine learning models are often ensembles of heterogeneous sub-models (MERRILL ¶[0019]).
Response to Arguments
Response to arguments: Double Patenting
Applicant's arguments filed October 9, 2025 have been fully considered but they are not persuasive. Applicant argues that a sufficient explanation was not provided. The Examiner respectfully disagrees. The reasons for the double patenting rejection are given above and apply to the current claims.
Response to arguments: 35 U.S.C. § 101
Applicant's arguments filed October 9, 2025 have been fully considered but they are not persuasive. Applicant argues at pages 8-11 that an allowance in the parent application necessitates an allowance in the present application. The Examiner respectfully disagrees, as each application must be examined on its own merits, and the present claims, at best, merely use a computer as a tool to carry out the abstract idea without significantly more, as explained above.
Applicant argues at pages 11-14 that the recent Federal Circuit opinion in Recentive is inapplicable to the present claims. This argument has been considered but is unpersuasive. The claims do not recite an improvement to machine learning and do no more than use a computer as a tool to carry out the abstract idea without significantly more, as explained above.
Applicant's arguments at page 15 have been fully considered but they are not persuasive. Applicant argues that analysis of the output of the machine learning model associated with the loan decision is a significant technical improvement over conventional approaches. The Examiner respectfully disagrees, as this is a mental process, and the present claims, at best, merely use a computer as a tool to carry out the abstract idea without significantly more, as explained above.
Response to arguments: 35 U.S.C. § 103
Applicant's arguments have been fully considered but they are not persuasive. Applicant argues the references do not teach "causing a machine learning model to be configured based on modifying one or more weights applied to one or more variables of an objective function to account for changes in one or more business conditions." NIKANKIN teaches: ¶[0031]: The risk assessment component 202 can dynamically classify, rate, or otherwise determine a risk of lending to the user 104 based on a set of risk assessment criterion. The set of risk assessment criterion can include, but is not limited to, risk of loss, expense of lending, or economic conditions. ¶[0039]: the adjustment component 404 can adjust a set of lending criterion for a classification of potential borrowers based on virtually any of the update data. ¶[0047]: a risk of lending can be determined for a potential borrower (e.g., user 104) based on a set of risk assessment criterion. As discussed, the set of risk assessment criterion can include economic conditions. ¶[0055]: As an additional example, the set of lending criterion can be adjusted to reduce the difficulty of obtaining a loan (e.g., lower a lending threshold) for a set of potential borrowers previously classified as high risk. It is to be appreciated that such adjustments enable a lender to dynamically adjust lending requirements in a relatively short period of time in response to actual results obtained by the lender.
This argument has been considered but is unpersuasive. NIKANKIN teaches these limitations, as shown in the 35 U.S.C. § 103 rejection above.
At page 15, the Applicant refers to new claims 21 and 22 and asserts that they are not obvious in view of the cited references. This argument has been considered but is unpersuasive. New claims 21 and 22 are rejected under 35 U.S.C. § 103 in view of NIKANKIN (US 20130138554 A1 to Nikankin; Andrey N. et al.), MERRILL (US 20190378210 A1 to Merrill; Douglas C. et al.), and HARGRAS (US 20220036221 A1 to Hargras; Hani et al.) for the reasons given above.
At page 18, Applicant argues that the cited combination of references does not teach or suggest the limitations added to claim 1: "wherein the machine learning model comprises an ensemble machine learning model based on a plurality of machine learning models, and wherein causing the machine learning model to be configured comprises causing the plurality of machine learning models to be trained based on the objective function." This argument has been considered but is unpersuasive. Although NIKANKIN does not disclose these limitations, MERRILL teaches them, as shown in the 35 U.S.C. § 103 rejection above.
At pages 19-21, the Applicant refers to amendments made to claims 21 and 22 and asserts that they are not obvious in view of the cited references. This argument has been considered but is moot in view of the new grounds of rejection under 35 U.S.C. § 103 applied to amended claims 21 and 22.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Tulio Ribeiro, M., Singh, S., and Guestrin, C., ""Why Should I Trust You?": Explaining the Predictions of Any Classifier", arXiv e-prints, arXiv:1602.04938, 2016, doi:10.48550/arXiv.1602.04938. (Proposes LIME (Local Interpretable Model-agnostic Explanations), a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.)
Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., and Giannotti, F., "Local Rule-Based Explanations of Black Box Decision Systems", arXiv:1805.10820, 2018, https://arxiv.org/abs/1805.10820. (Discusses techniques for explaining black box machine learning decisions, including loan approvals.)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOLKO HAMERSKI whose telephone number is (571)270-7621. The examiner can normally be reached Monday-Friday 10:00 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BENNETT SIGMOND can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BOLKO HAMERSKI
Examiner
Art Unit 3694
/BOLKO M HAMERSKI/ Examiner, Art Unit 3694
/BENNETT M SIGMOND/ Supervisory Patent Examiner, Art Unit 3694