Prosecution Insights
Last updated: April 19, 2026
Application No. 17/527,134

EXPLAINABLE ARTIFICIAL INTELLIGENCE BASED DECISIONING MANAGEMENT SYSTEM AND METHOD FOR PROCESSING FINANCIAL TRANSACTIONS

Status: Non-Final OA (§103)
Filed: Nov 15, 2021
Examiner: BENOURAIDA, AMINA MORENO
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lithasa Technologies Pvt Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 2 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 18 across all art units (16 currently pending)
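The headline figures above are simple ratios over the examiner's resolved cases; a minimal sketch of the arithmetic (function names are illustrative, and the TC-average value is only implied by the reported -55.0% delta, not stated directly):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved if resolved else 0.0

def vs_tc_avg(examiner_rate: float, tc_avg: float) -> float:
    """Signed difference between the examiner's rate and the Tech Center average."""
    return examiner_rate - tc_avg

# Reported above: 0 granted out of 2 resolved, and -55.0% vs TC avg,
# which implies a Tech Center average of roughly 55%.
rate = allow_rate(0, 2)        # 0.0
delta = vs_tc_avg(rate, 55.0)  # -55.0
```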

Statute-Specific Performance

§101: 28.1% (-11.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases
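Each "vs TC avg" delta above is just the examiner's per-statute allowance rate minus the Tech Center baseline estimate; a quick sketch checking the reported pairs for internal consistency (the baseline is derived from the table, not stated in it):

```python
# (statute, examiner allowance rate %, reported delta vs TC avg %)
rows = [("§101", 28.1, -11.9), ("§103", 51.7, 11.7),
        ("§102", 13.5, -26.5), ("§112", 6.7, -33.3)]

# Implied Tech Center baseline for each statute: rate - delta.
# All four rows work out to the same ~40.0% baseline estimate.
baselines = {stat: round(rate - delta, 1) for stat, rate, delta in rows}
```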

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Priority Acknowledgment is made of applicant's claim for foreign priority based on an application filed in REPUBLIC OF INDIA on 09/07/2021. It is noted, however, that applicant has not filed a certified copy of the IN202121040459 application as required by 37 CFR 1.55. Specification The abstract of the disclosure is objected to because Abstract exceeds 150 words. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b). Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 6 and 13-14, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over James et al. (US20200265512A1) in view of DeOliveira et al. (US20150339769A1), further in view of Contryman et al. (US12056771B1).

Regarding Claim 1 and analogous Claim 13: James teaches: An explainable artificial intelligence based decisioning management system for processing financial transaction comprising: ([0012], A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. One general aspect includes a system including: a loan approval decision module that receives input from a loan applicant and collects external data including credit bureau data, bank transaction data, and social media data. The system also includes a machine learning module having a pre-processing subsystem, an automated feature engineering subsystem and a feature statistical assessment subsystem.
A business objective determination module and an adverse notice notification module is also provided"....[0051], "used to map reasons of rejection into limited categories 403 and finally mapping the categories to adverse action notices (i.e., wherein map the reasons is interpreted as an explanation that led to the decision)") one or more hardware processors; and a memory coupled to the one or more hardware processors, wherein the memory comprises a plurality of modules in the form of programmable instructions executable by the one or more hardware processors, wherein the plurality of modules comprises: ([0012], “A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.”) a request handler module configured for receiving a request for performing a financial transaction from an applicant and from one or more data sources, ([0027], “FIG. 1 illustrates an environment 100 according to an implementation of the disclosure. 
The environment 100 may include a loan approval and processing system 101 [a request handler module] having an underwriting module 103”…“ The loan approval and processing system 101 will also receive input from a customer application [receiving a request] subsystem 157 (i.e., wherein a customer application is interpreted to be a loan application, hence ‘performing a financial transaction (obtaining a loan)’)”…([0028], “The underwriting module 103 automatically decides whether to approve the loan based on information received from customer application 157, credit bureau data sources 133 [data source]”) wherein the request comprises application information of the applicant, financial information, activity information, and sourcing information; ([0004], “Information requested in a typical loan request may include name, address, age, employment, [information of the applicant] (i.e., wherein the information of the applicant includes name, age, address etc.) financial history, credit rating”…([0028], “based on information received from customer application 157, credit bureau data sources [sourcing information] 133, bank transaction data sources [financial information] 135, social media data sources [activity information] (i.e., wherein social media under broadest reasonable interpretation (BRI) is interpreted as activity information (SPEC [0047], i.e., lifestyle, activity etc.) 137 and other data sources”) a financial transaction performer module configured for performing the financial transaction with the applicant in response to the received request based on the generated case assessment report ([0027]-[0028], “The loan approval and processing system 101 will also receive input from a customer application subsystem 157. 
The underwriting module 103 automatically decides whether to approve the loan based on information received from customer application 157 (i.e., wherein received from customer application is interpreted as ‘applicant in response to the received request’), credit bureau data sources 133, bank transaction data sources 135, social media data sources 137 and other data sources. The automatic decision of loan approval is made using machine learning module 105 (i.e., wherein using the machine learning module is interpreted as the generated case assessment report). The underwriting module 103 may select one or more recommended actions based on one or more machine learning results. The underwriting module 103 may select or recommend an action based on a confidence metric associated with the action (i.e., wherein ‘may select or recommend an action’ is interpreted as the transaction is triggered based on the report ‘metric’)”)

James does not explicitly teach: wherein the request comprises application information of the applicant, health information; a data sufficiency validation module configured for performing a data sufficiency check using one or more neural networks on the received request by validating the request with trained neural network models; a decision generator module configured for generating a decision for the received request using a neural network model if the data sufficiency check is successful, wherein the decision comprises at least one of an acceptance decision of the request, customization decision and a rejection decision of the request, wherein the neural network model comprises one or more neural layers comprising neural nodes representing an analysis of the application information of the applicant and wherein each of the neural nodes are assigned a weightage; a neural network explainable module configured for validating the generated decision by reverse calculating, through the one or more neural layers of the neural network model, the importance
weightage distribution across each of the neural nodes towards data features considered in the one or more neural networks; and a case assessment report generation module configured for generating a case assessment report for the generated decision based on the validation, wherein the case assessment report comprises explainable reasons for arriving at the decision by the neural network model, impact of each of the neural nodes on arriving at the decision and similar past transactions; and

DeOliveira teaches: a data sufficiency validation module configured for performing a data sufficiency check using one or more neural networks on the received request by validating the request with trained neural network models; ([0024], “the systems can perform a verification analysis to evaluate the completeness of the loan application data, the accuracy and authenticity of the loan application data (i.e., wherein the verification analysis is interpreted as data sufficiency validation)”…[0086], “In one exemplary embodiment, a confidence score is calculated using multilayer neural network techniques.”…[0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications under the broadest reasonable interpretation (BRI), is interpreted as validating using trained neural network models)”) a decision generator module configured for generating a decision for the received request using a neural network model if the data sufficiency check is successful, ([0066], “The systems and methods further include a verification analysis that helps ensure the reliability, accuracy, and integrity of the information utilized in the compliance and underwriting analysis 204 and the results of the analysis.
The output of the verification analysis can be a confidence score that represents the probability that the loan application will be approved or successfully processed in the next stage of the loan application process (i.e., wherein the verification analysis gives the probability of successfully processed is interpreted as the ‘data sufficiency check is successful’)”) wherein the decision comprises at least one of an acceptance decision of the request, customization decision and a rejection decision of the request, ([0102], “Depending on the results of the verification analysis, the system can: (1) approve the loan application to proceed to the next stage of the evaluation process: (2) reject the loan application; or (3) return the loan application to the application and enrollment stage 201 to gather additional loan application data or seek clarification (i.e., wherein (3) is interpreted as customization decision)”) wherein the neural network model comprises one or more neural layers comprising neural nodes representing an analysis of the application information of the applicant and ([0086]-[0087], “In one exemplary embodiment, a confidence score is calculated using multilayer neural network [one or more neural layers] techniques. Among other advantages, neural network techniques allow providers to account for the nonlinear effects of certain variables on the calculation of a confidence score. For instance, it might be the case that a low risk loan has very little effect on the confidence score calculation (e.g., increases the score five percent), but a high risk loan has a much more significant impact (e.g., lowering the score by fifty percent) (i.e., wherein the high risk loan is interpreted as the analysis of the application of the applicant). An exemplary neural network according to one embodiment of the invention is illustrated in FIG. 11. Generally, a multilayer neural network utilizes an input layer, an output layer, and one or more intermediate layers. 
The layers are made up of nodes [neural nodes] called neurons connected by synapses”) wherein each of the neural nodes are assigned a weightage; ([0087], “The layers are made up of nodes called neurons connected by synapses. The nodes are implemented by activation functions that act on weighted inputs provided by the synapses. The neurons sum the weighted synapses inputs and pass the summed total through the activation function.”) a neural network explainable module configured for validating the generated decision by reverse calculating, through the one or more neural layers of the neural network model, the importance weightage distribution across each of the neural nodes towards data features considered in the one or more neural networks; and ([0094], “Neural networks can be trained utilizing test sets of inputs and corresponding outputs. In one embodiment, the inputs are run through the neural network, and the synapses weights are calculated that minimize the sum squared error of the difference between the test outputs and the calculated outputs. Specifically, after the test inputs are propagated forward through the neural network, the system computes the net input and output of each node in the intermediate and output layers and back propagates the error through the network. To illustrate a back propagation process, [reverse calculating] the output layer error can be calculated as: Err_O = s_O(1 − s_O)(T − s_O). The variable T represents the true output from the test data set. The output error is propagated backwards by first calculating the error for a neuron node j in the intermediate layer: Err_j = s_j(1 − s_j) Σ w_i,j * Err_O, where w_i,j is the weight assigned to the synapses between nodes i and j.
Lastly, the synapses weights can be updated using a fixed learning rate, L”…the importance weightage distribution across each of the neural nodes towards data features considered in the one or more neural networks; and [0100], “If, for example, a loan application is returned for additional information, it is assumed that the confidence score should have been approximately fifty percent. The loan application data and other associated data then serves as a test input data set where the test output is fifty percent. This data is used to train the system and adjust the weights of the synapses. Feedback can also be generated by identifying the top two factors [data features] that affected a decision on a loan application (i.e., wherein the features can be explained or validating how the decision was generated) The weights for these factors can then be adjusted in the neural network to account for the importance of the factors in a loan application decision [importance weightage distribution across each of the neural nodes towards data features]”) a case assessment report generation module configured for generating a case assessment report for the generated decision based on the validation, ([0101], “The exemplary verification analysis shown in FIGS. 12A-B includes evaluations concerning the authenticity, completeness, and accuracy of the loan application data, as well as a risk analysis and loan profitability assessment. Although the risk analysis is shown in FIGS. 12A-B as being conducted after a decision is made on the loan application, it should be recognized that a risk analysis can be performed throughout the loan application evaluation process (i.e., wherein the ‘evaluations’ is interpreted as the case assessment report that is based on the validated decision)”) wherein the case assessment report comprises explainable reasons for arriving at the decision by the neural network model, impact of each of the neural nodes on arriving at the decision and similar past transactions; and ([0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein historical decisions is interpreted as similar past transactions) The decision data analysis provides feedback to the system in the form of a loan application decision [explainable reasons for arriving at the decision]”…[0100], “If, for example, a loan application is returned for additional information, it is assumed that the confidence score should have been approximately fifty percent. The loan application data and other associated data then serves as a test input data set where the test output is fifty percent. This data is used to train the system and adjust the weights of the synapses. Feedback can also be generated by identifying the top two factors that affected a decision on a loan application. The weights for these factors can then be adjusted in the neural network to account for the importance of the factors in a loan application decision [neural network model, impact of each of the neural nodes on arriving at the decision]”)

DeOliveira and James are both related to the same field of endeavor (i.e., automating loan/underwriting process).
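The back-propagation passage quoted from DeOliveira ([0094]) above can be turned into a runnable sketch. This is only an illustration of the two error equations and the fixed-learning-rate update, not code from either patent; the exact update form is not spelled out in the quoted text, so the standard delta rule is assumed here:

```python
def output_error(s_o: float, t: float) -> float:
    # Err_O = s_O(1 - s_O)(T - s_O): error at an output node with
    # activation s_O against the true output T from the test data set.
    return s_o * (1 - s_o) * (t - s_o)

def hidden_error(s_j: float, downstream: list) -> float:
    # Err_j = s_j(1 - s_j) * sum(w_ij * Err_i): error back-propagated to an
    # intermediate node j over its outgoing (weight, downstream-error) pairs.
    return s_j * (1 - s_j) * sum(w * err for w, err in downstream)

def updated_weight(w: float, lr: float, err: float, s: float) -> float:
    # Fixed-learning-rate update w <- w + L * Err * s (delta rule; assumption).
    return w + lr * err * s

err_o = output_error(0.8, 1.0)             # 0.8 * 0.2 * 0.2, i.e. 0.032
err_j = hidden_error(0.5, [(0.4, err_o)])  # 0.25 * (0.4 * 0.032)
w_new = updated_weight(0.4, 0.1, err_o, 0.5)
```

The "reverse calculating" step the examiner maps to this passage is exactly the `hidden_error` propagation: each node's share of the output error is recovered by walking the weights backwards through the layers.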
In view of the teachings of DeOliveira it would have been obvious for a person of ordinary skill in the art to apply the teachings of DeOliveira to James before the effective filing date of the claimed invention in order to improve the efficiency of explaining neural network decision making in a loan/underwriting process (DeOliveira, [0002], “Financial service providers must be able to quickly and efficiently process large volumes of loan applications while complying with strict origination guidelines that are designed to meet regulatory requirements and minimize the risk to the financial service provider. Traditional methods of compiling relevant information and evaluating loan applications are labor-intensive and time-consuming. Additionally, traditional methods for compiling relevant information and evaluating a loan application often include a subjective component, and the process might not be standardized across an organization. Different loan officers often interpret origination guidelines differently, or the origination guidelines might vary depending on the amount and type of loan the borrower is seeking, among other factors. 
It would, therefore, be advantageous to provide an efficient, reliable mechanism for compiling and evaluating information relevant to making a lending decision and for presenting this information in a manner that facilitates evaluation of the loan application.”)

Contryman teaches: wherein the request comprises application information of the applicant, health information (Col 12, lines 61-65, “In embodiments, feature importance is a measure of how important a key, value pair (e.g., a feature) was in making the ultimate decision on the form (e.g., an applicant's weight is more valuable in making a health insurance decision than an applicant's age) (i.e., wherein weight and age is interpreted as health information for the applicant which is used to make a health insurance decision)”)

Contryman and James are both related to the same field of endeavor (i.e., automating loan/underwriting process). In view of the teachings of Contryman, it would have been obvious for a person of ordinary skill in the art to apply health information from the applicant to the decision making process from the teachings of Contryman to James before the effective filing date of the claimed invention in order to improve the efficiency of explaining neural network decision making in a loan/underwriting process (Contryman, Col 1, lines 25-43, “Customer information is collected by organizations on application forms, such as paper forms, interactive forms rendered within an application (e.g., on the customer's mobile device or mobile device of an agent of an organization), an editable form displayed using a web page (e.g., on a computer system of the customer), etc.
The customer information, as discussed above, includes a set of data points relevant to the service the customer is applying for, and which enables an organizational representative (e.g., an underwriter), to decide whether to accept or reject the customer based on the information provided in the application form.”)

Regarding Claim 2 and analogous Claim 14: James, as modified by DeOliveira and Contryman, teaches the system of claim 1. James further teaches: wherein in receiving the request for performing the financial transaction from the applicant and from the one or more data sources ([0027]-[0028], “The loan approval and processing system 101 will also receive input from a customer application subsystem 157. The underwriting module 103 automatically decides whether to approve the loan based on information received from customer application 157 (i.e., wherein received from customer application is interpreted as ‘applicant in response to the received request’), credit bureau data sources 133, bank transaction data sources 135, social media data sources 137 and other data sources [from the one or more data sources] (i.e., wherein performing the financial transaction (i.e., wherein the request is a financial transaction (i.e., obtaining a loan) includes information from data sources (i.e., social media, bank transactions)). The automatic decision of loan approval is made using machine learning module 105. The underwriting module 103 may select one or more recommended actions based on one or more machine learning results. The underwriting module 103 may select or recommend an action based on a confidence metric associated with the action (i.e., wherein ‘may select or recommend an action’ is interpreted as the transaction is triggered based on the report ‘metric’)”) the request handler module is configured for: ([0027], “FIG. 1 illustrates an environment 100 according to an implementation of the disclosure.
The environment 100 may include a loan approval and processing system 101 [a request handler module] having an underwriting module 103”)

James, as modified by Contryman, does not explicitly teach: prompting one or more questions relating to the application information of the applicant; and receiving additional application information of the applicant as a response to the one or more questions.

DeOliveira further teaches: prompting one or more questions relating to the application information of the applicant; and receiving additional application information of the applicant as a response to the one or more questions. ([0103], “If the system determines that additional data is needed to evaluate the loan application, the communication module 206 transmits a communication to the customer seeking clarification or additional documents and information [prompting one or more questions relating to the application information] (i.e., wherein ‘transmits a communication’ is interpreted as prompting to the applicant related to the application, ‘asking questions about the application’). The customer then has the opportunity to return to the website or graphical user interface used in the application and enrollment process 201 to provide supplemental data [receiving additional application information]. By way of example, if the system determines that an expired policy is listed on a proof of insurance document submitted by a customer, the customer can be asked by email to submit an updated proof of insurance document. As another example, the provider's underwriting guidelines may require two years of continuous employment with the same employer but an analysis of employment history reveals that the customer frequently changes jobs after less than two years.
In this case, the system may send an email asking the customer to submit a written explanation regarding the employment history, like the customer changed jobs to advance within the same line of work, or the customer was employed under a contract of limited duration (i.e., wherein based on the information from applicant on the application, if needed, questions are sent to applicant for a response for additional information)”)

The motivation for claims 2 and 14 is the same motivation as claim 1.

Regarding Claim 3 and analogous Claim 16: James, as modified by DeOliveira and Contryman, teaches the system of claim 1. James, as modified by Contryman, does not explicitly teach: wherein in generating the decision for the received request using the neural network model if the data sufficiency check is successful, the decision generator module is configured for: generating one or more data features from the application information of the applicant; applying the generated one or more features onto a trained neural network model, wherein the trained neural network model comprises the one or more neural layers comprising the neural nodes representing the analysis of the application information of the applicant and wherein each of the neural nodes are assigned the weightage on the basis of training on past transactions; determining whether the output of the trained neural network model meets acceptance criteria prestored in the database; and generating the decision for the received request based on the output of the trained neural network model and based on the determination.
DeOliveira further teaches: wherein in generating the decision for the received request using the neural network model if the data sufficiency check is successful, the decision generator module is configured for: ([0066], “The systems and methods further include a verification analysis that helps ensure the reliability, accuracy, and integrity of the information utilized in the compliance and underwriting analysis 204 and the results of the analysis. The output of the verification analysis can be a confidence score that represents the probability that the loan application will be approved or successfully processed in the next stage of the loan application process (i.e., wherein the verification analysis gives the probability of successfully processed is interpreted as the ‘data sufficiency check is successful’)”) generating one or more data features from the application information of the applicant; ([0088], “When implementing neural networks, it can be useful to normalize the input data. Because neural networks work internally with numeric data, binary data (e.g., the results of a OFAC screening) and categorical data (e.g., loan type) can be encoded in numeric form (i.e., wherein taking the application information (i.e., loan type) and converting to a numeric value is interpreted as generating data feature)”) applying the generated one or more features onto a trained neural network model, ([0088], “When implementing neural networks, it can be useful to normalize the input data. 
Because neural networks work internally with numeric data, binary data (e.g., the results of a OFAC screening) and categorical data (e.g., loan type) can be encoded in numeric form (i.e., wherein taking the application information (i.e., loan type) and converting to a numeric value is interpreted as generating data feature)”…[0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications under the broadest reasonable interpretation (BRI), is interpreted as using a trained neural network model)”) wherein the trained neural network model comprises the one or more neural layers comprising the neural nodes representing the analysis of the application information of the applicant and ([0086]-[0087], “In one exemplary embodiment, a confidence score is calculated using multilayer neural network [one or more neural layers] techniques. Among other advantages, neural network techniques allow providers to account for the nonlinear effects of certain variables on the calculation of a confidence score. For instance, it might be the case that a low risk loan has very little effect on the confidence score calculation (e.g., increases the score five percent), but a high risk loan has a much more significant impact (e.g., lowering the score by fifty percent) (i.e., wherein the high risk loan is interpreted as the analysis of the application of the applicant). An exemplary neural network according to one embodiment of the invention is illustrated in FIG. 11. Generally, a multilayer neural network utilizes an input layer, an output layer, and one or more intermediate layers. 
The layers are made up of nodes [neural nodes] called neurons connected by synapses”) wherein each of the neural nodes are assigned the weightage on the basis of training on past transactions; ([0087], “The layers are made up of nodes called neurons connected by synapses. The nodes are implemented by activation functions that act on weighted inputs provided by the synapses. The neurons sum the weighted synapses inputs and pass the summed total through the activation function.”…[0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications (i.e., past transactions))”) determining whether the output of the trained neural network model meets acceptance criteria prestored in the database; and ([0102], “Depending on the results of the verification analysis, the system can: (1) approve the loan application to proceed to the next stage of the evaluation process: (2) reject the loan application; or (3) return the loan application to the application and enrollment stage 201 to gather additional loan application data or seek clarification. The system can be configured so that certain actions are taken when the confidence score reaches predetermined thresholds set by the provider. In the embodiment shown in FIGS. 12A-B, the application proceeds to the next stage when the confidence score reaches one-hundred percent. If the confidence score is less than fifty percent, the loan application is rejected. When the loan application is rejected, the loan application data is saved to the provider's core database [database] 214 (i.e., wherein neural network model meets acceptable levels)”…[0105], “the system reruns part or all of the underwriting analysis 204 and recalculates the confidence score. 
This verification analysis feedback loop continues until the confidence score reaches one-hundred percent or another predetermined threshold deemed acceptable to the financial service provider. When the confidence score reaches an acceptable threshold, the process continues to the next stage of the loan application lifecycle”….[0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications (i.e., past transactions) hence, the loan application data is saved to the provider’s core database and under the broadest reasonable interpretation (BRI) is then used as a criteria/threshold to meet eligibility for new applications)”) generating the decision for the received request based on the output of the trained neural network model and based on the determination ([0102], “Depending on the results of the verification analysis, the system can: (1) approve the loan application to proceed to the next stage of the evaluation process: (2) reject the loan application; or (3) return the loan application to the application and enrollment stage 201 to gather additional loan application data or seek clarification. The system can be configured so that certain actions are taken when the confidence score reaches predetermined thresholds set by the provider. In the embodiment shown in FIGS. 12A-B, the application proceeds to the next stage when the confidence score reaches one-hundred percent. If the confidence score is less than fifty percent, the loan application is rejected (i.e., wherein the decision is based on the confidence score reaching a predetermined threshold (i.e., approve, reject, or more info needed))”)

The motivation for claims 3 and 16 is the same motivation as claim 1.

Regarding Claim 6: James, as modified by DeOliveira and Contryman, teaches the system of claim 1.
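The confidence-score thresholds DeOliveira's [0102] describes (proceed at one hundred percent, reject below fifty, otherwise return for more information) reduce to a small decision function; a sketch, with the threshold values taken from the quoted FIGS. 12A-B embodiment and the function name purely illustrative:

```python
def loan_decision(confidence: float) -> str:
    """Map a verification-analysis confidence score (0-100) to the three
    outcomes described in DeOliveira's FIGS. 12A-B embodiment."""
    if confidence >= 100.0:
        return "approve"           # proceed to the next evaluation stage
    if confidence < 50.0:
        return "reject"            # application data saved to the core database
    return "return_for_more_info"  # back to the application/enrollment stage
```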
DeOliveira further teaches: wherein in generating the case assessment report for the generated decision based on the validation, the case assessment report generation module is configured for: ([0101], “The exemplary verification analysis shown in FIGS. 12A-B includes evaluations concerning the authenticity, completeness, and accuracy of the loan application data, as well as a risk analysis and loan profitability assessment. Although the risk analysis is shown in FIGS. 12A-B as being conducted after a decision is made on the loan application, it should be recognized that a risk analysis can be performed throughout the loan application evaluation process (i.e., wherein the ‘evaluations’ are interpreted as the case assessment report that is based on the validated decision)”) retrieving the similar past transactions from training datasets using the importance weightage; and ([0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications (i.e., past transactions))”…“The weights for these factors can then be adjusted in the neural network to account for the importance of the factors in a loan application decision [importance weightage]”) generating the case assessment report for the generated decision comprising the retrieved similar past transactions ([0100], “In one aspect of the present invention, the neural networks can be refined or trained using historical decision data about prior loan applications processed by the provider (i.e., wherein using historical decision data about prior applications (i.e., past transactions))”…[0106], “This information is used to perform certain underwriting and compliance analysis 204 techniques, including an IDV, PEP screening.
OFAC screening, address verification, a historical account data analysis (i.e., wherein a historical data analysis is interpreted as a report for past transactions)”) The motivation for claim 6 is the same as for claim 1. Regarding Claim 10 and analogous Claim 20: James, as modified by DeOliveira and Contryman, teaches the system of claim 1. Contryman further teaches: wherein the financial transaction comprises loan, policy issuance, benefit qualification, insurance claim (Col 1, paragraph 1, “Organizations, such as medical organizations, insurance organizations, financial institutions, and other organizations provide services to customers, such as insurance, loans, and other services. Prior to providing a customer with an insurance policy, funded loan, or other service, a customer will typically apply for the service by completing an application form containing relevant customer information that the organization designates before deciding whether to approve or reject the customer (i.e., wherein such organizations provide the transaction to the customer; under the broadest reasonable interpretation (BRI) the transactions include loan, policy issuance, benefit qualification, and insurance claim)”) The motivation for claims 10 and 20 is the same as for claim 1. Claim(s) 4-5, 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over James et al., in view of DeOliveira et al., and Contryman et al., further in view of Yu et al., Non-Patent Literature (“NISP: Pruning Networks using Neuron Importance Score Propagation”). Regarding Claim 4 and analogous Claim 17: James, as modified by DeOliveira and Contryman, teaches the system of claim 1. James further teaches: wherein the neural node importance weightage indicates impact of each of the neural nodes on arriving at the decision. ([0051], “As shown in FIG.
4, the output of the ensemble machine learning model 309 is used to develop a localized linear explanation of the model behavior 401 which is then used to map reasons of rejection into limited categories 403 and finally mapping the categories to adverse action notices. The concept of “Localized Linearity” is used to help map the decision from the underwriting model to adverse action notices (i.e., wherein localized linearity under the broadest reasonable interpretation handles importance of features, hence ‘importance weightage’). The underwriting models are non-linear, but the further one zooms in on them, the more the decision point around a particular customer is assumed to be approximated in a linear fashion. The concept of “Localized Linearity” is applied for all the customers that pass through the underwriting model and need to be mapped to adverse action notices. “Localized Linearity” helps understand the factors that played a role in the decision from the underwriting model; based on those factors, the leads are bucketed into groups of adverse action notices that are then sent out.”) correlating the aggregated importance weightage with the request indicating usage of the application information of the applicant, financial information, health information, activity information, and sourcing information on arriving at the final decision. ([0051], “As shown in FIG. 4, the output of the ensemble machine learning model 309 is used to develop a localized linear explanation of the model behavior 401 which is then used to map reasons of rejection into limited categories 403 and finally mapping the categories to adverse action notices. The concept of “Localized Linearity” is used to help map the decision from the underwriting model to adverse action notices (i.e., wherein localized linearity under the broadest reasonable interpretation handles importance of features, hence ‘importance weightage’).
The underwriting models are non-linear, but the further one zooms in on them, the more the decision point around a particular customer is assumed to be approximated in a linear fashion. The concept of “Localized Linearity” is applied for all the customers that pass through the underwriting model and need to be mapped to adverse action notices. “Localized Linearity” helps understand the factors that played a role in the decision from the underwriting model; based on those factors, the leads are bucketed into groups of adverse action notices that are then sent out.”…[0004], “Information requested in a typical loan request may include name, address, age, employment, [information of the applicant] (i.e., wherein the information of the applicant includes name, age, address etc.) financial history, credit rating”…([0028], “based on information received from customer application 157, credit bureau data sources [sourcing information] 133, bank transaction data sources [financial information] 135, social media data sources [activity information] (i.e., wherein social media under broadest reasonable interpretation (BRI) is interpreted as activity information (SPEC [0047], i.e., lifestyle, activity etc.) 137 and other data sources”)) DeOliveira further teaches: wherein in validating the generated decision by reverse calculating, through the one or more neural layers of the neural network model, the importance weightage distribution across each of the neural nodes, the neural network explainable module is configured for: ([0094], “Neural networks can be trained utilizing test sets of inputs and corresponding outputs. In one embodiment, the inputs are run through the neural network, and the synapses weights are calculated that minimize the sum squared error of the difference between the test outputs and the calculated outputs.
Specifically, after the test inputs are propagated forward through the neural network, the system computes the net input and output of each node in the intermediate and output layers and back propagates the error through the network. To illustrate a back propagation process, [reverse calculating] the output layer error can be calculated as: Err_0 = s_0(1 − s_0)(T − s_0). The variable T represents the true output from the test data set. The output error is propagated backwards by first calculating the error for a neuron node j in the intermediate layer: Err_j = s_j(1 − s_j) Σ w_i,j * Err_0, where w_i,j is the weight assigned to the synapses between nodes i and j. Lastly, the synapses weights can be updated using a fixed learning rate, L”… [0100], “If, for example, a loan application is returned for additional information, it is assumed that the confidence score should have been approximately fifty percent. The loan application data and other associated data then serves as a test input data set where the test output is fifty percent. This data is used to train the system and adjust the weights of the synapses. Feedback can also be generated by identifying the top two factors that affected a decision on a loan application (i.e., wherein how a decision can be explained or validated). The weights for these factors can then be adjusted in the neural network to account for the importance of the factors in a loan application decision [importance weightage distribution across each of the neural nodes]”) assigning an overall score to the generated decision; ([0060]-[0062], “Each response is assigned a numeric score reflecting the risk posed by that factor”…“The responses to each inquiry can be scored the same (e.g., a “1” or a “5”), or the responses can be scored differently to reflect different weights assigned to each factor in determining customer risk.
The scores for each response are summed to yield an overall score, and the customer is classified as a low, medium, or high risk based on whether the overall score falls within certain numeric ranges (i.e., wherein being classified as low, medium, or high risk is interpreted as assigning an overall score to generate a decision). If the business customer falls within the medium or high risk category, the provider can further investigate the customer. The provider can contact the customer in person or by phone to evaluate circumstances such as whether: the individual who initiated the account opening is available; the business answers telephone calls in a professional manner; the business is appropriately staffed; the nature of the business matches information provided in connection with the loan application; or any other relevant factor. Once again, the responses are assigned a numeric score reflecting the risk posed by that factor, and the scores are summed to yield an overall score that gives further insight as to the customer's risk level. A capacity analysis evaluates a customer's ability to make payments on a loan by examining the customer's employment, income, current debts, and assets.”) assigning importance weightage to each of the neural node within the neural network model by propagating in a backward direction starting from final neural layer to first neural layer of the neural network model; ([0094]-[0096], “Neural networks can be trained utilizing test sets of inputs and corresponding outputs. In one embodiment, the inputs are run through the neural network, and the synapses weights are calculated that minimize the sum squared error of the difference between the test outputs and the calculated outputs.
Specifically, after the test inputs are propagated forward through the neural network, the system computes the net input and output of each node in the intermediate and output layers and back propagates the error through the network [propagating in a backward direction]. To illustrate a back propagation process, the output layer error can be calculated as: Err_0 = s_0(1 − s_0)(T − s_0). The variable T represents the true output from the test data set. The output error is propagated backwards by first calculating the error for a neuron node j in the intermediate layer: Err_j = s_j(1 − s_j) Σ w_i,j * Err_0, where w_i,j is the weight assigned to the synapses between nodes i and j [importance weightage]. Lastly, the synapses weights can be updated using a fixed learning rate, L.”) James, as modified by DeOliveira and Contryman, does not explicitly teach: wherein the importance weightage are proportionately distributed among one or more child nodes of the neural node and internal biases; determining whether the assignment of the importance weightage is completed to all of the neural nodes within the neural network model; determining a neural node importance weightage for each of the assigned importance weightage of the neural node, Yu teaches: wherein the importance weightage are proportionately distributed among one or more child nodes of the neural node and internal biases; (Page 9195, Col 1, “We define the importance of neurons in early layers based on a unified goal: minimizing the reconstruction errors of the responses produced in the FRL.
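The back-propagation error equations quoted from DeOliveira ¶[0094] can be sketched numerically as follows. This is a minimal sketch assuming a single output node, so the sum over w_i,j reduces to one term; all function and variable names are illustrative, not from the reference:

```python
# Sketch of the back-propagation error equations quoted from DeOliveira [0094]:
#   Err_0 = s_0(1 - s_0)(T - s_0)               output-layer error
#   Err_j = s_j(1 - s_j) * sum(w_i,j * Err_0)   intermediate-layer error
# A single output node is assumed, so the sum reduces to a single term.

def output_error(s0: float, T: float) -> float:
    """Error at the output node: sigmoid derivative times residual (T - s0)."""
    return s0 * (1 - s0) * (T - s0)

def hidden_error(sj: float, w_j: float, err0: float) -> float:
    """Error back-propagated to intermediate node j through synapse weight w_j."""
    return sj * (1 - sj) * w_j * err0

def update_weight(w: float, err: float, s_in: float, L: float = 0.1) -> float:
    """Update a synapse weight using a fixed learning rate L."""
    return w + L * err * s_in
```

For example, with s_0 = 0.8 and true output T = 1.0, the output error is 0.8 × 0.2 × 0.2 = 0.032, which is then attenuated by each intermediate node's sigmoid derivative and synapse weight on the backward pass.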
We first measure the importance of responses in the FRL by treating them as features and applying some feature ranking techniques (e.g., [31]), then we propagate the importance of neurons backwards from the FRL to earlier layers”…“the weighted ℓ1 distance [importance weightage] (proportional to the importance scores) [proportionately distributed among one or more child nodes of the neural node] between the original final response”…Section 3.2.1, “Thus, we define a network with depth n as a function F^(n) = f^(n) ◦ f^(n−1) ◦ ··· ◦ f^(1). The l-th layer f^(l) is represented using the following general form: f^(l)(x) = σ^(l)(w^(l) x + b^(l)), (1) where σ^(l) is an activation function and w^(l), b^(l) are weight and bias [internal biases], and f^(n) represents the “final response layer” (i.e., wherein the ‘bias’ is interpreted as the internal bias)”) determining whether the assignment of the importance weightage is completed to all of the neural nodes within the neural network model; (Page 9195, Col 1, “We obtain a closed-form solution to a relaxed version of this objective to infer the importance score [importance weightage] of every neuron in the network. Based on this solution, we derive the Neuron Importance Score Propagation (NISP) algorithm, which computes all importance scores recursively, using only one feature ranking of the final response layer and one backward pass through the network, as illustrated in Fig.
1 (i.e., wherein the importance weightage is completed for all nodes in the model)”) determining a neural node importance weightage for each of the assigned importance weightage of the neural node, (Page 9195, Col 1, “We obtain a closed-form solution to a relaxed version of this objective to infer the importance score [importance weightage] of every neuron in the network (i.e., wherein each node is given the importance weightage)”) A person of ordinary skill in the art would reasonably find the teachings of Yu to be helpful in solving the problem of neuron importance scores in the neural network model of James. In view of the teachings of Yu, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Yu to James before the effective filing date of the claimed invention, combining them in order to improve the efficiency of explaining neural network decision making in a loan/underwriting process (Yu, Abstract, “Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network.”) Regarding Claim 5 and analogous Claim 18: James, as modified by DeOliveira and Contryman, teaches the system of claim 1. DeOliveira further teaches: wherein in generating the case assessment report for the generated decision based on the validation, the case assessment report generation module is configured for: ([0101], “The exemplary verification analysis shown in FIGS. 12A-B includes evaluations concerning the authenticity, completeness, and accuracy of the loan application data, as well as a risk analysis and loan profitability assessment. Although the risk analysis is shown in FIGS.
12A-B as being conducted after a decision is made on the loan application, it should be recognized that a risk analysis can be performed throughout the loan application evaluation process (i.e., wherein the ‘evaluations’ are interpreted as the case assessment report that is based on the validated decision)”) mapping one or more data features associated with the application information of the applicant with corresponding set of explainable reasons for arriving at a decision pre-stored in a database; ([0049], “OFAC and PEP screening checks customer information against public or private databases [pre-stored in database] of individuals known to present an increased risk to the provider or who are precluded by law from engaging in certain financial transactions. In the case of OFAC screening, the customer information is compared against a specially designated national list (“SDN List”) maintained by the U.S. OFAC of groups and individuals who are deemed to present a threat to national security and foreign or economic policy, such as terrorists, money launderers, organized crime affiliates, and narcotics traffickers [co

Prosecution Timeline

Nov 15, 2021
Application Filed
Oct 31, 2025
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
