Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,103

MULTI-STAGE MACHINE-LEARNING TECHNIQUES FOR RISK ASSESSMENT

Non-Final Office Action: §101, §103, §112

Filed: Apr 24, 2023
Examiner: MAHARAJ, DEVIKA S
Art Unit: 2123
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Equifax Inc.
OA Round: 1 (Non-Final)

Grant Probability: 55% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 5y 0m
Grant Probability with Interview: 63%

Examiner Intelligence

Career Allow Rate: 55% (43 granted / 78 resolved; at TC average)
Interview Lift: +7.7% for resolved cases with an interview (moderate)
Avg Prosecution (typical timeline): 5y 0m
Currently Pending: 28 applications
Career Total: 106 applications across all art units
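The headline figures above are internally consistent. As a quick sanity check (a sketch, assuming the interview lift is additive in percentage points over the career allow rate), the displayed percentages can be reproduced from the raw counts:

```python
# Career allowance rate from the examiner's resolved cases (43 of 78).
granted, resolved = 43, 78
career_rate = granted / resolved               # ~0.551, displayed as 55%

# Assumption: the +7.7% interview lift is additive, in percentage points.
interview_lift = 0.077
with_interview = career_rate + interview_lift  # ~0.628, displayed as 63%

print(round(career_rate * 100), round(with_interview * 100))  # 55 63
```

Note that 43/78 is 55.1%, so the rounded "+8% lift" and the precise "+7.7%" figure describe the same gap.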

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 78 resolved cases.
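The per-statute deltas imply a common baseline; a minimal check (a sketch, assuming each delta is a simple difference from a single Tech Center average) recovers it:

```python
# (rate %, delta vs TC average %), per statute, as shown above
stats = {
    "101": (27.4, -12.6),
    "103": (42.8, 2.8),
    "102": (10.1, -29.9),
    "112": (16.6, -23.4),
}

# Implied TC baseline per statute is rate - delta; all four agree on 40.0%.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

The agreement suggests every statute is compared against one ~40% Tech Center estimate rather than statute-specific averages.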

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

1. This communication is in response to Application No. 18/306,103, filed on April 24, 2023, in which Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 5-6, 12-13, and 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

5. The term "favorable" in Claims 5, 12, and 18 is a relative term which renders the claim indefinite. The term "favorable" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Examiner notes that it is not clear, nor defined by the claim, what comprises a "favorable action," and there is no requisite degree or threshold as to what a "favorable action" may comprise; thus, the limitation "[...] the second set of explanatory data indicating whether a favorable action is recommended [...]" is indefinite.

6. The term "highest impact" in Claims 6, 13, and 19 is a relative term which renders the claim indefinite. The term "highest impact" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Examiner notes that it is not clear, nor defined by the claim, what comprises a predictor variable having a "highest impact," and there is no requisite degree or threshold as to what such an "impact" may comprise. While the respective independent claims recite "[...] the explanatory data indicating an effect that a predictor variable has on the first risk indicator," and the "first risk indicator" is compared to a "threshold value," this still does not define what comprises an "impact" or how such an "impact" may be evaluated to identify a predictor variable having the "highest impact" on the first risk indicator (e.g., is the predictor variable negatively/adversely impacting the first risk indicator, is it positively impacting the first risk indicator, is its impact directly correlated to the comparison with the threshold, etc.). Thus, the limitation "[...] wherein the explanatory data is generated for a subset of the predictor variables that have the highest impact on the first risk indicator" is indefinite.

Claim Rejections - 35 USC § 101

7. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

8. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: Claim 1 is a method claim; therefore, Claims 1-7 are directed to a statutory category (a process).
2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. determining, responsive to the risk assessment query, a first risk indicator for the target entity by applying a first risk assessment model to predictor variables associated with the target entity (mental process – other than reciting “by applying a first risk assessment model”, determining a first risk indicator may be performed manually by a user observing/analyzing a set of predictor variables associated with the target entity (such as variables indicating demographic characteristics, variables indicative of prior actions or transactions, variables indicative of one or more behavior traits of an entity, etc. as supported by Applicant’s specification Par. [0030]) and accordingly using judgement/evaluation to determine a “risk indicator” (e.g., credit score per Applicant’s specification Par. 
[0029]) based on the analysis and consideration of said predictor variables) responsive to determining that the first risk indicator indicates a risk higher than a threshold value […] (mental process – determining that the first risk indicator indicates a risk higher than a threshold value may be performed manually by a user observing/analyzing the first risk indicator and threshold value and accordingly using judgement/evaluation to determine whether the value of the first risk indicator is higher than/greater than the threshold value) generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator (mathematical process – generating explanatory data for the predictor values, where the explanatory data indicates an effect that a predictor variable has on a first risk indicator may be performed by mathematical process, utilizing a mathematical function/algorithm such as SHAP (Shapley Additive exPlanations) which is a commonly used mathematical process for providing explanations of a machine learning model) determining a second risk indicator for the target entity by applying a second risk assessment model to the predictor variables associated with the target entity (mental process – other than reciting “by applying a second risk assessment model”, determining a second risk indicator may be performed manually by a user observing/analyzing a set of predictor variables associated with the target entity (such as variables indicating demographic characteristics, variables indicative of prior actions or transactions, variables indicative of one or more behavior traits of an entity, etc. as supported by Applicant’s specification Par. [0030]) and accordingly using judgement/evaluation to determine a “risk indicator” (e.g., credit score per Applicant’s specification Par. 
[0029]) based on the analysis and consideration of said predictor variables) 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: a method that includes one or more processing devices […] (recited at a high-level of generality (i.e., as generic one or more processing devices) such that it amounts to no more than mere instructions to apply the exception using generic computer components) receiving, from a remote computing device, a risk assessment query for a target entity (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g)) […] by applying a first risk assessment model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying a machine learning model to previously determined data without significantly more) […] by applying a second risk assessment model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying a machine learning model to previously determined data without significantly more) transmitting, to the remote computing device, a response message including the first risk indicator, the explanatory data, and the second risk indicator, for use in controlling access to one or more interactive computing environments by the target entity (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g)) 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: a method that includes one or more processing devices […] (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) receiving, from a remote computing device, a risk assessment query for a target entity (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer) […] by applying a first risk assessment model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying a machine learning model to previously determined data without significantly more. This cannot provide an inventive concept) […] by applying a second risk assessment model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying a machine learning model to previously determined data without significantly more. 
This cannot provide an inventive concept); transmitting, to the remote computing device, a response message including the first risk indicator, the explanatory data, and the second risk indicator, for use in controlling access to one or more interactive computing environments by the target entity (MPEP 2106.05(d)(II) indicates that merely "receiving or transmitting data over a network" is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here; a conclusion that the claimed limitation is well-understood, routine, conventional activity is therefore supported under Berkheimer).

For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent Claims 2-7; the additional limitations of the dependent claims are addressed below.

Regarding Claim 2:
Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 2 depends.
Step 2A Prong 2 & Step 2B: wherein the first risk assessment model comprises an explainable risk assessment model and the second risk assessment model comprises a second-stage risk assessment model that is generated without an explainability constraint (Field of Use – limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate the exception into a practical application; specifying these model types does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 3:
Step 2A Prong 1: See the rejection of Claim 2 above, on which Claim 3 depends.
Step 2A Prong 2 & Step 2B: wherein the first risk assessment model comprises a logistic regression model, a linear regression model, monotonic decision trees, or a monotonic neural network (Field of Use – specifying these model types does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application, as discussed above in the rejection of Claim 1.

Regarding Claim 4:
Step 2A Prong 1: See the rejection of Claim 2 above, on which Claim 4 depends.
Step 2A Prong 2 & Step 2B: wherein the second-stage risk assessment model comprises a deep neural network, a convolutional neural network, a recurrent neural network, or a recursive neural network (Field of Use – specifying these model types does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application, as discussed above in the rejection of Claim 1.

Regarding Claim 5:
Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 5 depends; generating a second set of explanatory data based on the second risk indicator, the second set of explanatory data indicating whether a favorable action is recommended for the target entity (mathematical process – this generation may be performed by a mathematical process, utilizing a mathematical function/algorithm such as SHAP (SHapley Additive exPlanations), a commonly used mathematical process for providing explanations of a machine learning model).
Step 2A Prong 2 & Step 2B: [...] including the second set of explanatory data in the response message (mere instructions to apply the exception using generic computer components cannot provide an inventive concept). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application, as discussed above in the rejection of Claim 1.

Regarding Claim 6:
Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 6 depends.
Step 2A Prong 2 & Step 2B: wherein the explanatory data is generated for a subset of the predictor variables that have the highest impact on the first risk indicator (Field of Use – specifying that the explanatory data is generated for a subset of predictor variables that have the highest impact (see the 35 U.S.C. 112(b) rejection above) on the first risk indicator does not integrate the exception into a practical application nor amount to significantly more – see MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application, as discussed above in the rejection of Claim 1.

Regarding Claim 7:
Step 2A Prong 1: See the rejection of Claim 1 above, on which Claim 7 depends; wherein the operations further comprise grouping the predictor variables into a plurality of groups [...] (mental process – grouping predictor variables may be performed manually by a user observing/analyzing the predictor variables and using judgment/evaluation to group them based on their features/characteristics); [...] wherein generating the explanatory data for the predictor variables comprises generating a same reason code for each group of the plurality of groups (mental process – generating a same reason code for each group may be performed manually by a user observing/analyzing the plurality of groups and using judgment/evaluation to generate a same reason code for each group).
Step 2A Prong 2 & Step 2B: Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Independent Claim 8 recites substantially the same limitations as Claim 1, in the form of a system including generic computer components. The claim is likewise directed to performing mental/mathematical processes without significantly more and is rejected under the same rationale. For the reasons above, Claim 8 is rejected as being directed to an abstract idea without significantly more.
This rejection applies equally to dependent Claims 9-14; the additional limitations of the dependent claims are addressed below.

Claims 9-14 recite substantially the same limitations as Claims 2-7, respectively, in the form of a system including generic computer components. Each is likewise directed to performing mental/mathematical processes without significantly more and is rejected under the same rationale as the corresponding method claim.

Independent Claim 15 recites substantially the same limitations as Claim 1, in the form of a non-transitory computer-readable storage medium including generic computer components, and is likewise directed to performing mental/mathematical processes without significantly more. For the reasons above, Claim 15 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent Claims 16-20; the additional limitations of the dependent claims are addressed below.

Claims 16 and 18-20 recite substantially the same limitations as Claims 2 and 5-7, respectively, and Claim 17 recites substantially the same limitations as Claims 3 and 4, each in the form of a non-transitory computer-readable storage medium including generic computer components. Each is likewise directed to performing mental/mathematical processes without significantly more and is rejected under the same rationale.
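Before turning to §103, it may help to see the claim-1 flow as the §101 analysis characterizes it: an explainable first-stage model, a threshold check that triggers per-variable explanatory data, and an unconstrained second-stage model over the same predictors. The sketch below is purely illustrative and is not the applicant's implementation; the linear first-stage model, the feature names, and the contribution formula (exact Shapley attribution only for a linear model) are all assumptions.

```python
# Hypothetical two-stage risk assessment mirroring the claim structure:
# a simple, explainable first-stage model; if the first risk indicator
# exceeds a threshold, per-variable explanatory data is generated; an
# unconstrained second-stage model then scores the same predictors.

def first_stage(predictors, weights, bias=0.0):
    """Explainable first-stage model: a plain linear score."""
    return bias + sum(weights[k] * v for k, v in predictors.items())

def explanatory_data(predictors, weights, baseline):
    """Effect of each predictor on the first risk indicator.

    For a linear model, weight * (value - baseline) is that variable's
    exact Shapley contribution, the role the Office Action assigns to
    SHAP-style attribution.
    """
    return {k: weights[k] * (v - baseline[k]) for k, v in predictors.items()}

def second_stage(predictors):
    """Stand-in for a second-stage model built without an
    explainability constraint (e.g. a deep neural network)."""
    return min(1.0, 0.1 + 0.002 * predictors["inquiries"] ** 2)

predictors = {"inquiries": 9, "utilization": 0.8}   # hypothetical entity data
weights = {"inquiries": 0.05, "utilization": 0.4}   # hypothetical coefficients
baseline = {"inquiries": 2, "utilization": 0.3}     # hypothetical baseline

risk_1 = first_stage(predictors, weights)           # ~0.77
response = {"first_risk_indicator": risk_1}
if risk_1 > 0.5:                                    # threshold check
    response["explanatory_data"] = explanatory_data(predictors, weights, baseline)
response["second_risk_indicator"] = second_stage(predictors)
```

Claim 7's grouping of predictor variables under a shared reason code would amount to mapping the keys of `explanatory_data` into groups and emitting one code per group.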
Claim Rejections - 35 USC § 103

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sardari et al. (hereinafter Sardari) (US PG-PUB 20230316280), in view of Kamkar et al. (hereinafter Kamkar) (US PG-PUB 20220164877).

Regarding Claim 1, Sardari teaches a method that includes one or more processing devices performing operations (Sardari, Par. [0089], "The processes described herein are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations."; therefore, methods/processes including one or more processing devices to perform operations are disclosed) comprising: receiving, from a remote computing device, a risk assessment query for a target entity (Sardari, Par.
[0067], “For example, the payment service computing platform 110 may query one or more external services using the user data 124 received from the electronic device 104 of the user 102, such as the user-provided name, mailing address, phone number and/or email address, and may receive, from the external service(s), additional user data about the user 102 that can be analyzed to generate one or more external signals 210 that are used to determine the risk metric associated with the user 102.”, therefore, a risk assessment query for a target entity (user) is received from a remote computing device (user device) – this is similarly supported by Figure 2 and Figure 5 label 502, which also states that user data is received from an electronic device, in order to determine a risk metric associated with the user based on the user data); determining, responsive to the risk assessment query, a first risk indicator for the target entity by applying a first risk assessment model to predictor variables associated with the target entity (Sardari, Par. [0092], “In some examples, the payment service computing platform 110 may determine the risk metric at block 504. At 506, in some examples, a trained machine learning model(s) 122 is used to determine the risk metric. For example, the risk metric may be determined based on analyzing the user data 124 using a trained machine learning model(s) 122. The trained machine learning model(s) 122 used at block 506 may have been trained based on previously collected user data, such as user data associated with newly created user accounts, is described in more detail elsewhere herein (e.g., FIG. 10 ).”, thus, a first risk indicator/risk metric is determined, responsive to the risk assessment query (See Figure 5 labels 502-506, which shows how the risk metric is only determined after the query is received). 
Further, the risk indicator/metric is determined by applying a first risk assessment model (trained machine learning model) to predictor variables associated with the target entity (user data which may comprise an Internet Protocol/IP address, a geolocation, a payment card number, a bank account number, a personal name of the user, and/or contacts of the user, as supported by Sardari Par. [0051] and Applicant’s specification Par. [0030] similarly supports this interpretation)); responsive to determining that the first risk indicator indicates a risk higher than a threshold value (Sardari, Par. [0094], “At 512, a determination may be made as to whether the risk metric determined at block 504 is a high risk metric (or a low risk metric, as the case may be), such as by determining whether the risk metric satisfies a threshold. Satisfying a threshold may include meeting or exceeding the threshold, or strictly exceeding the threshold.”, therefore, the first risk indicator/metric may be compared to a threshold value, to determine whether the risk indicator/metric indicates a high risk. Further, as shown by Figure 5, after making this determination (label 512), the subsequent operations may be performed (labels 514A, 514B, 516, 518, 522, 524)), generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator (Sardari broadly discloses utilizing predictive analytic techniques to determine relationships between explanatory variables and predicted variables in Par. [0129], however this concept is not further expanded upon in Sardari. 
Therefore, see the introduction of the Kamkar reference below for explicit disclosure of generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator); and

determining a second risk indicator for the target entity by applying a second risk assessment model to the predictor variables associated with the target entity (Sardari, Par. [0092], “In some examples, the payment service computing platform 110 may determine the risk metric at block 504. At 506, in some examples, a trained machine learning model(s) 122 is used to determine the risk metric. For example, the risk metric may be determined based on analyzing the user data 124 using a trained machine learning model(s) 122. The trained machine learning model(s) 122 used at block 506 may have been trained based on previously collected user data, such as user data associated with newly created user accounts, is described in more detail elsewhere herein (e.g., FIG. 10).” & Par. [0057], “The trained machine learning model(s) 122 used by one or more of the risk component 126, the incentive component 128, and/or the ranking component 130 may represent a single model or an ensemble of base-level machine learning models, and may be implemented as any type of machine learning model.”, thus, a second risk indicator/risk metric is determined by applying a second risk assessment model (trained machine learning model) to predictor variables associated with the target entity (user data, which may comprise an Internet Protocol/IP address, a geolocation, a payment card number, a bank account number, a personal name of the user, and/or contacts of the user, as supported by Sardari Par. [0051]; Applicant’s specification Par. [0030] similarly supports this interpretation). Examiner also notes that Sardari Par. [0057] and Par. [0093] explicitly mention that the determination of a risk indicator/metric may involve one or more machine learning models applied to the same predictor values associated with the target entity (user data) – hence, Sardari teaches generating a second risk indicator/metric by applying a second risk assessment model); and

transmitting, to the remote computing device, a response message including the first risk indicator, the explanatory data, and the second risk indicator, for use in controlling access to one or more interactive computing environments by the target entity (Sardari, Par. [0096], “At 516, a user interface is caused to be displayed via the payment application 106 executing on the electronic device 104 associated with the user 102, the user interface presenting an interactive element(s) for receiving the incentive in exchange for the user 102 referring at least one other user to a payment service 108. […] Examples of the user interface that can be displayed at block 516 include the user interfaces 118, 400, and 402, as described above with reference to FIGS. 1, 3A, 3B, and 4A.”, thus, a response message may be transmitted to the remote computing device to display the risk indicators and explanatory data for use in controlling access to one or more interactive computing environments (payment application) by the target entity (user). For example, see Sardari Figure 3B, which displays a risk rating (label 302, based on the first/second risk metrics) and explanatory data (label 304, which may be used to explain the risk rating and how it may be improved), and further an incentive (label 120(n)) which may be utilized to control access/invite other users to the one or more interactive computing environments/payment application).

While Sardari Par. [0129] broadly states “The predictive analytic techniques may be utilized to determine associations and/or relationships between explanatory variables and predicted variables from past occurrences and utilizing these variables to predict the unknown outcome.”, Sardari does not explicitly detail generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator. However, Kamkar explicitly discloses generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator (Kamkar, Par. [0109], “In variants, generating evaluation information (S240) includes generating explanation information by performing a credit assignment process that assigns an importance to the data variables of inputs used the tree ensemble to generate a score or result, and using the explanation information to generate an evaluation result. The data variables of inputs used by the tree ensemble may include various predictors, including: numeric variables, binary variables, categorical variables, ratios, rates, values, times, amounts, quantities, matrices, scores, or outputs of other models.”, therefore, explanatory data/explanation information is generated for the data/predictor variables, where the explanatory data indicates an effect/importance that a predictor variable has on a risk metric (see Kamkar Par. [0002] & Par. [0114], which indicate how the system may be applied to the field of credit risk modeling, & Kamkar Figure 2, which exemplifies this process)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of claim 1, as disclosed by Sardari, to include generating explanatory data for the predictor variables, the explanatory data indicating an effect that a predictor variable has on the first risk indicator, as disclosed by Kamkar. One of ordinary skill in the art would have been motivated to make this modification to enable the generation of explanatory data, which may be utilized to better understand the importance of different predictor variables and their corresponding impact on the model output, including model fairness with respect to sensitive attributes (Kamkar, Par. [0023], “Model explanation information can optionally be generated for the trained tree-based predictive model. A fairness penalty parameter can be configured to control the degree of importance that the fairness objective carries during the model training process. This permits the fairness-enabled tree-based boosting module to generate a range of models, each resulting from a different fairness penalty value. One or more of the trained tree-based predictive models can be analyzed to project business outcomes (including measures of profitability and fairness with respect to a sensitive attribute) and selected for use in production. In embodiments, the system generates compliance documentation reflecting the search process and properties of the selected model.”).

Regarding Claim 2, Sardari in view of Kamkar teaches the method of claim 1, wherein the first risk assessment model comprises an explainable risk assessment model and the second risk assessment model comprises a second-stage risk assessment model that is generated without an explainability constraint (Sardari, Par. [0129], “Information from stored and/or accessible data may be extracted from one or more databases, such as the datastore(s) 116, and may be utilized to predict trends and behavior patterns. The predictive analytic techniques may be utilized to determine associations and/or relationships between explanatory variables and predicted variables from past occurrences and utilizing these variables to predict the unknown outcome. The predictive analytic techniques may include defining the outcome and data sets used to predict the outcome.”, thus, the first risk assessment model may comprise an explainable risk assessment model, including explanatory variables. However, as mentioned by Sardari Par. [0129], the application of these predictive analytic techniques is optional – therefore, the second-stage risk assessment model may not necessarily include explainability constraints).

Regarding Claim 3, Sardari in view of Kamkar teaches the method of claim 2, wherein the first risk assessment model comprises a logistic regression model, a linear regression model, monotonic decision trees, or a monotonic neural network (Sardari, Par. [0057], “The trained machine learning model(s) 122 used by one or more of the risk component 126, the incentive component 128, and/or the ranking component 130 may represent a single model or an ensemble of base-level machine learning models, and may be implemented as any type of machine learning model. For example, suitable machine learning models 122 for use by the techniques and systems described herein include, without limitation, neural networks (e.g., deep neural networks (DNNs), recurrent neural networks (RNNs), etc.), tree-based models (e.g., eXtreme Gradient Boosting (XGBoost) models), support vector machines (SVMs), kernel methods, random forests, splines (e.g., multivariate adaptive regression splines), hidden Markov model (HMMs), Kalman filters (or enhanced Kalman filters), Bayesian networks (or Bayesian belief networks), multilayer perceptrons (MLPs), expectation maximization, genetic algorithms, linear regression algorithms, nonlinear regression algorithms, logistic regression-based classification models, or an ensemble thereof.”, thus, the first risk assessment model (of an ensemble of trained machine learning models) may comprise a linear regression model or logistic regression model).

Regarding Claim 4, Sardari in view of Kamkar teaches the method of claim 2, wherein the second-stage risk assessment model comprises a deep neural network, a convolutional neural network, a recurrent neural network, or a recursive neural network (Sardari, Par. [0057], “The trained machine learning model(s) 122 used by one or more of the risk component 126, the incentive component 128, and/or the ranking component 130 may represent a single model or an ensemble of base-level machine learning models, and may be implemented as any type of machine learning model. For example, suitable machine learning models 122 for use by the techniques and systems described herein include, without limitation, neural networks (e.g., deep neural networks (DNNs), recurrent neural networks (RNNs), etc.), tree-based models (e.g., eXtreme Gradient Boosting (XGBoost) models), support vector machines (SVMs), kernel methods, random forests, splines (e.g., multivariate adaptive regression splines), hidden Markov model (HMMs), Kalman filters (or enhanced Kalman filters), Bayesian networks (or Bayesian belief networks), multilayer perceptrons (MLPs), expectation maximization, genetic algorithms, linear regression algorithms, nonlinear regression algorithms, logistic regression-based classification models, or an ensemble thereof.”, thus, the second-stage risk assessment model (of an ensemble of trained machine learning models) may comprise a deep neural network or a recurrent neural network).

Regarding Claim 5, Sardari in view of Kamkar teaches the method of claim 1, wherein the operations further comprise: generating a second set of explanatory data based on the second risk indicator (Kamkar, Par. [0109], “In variants, generating evaluation information (S240) includes generating explanation information by performing a credit assignment process that assigns an importance to the data variables of inputs used the tree ensemble to generate a score or result, and using the explanation information to generate an evaluation result. The data variables of inputs used by the tree ensemble may include various predictors, including: numeric variables, binary variables, categorical variables, ratios, rates, values, times, amounts, quantities, matrices, scores, or outputs of other models.”, therefore, explanatory data/explanation information is generated for the data/predictor variables, where the explanatory data indicates an effect/importance that a predictor variable has on a risk metric (see Kamkar Par. [0002] & Par. [0114], which indicate how the system may be applied to the field of credit risk modeling, & Kamkar Figure 2, which exemplifies this process)), the second set of explanatory data indicating whether a favorable action is recommended for the target entity (Kamkar, Par. [0050], “In variations, the less discriminatory alternative model provides model explanations and adverse action reason codes to the decisioning system along with each score. If the loan is denied based on the less discriminatory alternative model score, the decisioning system causes an adverse action letter to be generated, printed, and sent to the person applying for a loan. In variations, the adverse action letter contains natural language statements of specific reason based on explanations of the less discriminatory alternative model's score.”, thus, the set of explanatory data may indicate whether a favorable or adverse action (adverse action letter) is recommended for the target entity. Examiner notes that the term “favorable action” is indefinite and rejected under 35 U.S.C. 112(b) above – therefore, Examiner interprets the term “favorable action” to be analogous to the action of not issuing an “adverse action letter” with respect to the Kamkar reference, as an “adverse action letter” being issued would be unfavorable (loan denial) and not issuing this “adverse action letter” would be favorable (loan granted)); and including the second set of explanatory data in the response message (Kamkar, Par. [0120-0121], “In some variations, the selection report includes a histogram of adverse action reason codes or model explanations for each alternative model. In some variations, the selection report includes partial dependence plots, ICE plots, and other charts showing the influence of each model input variable over a range of values, with respect to each model and disaggregated by protected attributes. In some variations, a user interface (e.g., 115) includes the selection report”, therefore, the second set of explanatory data (included in the selection report) may be included in the response message (user interface)). The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.

Regarding Claim 6, Sardari in view of Kamkar teaches the method of claim 1, wherein the explanatory data is generated for a subset of the predictor variables that have the highest impact on the first risk indicator (Kamkar, Par. [0104], “In some variants, at least one component of the system 100 generates a fairness metric for the tree ensemble by generating feature importance information for one or more features used by the tree ensemble. In some implementations, the modeling system 100 identifies feature importance values for each sensitive attribute identified at S210, and compares the feature importance of each sensitive attribute to a respective fairness criteria.”, therefore, explanatory data may be generated for only a subset of predictor variables that have the highest impact on risk (sensitive attributes identified at step 210 in Figure 2). Examiner notes that the term “highest impact” is indefinite and rejected under 35 U.S.C. 112(b) above – therefore, Examiner interprets the term “highest impact” to be analogous to the sensitive attributes identified by Kamkar, as these sensitive attributes are unique to each user and, therefore, may hold more importance (highest impact) to predictions related to that particular user). The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.

Regarding Claim 7, Sardari in view of Kamkar teaches the method of claim 1, wherein the operations further comprise grouping the predictor variables into a plurality of groups (Kamkar, Par. [0098], “When generating model alternatives that serve members (observations) with multiple sensitive attributes (e.g., African American, Hispanic female, elderly groups), some or all protected classes can be aggregated into a single protected group and alternatives can be searched by adjusting a single fairness penalty parameter.”, thus, the predictor variables (including sensitive attributes) may be grouped into a plurality of groups), wherein generating the explanatory data for the predictor variables comprises generating a same reason code for each group of the plurality of groups (Kamkar, Par. [0120], “In some variations, the selection report includes a histogram of adverse action reason codes or model explanations for each alternative model. In some variations, the selection report includes partial dependence plots, ICE plots, and other charts showing the influence of each model input variable over a range of values, with respect to each model and disaggregated by protected attributes.”, therefore, the explanatory data for the predictor variables may comprise generating a same reason code (adverse action reason code) for each aggregated group of sensitive/protected attributes). The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
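As an editorial aside (not part of the Office Action text): the grouping-and-shared-reason-code arrangement recited in Claim 7, combined with the "highest impact" ranking at issue in Claim 6, can be sketched roughly as below. All variable names, group labels, and reason codes are hypothetical illustrations, not drawn from Sardari, Kamkar, or the application itself:

```python
# Hypothetical mapping of predictor variables to reason-code groups:
# variables in the same group share a single reason code (cf. Claim 7).
GROUPS = {
    "utilization": "R01",    # credit-usage group
    "delinquencies": "R02",  # payment-history group
    "inquiries": "R02",      # payment-history group (same reason code)
}

def reason_codes(contributions, top_n=2):
    """Aggregate per-variable contributions by group, then return the
    reason codes of the groups with the largest aggregate impact on the
    risk indicator (cf. the "highest impact" subset of Claim 6)."""
    totals = {}
    for name, value in contributions.items():
        code = GROUPS[name]
        totals[code] = totals.get(code, 0.0) + value
    ranked = sorted(totals, key=lambda c: abs(totals[c]), reverse=True)
    return ranked[:top_n]

# Hypothetical per-variable contributions to the risk score.
codes = reason_codes({"utilization": 0.96,
                      "delinquencies": 1.8,
                      "inquiries": 0.3})
# codes → ["R02", "R01"]  (R02 aggregates 1.8 + 0.3 = 2.1 > 0.96)
```

Under this reading, "generating a same reason code for each group" is simply a many-to-one lookup, and the ranking step makes the otherwise relative term "highest impact" concrete by sorting on absolute aggregate contribution.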
Regarding Claim 8, Sardari in view of Kamkar teaches a system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to perform operations (Sardari, Claim 6, “A system comprising: one or more processors; and computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: […]”, therefore, a system comprising one or more processors and a memory device storing instructions executable by the one or more processors is disclosed) comprising: […] The rest of the claim language in Claim 8 recites substantially the same limitations as Claim 1, in the form of a system; therefore, it is rejected under the same rationale. The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.

Claims 9-14 recite substantially the same limitations as Claims 2-7, respectively, in the form of a system; therefore, they are rejected under the same rationale.

Regarding Claim 15, Sardari in view of Kamkar teaches a non-transitory computer-readable storage medium having program code that is executable by a processor to cause a computing device to perform operations (Sardari, Par. [0259], “Depending on the configuration of the server(s) 1604, the computer-readable media 1630 can be a type of computer-readable storage media and/or can be a tangible non-transitory media to the extent that when mentioned, non-transitory computer-readable media exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.”, thus, a non-transitory computer-readable medium (also pictured by label 1630 in Figure 16) comprising program code to be executed by a processor (label 1628 in Figure 16) is disclosed), the operations comprising: […] The rest of the claim language in Claim 15 recites substantially the same limitations as Claim 1, in the form of a non-transitory computer-readable storage medium; therefore, it is rejected under the same rationale. The reasons of obviousness have been noted in the rejection of Claim 1 above and are applicable herein.

Claims 16, 18, 19, and 20 recite substantially the same limitations as Claims 2, 5, 6, and 7, respectively, and Claim 17 recites substantially the same limitations as Claims 3 and 4, all in the form of a non-transitory computer-readable storage medium; therefore, they are rejected under the same rationale.

Conclusion

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj, whose telephone number is (571) 272-0829. The examiner can normally be reached Monday through Thursday, 8:30 am to 5:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DEVIKA S MAHARAJ/
Examiner, Art Unit 2123
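Stepping outside the Office Action itself: the Claim 1 pipeline that the rejection maps onto Sardari and Kamkar (an explainable first-stage risk model that also yields per-variable explanatory data, an unconstrained second-stage model over the same predictor variables, and a response message carrying both indicators plus the explanatory data) can be sketched as follows. Every name, coefficient, and the stand-in second-stage scorer here are hypothetical, not taken from either reference:

```python
import math

def first_stage(x, coef, intercept):
    """Explainable first stage (logistic regression): returns the first
    risk indicator plus explanatory data, i.e. each predictor variable's
    additive contribution to the log-odds (its effect on the indicator)."""
    contributions = {name: coef[name] * x[name] for name in coef}
    log_odds = intercept + sum(contributions.values())
    return 1.0 / (1.0 + math.exp(-log_odds)), contributions

def second_stage(x):
    """Stand-in for a second-stage model trained without an
    explainability constraint (e.g., a DNN); here just a fixed squash
    of the same predictor variables."""
    return 1.0 / (1.0 + math.exp(-sum(0.1 * v for v in x.values())))

# Predictor variables associated with the target entity (hypothetical).
x = {"utilization": 0.8, "delinquencies": 2.0, "tenure_years": -0.5}
coef = {"utilization": 1.2, "delinquencies": 0.9, "tenure_years": 0.4}

risk1, explanatory = first_stage(x, coef, intercept=-1.0)
risk2 = second_stage(x)

# Response message carrying both risk indicators and the explanatory
# data, for use by the remote computing device in access decisions.
response = {"first_risk_indicator": risk1,
            "second_risk_indicator": risk2,
            "explanatory_data": explanatory}
```

The design point the claims turn on is that the two stages consume the same predictor variables but only the first is constrained to be decomposable, which is why the examiner leans on Kamkar's credit-assignment disclosure to supply the per-variable effect data the claims recite.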

Prosecution Timeline

Apr 24, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §103, §112
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585948
NEURAL PROCESSING DEVICE AND METHOD FOR PRUNING THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579426
Training a Neural Network having Sparsely-Activated Sub-Networks using Regularization
2y 5m to grant Granted Mar 17, 2026
Patent 12572795
ANSWER SPAN CORRECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12561577
AUTOMATIC FILTER SELECTION IN DECISION TREE FOR MACHINE LEARNING CORE
2y 5m to grant Granted Feb 24, 2026
Patent 12554969
METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF WHITE MATTER HYPERINTENSITIES IN MAGNETIC RESONANCE BRAIN IMAGES
2y 5m to grant Granted Feb 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
55%
Grant Probability
63%
With Interview (+7.7%)
5y 0m
Median Time to Grant
Low
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
