Prosecution Insights
Last updated: April 19, 2026
Application No. 18/790,920

DETERMINING BIAS-CAUSING FEATURES OF A MACHINE LEARNING MODEL

Non-Final OA: §101, §103
Filed: Jul 31, 2024
Examiner: TRAN, AMY NMN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Snowflake Inc.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 5y 2m
Grant Probability with Interview: 84%

Examiner Intelligence

Grants only 36% of cases.
Career Allow Rate: 36% (10 granted / 28 resolved; -19.3% vs TC avg)
Interview Lift: +47.9% across resolved cases with interview
Avg Prosecution: 5y 2m; 24 applications currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Tech Center averages are estimates; based on career data from 28 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/23/2025, 08/20/2025, and 12/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 04/23/2025 has been entered.

Response to Amendment

The amendments filed 04/23/2025 have been entered. The status of the claims is as follows: claims 1-2, 4-6, 8-9, 11-12, 14-17, and 19-20 are pending in the application; claims 3, 7, 10, 13, and 18 are cancelled; claims 1, 11, and 16 are amended.

Response to Arguments

In reference to the Rejection of Claims under 35 U.S.C. 101:

Argument for Step 2A, Prong 1: Claims Do Not Recite an Abstract Idea. Applicant asserts on Remarks page 2 that the claims at issue do not involve abstract ideas but instead present a specific technological solution for identifying and mitigating bias in machine learning models. Examiner appreciates Applicant's detailed explanation of the invention and its advantages, but respectfully notes that while the claims aim to present a solution for mitigating bias in machine learning models, the steps outlined are fundamentally abstract ideas and represent insignificant extra-solution activities rather than concrete technological advancements. Determining performance, determining bias, identifying features, pinpointing features that cause bias, and calculating one or more statistical metrics are examples of "concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" (see MPEP 2106.04(a)), or processes that can be performed in the human mind with pen and paper, and the Examiner notes that an improvement cannot come from the abstract idea itself. The analysis for Step 2A, Prong Two involves evaluating whether the remaining additional elements recited in the claims integrate the mental processes into a practical application. Those additional elements comprise retraining the model, utilizing the retrained model to make an inference, and displaying information about the bias, the features, and one or more statistical metrics. Although these limitations are not directed to abstract ideas, they are recited in a generic way that does not contribute significantly to solving the stated problem. Examiner notes that the retraining step and the step of utilizing the retrained model to make an inference amount to mere instructions to implement the abstract idea on a computer, as each of these limitations is recited at a high level of generality, and therefore the retrained machine learning model amounts to the use of a programmed generic computer. Displaying information about the bias is insignificant extra-solution activity (mere data outputting, see MPEP 2106.05(g)(3)). The claims do not recite any specific details about or improvements to the machine learning techniques, and thus they are not sufficient to recite improvements to the functioning of a computer or technology as described in MPEP 2106.05(a). Applicant's arguments filed 04/23/2025 have been fully considered but they are not persuasive.

Argument for Step 2A, Prong 2: Alleged Abstract Idea Is Integrated into a Practical Application. Applicant asserts on Remarks page 3 that even if the claims are considered to recite an abstract idea, they are integrated into a practical application. The practical application is the improvement of machine learning models by identifying and mitigating bias, which is a significant technical problem in the field of artificial intelligence. Applicant emphasizes that the claims provide a concrete solution to this problem by adjusting the features causing bias and retraining the models, thereby improving the accuracy and fairness of the model's predictions. Applicant further asserts that the user interface displaying information about bias and adjustments integrates the abstract idea into a practical application by providing users with actionable insights to address bias in machine learning models. Applicant concludes that this practical application is a specific improvement to the functioning of machine learning models and is not merely a generic computer implementation of an abstract idea. Examiner appreciates Applicant's detailed explanation of the invention and its advantages, but respectfully disagrees that the abstract idea is integrated into a practical application. The analysis of Step 2A, Prong Two involves evaluating whether the remaining additional elements recited in the claims integrate the abstract idea into a practical application. Those additional elements comprise retraining the model, utilizing the retrained model to make an inference, and displaying information about the bias, the features, and one or more statistical metrics. Although these limitations are not directed to abstract ideas, they are recited in a generic way that does not contribute significantly to solving the stated problem. Examiner notes that the retraining step and the step of utilizing the retrained model to make an inference amount to mere instructions to implement the abstract idea on a computer, as each of these limitations is recited at a high level of generality, and therefore the retrained machine learning model amounts to the use of a programmed generic computer. Displaying information about the bias is insignificant extra-solution activity (mere data outputting, see MPEP 2106.05(g)(3)). The claims do not recite any specific details about or improvements to the machine learning techniques, and thus they are not sufficient to recite improvements to the functioning of a computer or technology as described in MPEP 2106.05(a). Applicant's arguments filed 04/23/2025 have been fully considered but they are not persuasive.

Argument for Step 2B: Claims Recite Additional Elements that Amount to Significantly More than the Judicial Exception. Applicant asserts on Remarks pages 3-4 that the claims recite additional elements that amount to significantly more than the judicial exception and that these elements are not well-understood, routine, or conventional activities in the field of machine learning. Examiner respectfully disagrees and notes that, as for displaying the information about bias using a user interface, the claims do not recite any novel or unconventional technology to display this data, and thus this is well-understood, routine, and conventional activity, particularly "receiving or transmitting data over a network" (see MPEP 2106.05(d)(II)(i)). Furthermore, as discussed above, the remaining additional elements regarding retraining the machine learning model and utilizing the retrained model to make an inference amount to merely applying the abstract idea in a computer environment. Applicant's arguments filed 04/23/2025 have been fully considered but they are not persuasive.

In reference to the Argument for the Response to Arguments: Applicant asserts in Remarks pages 4-5 that Examiner applied an incorrect legal statement, namely "the analysis for Step 2A Prong Two involves evaluating if the remaining additional elements recited in the claims integrate the mental processes into a practical application," and requested the Office to look at the claim as a whole to determine whether the claim recites a practical application. In addition, Applicant asserts that the Office failed to reply to the argument about practical application, e.g., by stating that the claims do not recite a practical application, and instead referred to aspects not argued, such as instructions to implement the abstract idea on a computer, a high level of generality, or specific details about or improvements to the machine learning techniques. Applicant's arguments have been fully considered but are not persuasive. The Examiner did not apply an incorrect legal standard under Step 2A, Prong Two. Consistent with MPEP 2106.04(d), the Examiner evaluated the claim as a whole to determine whether the recited judicial exception is integrated into a practical application. In doing so, the Examiner appropriately considered the additional elements beyond the judicial exception to determine whether they meaningfully limit the exception, which is required by the MPEP and controlling guidance. The Examiner previously explained that the additional elements merely recite implementation of the judicial exception using generic computing components and high-level, result-oriented steps. Such elements do not improve the functioning of a computer or another technology, do not effect a particular transformation, and do not apply the judicial exception in a meaningful way beyond its abstract formulation. Accordingly, the claims do not integrate the judicial exception into a practical application under Step 2A, Prong Two. Applicant's assertion that the Office failed to address practical application is not supported. The Office explicitly identified why the claims lack a practical application, including that the alleged machine learning aspects are described at a high level of generality and amount to instructions to apply the abstract idea on a computer. Applicant's arguments filed on 04/23/2025 have been fully considered but they are not persuasive.

In reference to Rejections of Claims Under 35 U.S.C. 103: Applicant's arguments, see Remarks pages 5-15, filed 04/23/2025, with respect to the rejection of claims under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Datta et al. (“Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems”).
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-6, 8-9, 11-12, 14-17, and 19-20 are rejected under 35 U.S.C. 101 for containing an abstract idea without significantly more.

Regarding claim 1:

Step 1 – Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a machine (a system).

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

determining a performance of a machine learning (ML) model for a first group and a second group; – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining bias by the ML model based on a difference of performance between the first group and the second group, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias further comprising determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

identifying one or more features of the ML model that cause the bias by the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining an adjustment to the identified one or more features that cause the bias; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

calculating one or more statistical metrics associated with the one or more features and the bias; – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I.C).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements are:

a memory comprising instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to insignificant extra-solution activity – mere data outputting (see MPEP 2106.05(g)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

a memory comprising instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)).
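For orientation, the claim 1 flow characterized above (per-group performance, a threshold test for bias, an adjustment to the offending features, and retraining) can be summarized in a short sketch. This is a hypothetical illustration only, assuming scikit-learn, a pandas DataFrame X, and invented helper names; it is not the applicant's implementation.

```python
# Hypothetical sketch of the flow the claim language describes.
# All names here are illustrative assumptions, not the claimed code.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def group_performance(model, X, y, group_mask):
    """Accuracy of the model on one group (rows where group_mask is True)."""
    return accuracy_score(y[group_mask], model.predict(X[group_mask]))

def detect_bias(model, X, y, group_a, group_b, threshold=0.05):
    """Flag bias when the performance gap between the two groups exceeds
    a predetermined threshold, as in the claim language."""
    gap = abs(group_performance(model, X, y, group_a) -
              group_performance(model, X, y, group_b))
    return gap > threshold, gap

def retrain_without_features(X, y, drop_cols):
    """One possible 'adjustment': drop the bias-causing features, retrain."""
    X_adj = X.drop(columns=drop_cols)
    model = RandomForestClassifier(random_state=0).fit(X_adj, y)
    return model, X_adj
```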
Regarding claim 2: Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitation: discarding the one or more features from the ML model; and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

Regarding claim 4: Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitation: wherein the adjustment comprises modifying how the one or more features are bucketed by the ML model. – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C), because the limitation involves adjusting features by bucketizing the feature values differently, that is, grouping the feature values into bins or buckets differently.

Regarding claim 5: Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitations: determining that the bias is caused by bias in training data of the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C). updating the training data to eliminate the bias; and – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application. retraining the ML model with the updated training data. – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

Regarding claim 6: Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitation: wherein the first group comprises a protected group and the second group comprises a complement group of the first group. – This claim merely recites a further limitation on the determining of a performance of a machine learning (ML) model for a first group and a second group from claim 1, which is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

Regarding claim 8: Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitation: wherein the UI comprises a chart for group disparity metrics showing a Difference in Means (DM) as a difference between the means of ML model scores for the first group and the second group. – This claim merely recites a further limitation on the providing of a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics from claim 1, which is directed to insignificant extra-solution activity – mere data outputting (see MPEP 2106.05(g)) under Step 2A, Prong 2, and is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)) under Step 2B.
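The Difference in Means metric recited in claim 8 is concrete: DM is the mean model score of the first group minus the mean model score of the second group. A minimal sketch, assuming NumPy; the array and mask names are illustrative, not from the claims:

```python
import numpy as np

def difference_in_means(scores, group1_mask, group2_mask):
    """Difference in Means (DM): gap between mean model scores of two groups.
    E.g., scores of [0.7, 0.9] vs [0.5, 0.7] give DM = 0.8 - 0.6 = 0.2."""
    return float(np.mean(scores[group1_mask]) - np.mean(scores[group2_mask]))
```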
Regarding claim 9: Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see the rejection of claim 1). The additional limitation: wherein the UI comprises a table for a plurality of features causing bias and an influence of each feature on the first group and the second group. – This claim merely recites a further limitation on the providing of a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics from claim 1, which is directed to insignificant extra-solution activity – mere data outputting (see MPEP 2106.05(g)) under Step 2A, Prong 2, and is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)) under Step 2B.

Regarding claim 11:

Step 1 – Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is a process.

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

determining a performance of a machine learning (ML) model for a first group and a second group; – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining bias by the ML model based on a difference of performance between the first group and the second group, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias further comprising determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

identifying one or more features of the ML model that cause the bias by the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining an adjustment to the identified one or more features that cause the bias; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

calculating one or more statistical metrics associated with the one or more features and the bias; – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I.C).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements are:

a memory comprising instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to insignificant extra-solution activity – mere data outputting (see MPEP 2106.05(g)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

a memory comprising instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)).

Regarding claim 12: Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 11, which includes an abstract idea (see the rejection of claim 11). The additional limitation: discarding the one or more features from the ML model; and – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

Regarding claim 14: Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 11, which includes an abstract idea (see the rejection of claim 11). The additional limitation: wherein the adjustment comprises modifying how the one or more features are bucketed by the ML model. – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C), because the limitation involves adjusting features by bucketizing the feature values differently, that is, grouping the feature values into bins or buckets differently.

Regarding claim 15: Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 11, which includes an abstract idea (see the rejection of claim 11). The additional limitations: determining that the bias is caused by bias in training data of the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C). updating the training data to eliminate the bias; and – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application. retraining the ML model with the updated training data. – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

Regarding claim 16:

Step 1 – Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a manufacture (a non-transitory machine-readable storage medium).

Step 2A, Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

determining a performance of a machine learning (ML) model for a first group and a second group; – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining bias by the ML model based on a difference of performance between the first group and the second group, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

the determining of the bias further comprising determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

identifying one or more features of the ML model that cause the bias by the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

determining an adjustment to the identified one or more features that cause the bias; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

calculating one or more statistical metrics associated with the one or more features and the bias; – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I.C).

Step 2A, Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements are:

a non-transitory machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to insignificant extra-solution activity – mere data outputting (see MPEP 2106.05(g)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

a non-transitory machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

utilizing the retrained model to make an inference without the determined bias; – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. – This limitation is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)).

Regarding claim 17: Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 16, which includes an abstract idea (see the rejection of claim 16). The additional limitation: discarding the one or more features from the ML model; and – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C).

Regarding claim 19: Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 16, which includes an abstract idea (see the rejection of claim 16). The additional limitation: wherein the adjustment comprises modifying how the one or more features are bucketed by the ML model. – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C), because the limitation involves adjusting features by bucketizing the feature values differently, that is, grouping the feature values into bins or buckets differently.

Regarding claim 20: Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 16, which includes an abstract idea (see the rejection of claim 16). The additional limitations: determining that the bias is caused by bias in training data of the ML model; – This limitation is directed to the abstract idea of a mental process which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III.C). updating the training data to eliminate the bias; and – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application. retraining the ML model with the updated training data. – Adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) fails to integrate the exception into a practical application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-6, 8-9, 11-12, 14-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhide et al. (US 2020/0184350 A1) (hereafter "Bhide") in view of Lehr et al. (US 2020/0160180 A1) (hereafter "Lehr") and Cabrera et al. ("FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning") (hereafter "Cabrera"), and further in view of Datta et al. ("Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems") (hereafter "Datta").
Regarding claim 1, Bhide explicitly discloses:

A system comprising: a memory comprising instructions; and (Bhide, ¶[0103]: "The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.")

one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising: (Bhide, ¶[0103]: "The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.")

determining bias by the ML model based on a difference of performance between the first group and the second group, wherein determining the bias further comprises determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; (Bhide, ¶[0008]: "The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant."; ¶[0009]: "In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold value, and suggesting a de-biased prediction for the biased sample.") [Examiner's note: ¶[0008] describes a system that detects individual bias in samples by observing how predictions change when protected attributes (such as race or gender) are altered while keeping other features constant. This process identifies instances where bias may arise, contributing to understanding disparities in performance between groups. ¶[0009] discloses a bias detector that identifies samples with bias exceeding a predefined threshold. This threshold provides a concrete measure for determining when the difference in performance between groups qualifies as bias.]

identifying one or more features of the ML model that cause the bias by the ML model; (Bhide, ¶[0045]: "This generalizes from a computationally expensive individual bias checker to create a model that identifies new samples that likely have individual bias and to alter these samples first to achieve group fairness metric requirements."; ¶[0047]: "Each sample from the unprivileged group (di=0) is tested for individual bias and if it is likely to have individual bias (i.e., bi=1), then this sample is assigned the outcome it would have received if it were in the favorable class (i.e., yi=y(xk, 1))"; ¶[0048]: "At periodic intervals, an individual bias check is conducted on test samples and generalized to the entire feature space. When a new unlabeled sample comes in, the model scores it and the generalized individual bias checker predicts whether it will have individual bias.") [Examiner's note: The highlighted passages disclose how an individual bias check is conducted on test samples and generalized to an entire feature space, which illustrates that specific features contributing to bias are identified during this process, since the bias detector predicts whether new samples will have individual bias.]

determining an adjustment to the identified one or more features that cause the bias; (Bhide, ¶[0050]: "The de-biased predictions 350 is performed in post-processing by de-biasing procedure for each sample point by perturbing the protected attribute(s) in a training set (e.g., in 303 of FIG. 3), run the perturbed examples through customer model 320, and picking the most likely prediction for the perturbed data as suggested values to modify."; ¶[0051]: "Thereby, samples predicted to have highest individual biases (among the 'unprivileged group') by the detector are prioritized for correction, suggested correction involves running perturbed examples through the customer model and picking the most likely prediction, and an arbiter can decide whether to choose the original or the suggested de-biased prediction.") [Examiner's note: The de-biasing process involves systematically altering (or "perturbing") the features associated with protected attributes and running them through the model. By running perturbed examples through the model and selecting the "most likely prediction" for the perturbed data, the process effectively determines the adjustments needed to address the bias in the model's outputs.]

calculating one or more statistical metrics associated with the one or more features and the bias; and (Bhide, ¶[0057]: "During the training stage of the bias detector, the invention implements an individual bias checker that perturbs the protected attribute in the payload data for the unprivileged group samples, and computes the individual bias scores for them by finding the difference between the probability of favorable outcomes for the perturbed and the original data.") [Examiner's note: The highlighted passage illustrates the process of calculating a statistical metric (the probability difference) that quantifies the bias associated with the protected feature.]
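Bhide's quoted mechanism, flipping the protected attribute while holding all other features constant and comparing favorable-outcome probabilities against a threshold, maps to a short sketch. All names here (model.predict_proba, a pandas DataFrame X, protected_col) are assumptions for illustration, not Bhide's code:

```python
import pandas as pd

def individual_bias_scores(model, X, protected_col, flipped_value):
    """Bhide-style check: flip the protected attribute, keep all other
    features constant, and measure the change in P(favorable outcome)."""
    X_pert = X.copy()
    X_pert[protected_col] = flipped_value
    p_orig = model.predict_proba(X)[:, 1]       # favorable-class probability
    p_pert = model.predict_proba(X_pert)[:, 1]
    return p_pert - p_orig                      # per-sample bias score

def flag_biased_samples(scores, threshold=0.1):
    """Samples whose score magnitude exceeds the predetermined threshold
    are flagged for de-biasing."""
    return abs(scores) > threshold
```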
Bhide fails to disclose: determining a performance of a machine learning (ML) model for a first group and a second group; retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; utilizing the retrained model to make an inference without the determined bias; and the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model.

However, Cabrera explicitly discloses:

determining a performance of a machine learning (ML) model for a first group and a second group; (Cabrera, Page 50, Figure 4: "In the Subgroup Overview users can see how different subgroups compare to one another according to various performance metrics. As more metrics are selected at the top, additional strip plots are added to the interface. Here, a user has pinned the Female subgroup and hovers over the Male subgroup."; Page 50, Col. 1, ¶[2]: "When a user clicks the 'Generate Subgroups' button (Fig. 3), FAIRVIS splits the data into the specified subgroups and calculates various performance metrics for them. These groups are then represented in the multiple strip plots as lines corresponding to their performance for the respective metric."; Page 50, Col. 1, ¶[5]: "In total, users can select from the following metrics: Accuracy, Recall, Specificity, Precision, Negative Predictive Value, False Negative Rate, False Positive Rate, False Discovery Rate, False Omission Rate, and F1 score. These metrics were selected as they are typically the most common metrics used for evaluating the equity and performance of classification models.") [Examiner's note: a first group, i.e., the Female subgroup; a second group, i.e., the Male subgroup; a performance of the machine learning model for these groups is determined by metrics such as accuracy, recall, and precision, as shown in Figure 4.]

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. (Cabrera, Page 52, Figure 6; Page 52, Col. 1, Section 5.5, ¶[2-3]: "A user is able to see the details for two groups in the Detailed Comparison View, the pinned and hovered group. A group can be pinned when a user clicks on it in the Subgroup Overview or Suggested and Similar Subgroup View, and is designated by a light red across the UI. The hovered group is designated by a light blue across the UI. These two distinct colors allow users to see a selected group's information across various different views. There are three primary components in the Detailed Comparison View, as seen in Fig. 6. The topmost component is a bar chart displaying how a group performs for selected performance metrics."; Col. 1, Section 5.5, ¶[4]: "The second component in the Detailed Comparison View is a bar chart for the ground truth label balance of both selected subgroups. The label imbalance is important because it can often explain extreme values for metrics like recall and precision and can suggest reasons for bias (C6)") [Examiner's note: a user interface, i.e., the Detailed Comparison View UI; information about the bias, i.e., the label imbalance between the statistical metrics (accuracy, precision, recall); the features, i.e., sex, race, marital status, relationship.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Cabrera. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Cabrera teaches a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models. One of ordinary skill would have been motivated to combine Bhide and Cabrera to help data scientists and the general public understand and create more equitable algorithmic systems through interactive visualization, and to enable users to audit the fairness of machine learning models (Cabrera, Abstract).
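The FAIRVIS behavior quoted above, splitting data into subgroups and computing per-group performance metrics, reduces to a few lines. A minimal sketch assuming scikit-learn metrics and a pandas DataFrame whose column names are invented for illustration:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def subgroup_metrics(df, subgroup_col, y_true_col, y_pred_col):
    """Split rows into subgroups and compute per-group performance metrics,
    mirroring the FAIRVIS 'Generate Subgroups' step quoted above."""
    rows = []
    for group, part in df.groupby(subgroup_col):
        rows.append({
            "subgroup": group,
            "n": len(part),
            "accuracy": accuracy_score(part[y_true_col], part[y_pred_col]),
            "precision": precision_score(part[y_true_col], part[y_pred_col]),
            "recall": recall_score(part[y_true_col], part[y_pred_col]),
        })
    return pd.DataFrame(rows)
```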
However, Datta explicitly discloses:

the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, (Datta, Pg. 599, Col. 1, ¶[3]: "We formalize transparency reports by introducing a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of the system.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Datta. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Datta teaches a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. One of ordinary skill would have been motivated to combine Bhide and Datta because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) "obvious to try," choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
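The QII idea in the Datta quote is interventional: randomize one input while holding the others fixed, then measure the change in a quantity of interest such as the positive-classification rate. The sketch below approximates unary QII by permuting a single column; it is a simplified reading of the paper (set influence and Shapley aggregation are omitted), with assumed names throughout:

```python
import numpy as np
import pandas as pd

def unary_qii(model, X, feature, n_samples=10, rng=None):
    """Approximate unary QII of one feature: replace that feature with values
    drawn from its marginal distribution (here, by permutation) and measure
    the average change in the model's positive-classification rate."""
    if rng is None:
        rng = np.random.default_rng(0)
    base_rate = np.mean(model.predict(X) == 1)
    rates = []
    for _ in range(n_samples):
        X_int = X.copy()
        X_int[feature] = rng.permutation(X[feature].to_numpy())  # intervene
        rates.append(np.mean(model.predict(X_int) == 1))
    return abs(base_rate - float(np.mean(rates)))
```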
However, Lehr explicitly discloses: retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]

utilizing the retrained model to make an inference without the determined bias; (Lehr, ¶[0121]: “At step 1309, the process 1300 includes generating predicted outcomes 136 using the trained, validated, and error threshold-satisfying iteration of the predictive model… The predictive model can generate a set of predicted outcomes 136 based on the input data and the weighted variables”) [Examiner’s note: the final predictive model is used for generating predicted outcomes (i.e., making an inference) using the validated and error threshold-satisfying model (i.e., the model excluding the bias-causing features)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Lehr. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Lehr teaches systems and processes for bias removal in a predictive performance model. One of ordinary skill would have been motivated to combine Bhide and Lehr to promote fairness and equity in training prediction models. Removing bias ensures that the model’s predictions are fair and do not disproportionately favor or disadvantage any specific group. A bias-free model provides more reliable and accurate predictions, enabling better and more informed decisions based on actual performance rather than systemic or historical disparities. (Lehr, ¶[0115])

Regarding claim 2, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the adjustment comprises discarding the one or more features from the ML model (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”)

Regarding claim 4, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the adjustment comprises modifying how the one or more features are bucketed by the ML model. (Lehr, ¶[0163]: “The intermediary predictive model can generate two or more sets of predictive outcomes 136. Each set of predictive outcomes 136 can be generated by executing the intermediary predictive model with one of the two or more groupings (e.g., and the corresponding input dataset thereof). In other words, the intermediary predictive model can be duplicated into two or more additional intermediary predictive models (e.g., one predictive model per grouping).”) [Examiner’s note: The highlighted passage discloses modifying how features are grouped or bucketed (groupings) to create multiple versions of intermediary predictive models, demonstrating how features are adjusted]
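As context for the "bucketing" adjustment in claim 4, the sketch below shows one plausible reading of the limitation: re-binning a continuous feature before retraining so the model partitions groups along coarser boundaries. The names and bin edges are hypothetical and not drawn from any cited reference.

```python
import numpy as np

def rebucket(values, bin_edges):
    """Assign each continuous value to a bucket index given bin edges."""
    return np.digitize(values, bin_edges)

age = np.array([19, 23, 35, 41, 58, 64])
fine_buckets = rebucket(age, [20, 30, 40, 50, 60])  # six narrow bins
coarse_buckets = rebucket(age, [40])                # two wide bins
# Retraining on coarse_buckets instead of fine_buckets changes how the
# model can split the population along this feature.
print(fine_buckets, coarse_buckets)
```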
Regarding claim 5, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the instructions further cause the system to perform operations comprising: determining that the bias is caused by bias in training data of the ML model; (Bhide, ¶[0008]: “The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant.”, ¶[0009]: “In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold bias value, and suggesting a de-biased prediction for the biased sample.”) updating the training data to eliminate the bias; and (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”) retraining the ML model with the updated training data. (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]
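One common way to "update the training data to eliminate the bias," as claim 5 recites, is to reweight samples so that group membership is statistically independent of the label. The sketch below uses Kamiran-and-Calders-style reweighing as an illustration; this technique is an assumption on the editor's part, not something taught by the cited references.

```python
import numpy as np

def reweighing_weights(group, label):
    """Per-sample training weights that make the protected group
    statistically independent of the outcome label (reweighing in the
    style of Kamiran & Calders; assumed here for illustration)."""
    group, label = np.asarray(group), np.asarray(label)
    w = np.ones(len(label))
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            if cell.any():
                # Expected cell mass under independence vs. observed mass.
                expected = (group == g).mean() * (label == y).mean()
                w[cell] = expected / cell.mean()
    return w  # pass as sample_weight when retraining the model
```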
Regarding claim 6, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the first group comprises a protected group and the second group comprises a complement group of the first group. (Lehr, ¶[0156]: “At step 1503, the process 1500 includes defining two or more groupings of protected data based on a protected data category or class. The protected data is associated with input data (e.g., and corresponding subjects, such as users) utilized by an initial predictive model (e.g., a trained, validity and bias threshold-compliant performance model). In one example, the category is sex and the two or more groupings include a female grouping and a male grouping.”) [Examiner’s note: “a protected group,” i.e., a female grouping; “a complement group of the first group,” i.e., a male grouping]

Regarding claim 8, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the UI comprises a chart for group disparity metrics showing a Difference in Means (DM) as a difference between the means of ML model scores for the first group and the second group. (Cabrera, Page 50, Figure 4: “In the Subgroup Overview users can see how different subgroups compare to one another according to various performance metrics. As more metrics are selected at the top, additional strip plots are added to the interface. Here, a user has pinned the Female subgroup and hovers over the Male subgroup.” [image omitted]) [Examiner’s note: In Figure 4, the “disparity metrics” showing a Difference in Means can be interpreted as the differences in the performance metrics (e.g., accuracy, precision, recall) between the female group and the male group; “Difference in Means” is being interpreted as the difference between the average values of the performance metrics of the two groups]
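Under the claim's own definition, the Difference in Means metric reduces to a one-line computation, sketched below with hypothetical data. The negative result in the example indicates lower average model scores for the protected group.

```python
import numpy as np

def difference_in_means(scores, group, protected_value):
    """Difference in Means (DM): mean model score for the protected
    group minus mean score for its complement group."""
    scores, group = np.asarray(scores), np.asarray(group)
    prot = group == protected_value
    return scores[prot].mean() - scores[~prot].mean()

scores = np.array([0.62, 0.41, 0.58, 0.77, 0.35, 0.49])
sex = np.array(["F", "F", "M", "M", "F", "M"])
print(difference_in_means(scores, sex, "F"))  # about -0.15
```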
Regarding claim 9, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 1 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the UI comprises a table for a plurality of features causing bias and an influence of each feature on the first group and the second group. (Cabrera, Page 47, Col. 2, ¶[2]: “Once a subgroup for which a model has poor performance has been identified, it can be useful to look at similar subgroups to compare their values and performance. We use similarity in the form of statistical divergence between feature distributions to find subgroups that are statistically similar. Users can then compare similar groups to discover which value differences impact performance or to form more general subgroups of fewer features.”, Page 52, Figure 5: “Here we can see the Suggested and Similar Subgroup View for both suggested and similar subgroups. Users can hover over any card to see detailed feature and performance information in the Detailed Comparison View.” [image omitted]) [Examiner’s note: Figure 5 discloses a table of multiple features causing bias (e.g., marital-status, sex, relationship, education, etc.) and their impact on the female and male subgroups]
Regarding claim 11, Bhide explicitly discloses: determining bias by the ML model based on a difference of performance between the first group and the second group, wherein determining the bias further comprises determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; (Bhide, ¶[0008]: “The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant.”, ¶[0009]: “In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold bias value, and suggesting a de-biased prediction for the biased sample.”) [Examiner’s note: ¶[0008] describes a system that detects individual bias in samples by observing how predictions change when protected attributes (such as race or gender) are altered while keeping other features constant. This process identifies instances where bias may arise, contributing to understanding disparities in performance between groups. ¶[0009] discloses a bias detector that identifies samples with bias exceeding a predefined threshold. This threshold provides a concrete measure for determining when the difference in performance between groups qualifies as bias.]

identifying one or more features of the ML model that cause the bias by the ML model; (Bhide, ¶[0045]: “This generalizes from a computationally expensive individual bias checker to create a model that identifies new samples that likely have individual bias and to alter these samples first to achieve group fairness metric requirements.”, ¶[0047]: “Each sample from the unprivileged group (d_i=0) is tested for individual bias and if it is likely to have individual bias (i.e., b_i=1), then this sample is assigned the outcome it would have received if it were in the favorable class (i.e., y_i=y(x_k, 1))”, ¶[0048]: “At periodic intervals, an individual bias check is conducted on test samples and generalized to the entire feature space. When a new unlabeled sample comes in, the model scores it and the generalized individual bias checker predicts whether it will have individual bias.”) [Examiner’s note: The highlighted passages disclose how an individual bias check is conducted on test samples and generalized to an entire feature space, which illustrates that specific features contributing to bias are identified during this process, since the bias detector predicts whether new samples will have individual bias]

determining an adjustment to the identified one or more features that cause the bias; (Bhide, ¶[0050]: “The de-biased predictions 350 is performed in post-processing by de-biasing procedure for each sample point by perturbing the protected attribute(s) in a training set (e.g., in 303 of FIG. 3), run the perturbed examples through customer model 320, and picking the most likely prediction for the perturbed data as suggested values to modify.”, ¶[0051]: “Thereby, samples predicted to have highest individual biases (among the 'unprivileged group') by the detector are prioritized for correction, suggested correction involves running perturbed examples through the customer model and picking the most likely prediction, and an arbiter can decide whether to choose the original or the suggested de-biased prediction.”) [Examiner’s note: The de-biasing process involves systematically altering (or “perturbing”) the features associated with protected attributes and running them through the model. By running perturbed examples through the model and selecting the “most likely prediction” for the perturbed data, the process effectively determines the adjustments needed to address the bias in the model’s outputs]

calculating one or more statistical metrics associated with the one or more features and the bias; and (Bhide, ¶[0057]: “During the training stage of the bias detector, the invention implements an individual bias Checker that perturbs the protected attribute in the payload data for the unprivileged group samples, and computes the individual bias scores for them by finding the difference between the probability of favorable outcomes for the perturbed and the original data.”) [Examiner’s note: The highlighted passage illustrates the process of calculating a statistical metric (the probability difference) that quantifies the bias associated with the protected feature.]

Bhide fails to disclose: determining a performance of a machine learning (ML) model for a first group and a second group; the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model; retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; utilizing the retrained model to make an inference without the determined bias; and providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics.

However, Cabrera explicitly discloses: determining a performance of a machine learning (ML) model for a first group and a second group; (Cabrera, Page 50, Figure 4: “In the Subgroup Overview users can see how different subgroups compare to one another according to various performance metrics. As more metrics are selected at the top, additional strip plots are added to the interface. Here, a user has pinned the Female subgroup and hovers over the Male subgroup.” [image omitted], Page 50, Col. 1, ¶[2]: “When a user clicks the “Generate Subgroups” button (Fig. 3), FAIRVIS splits the data into the specified subgroups and calculates various performance metrics for them. These groups are then represented in the multiple strip plots as lines corresponding to their performance for the respective metric.”, Page 50, Col. 1, ¶[5]: “In total, users can select from the following metrics: Accuracy, Recall, Specificity, Precision, Negative Predictive Value, False Negative Rate, False Positive Rate, False Discovery Rate, False Omission Rate, and F1 score. These metrics were selected as they are typically the most common metrics used for evaluating the equity and performance of classification models.”) [Examiner’s note: a first group, i.e., the female group; a second group, i.e., the male group; the performance of the machine learning model for these groups is determined by metrics such as accuracy, recall, and precision, as shown in Figure 4]

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. (Cabrera, Page 52, Figure 6 [image omitted], Page 52, Col. 1, Section 5.5, ¶[2-3]: “A user is able to see the details for two groups in the Detailed Comparison View, the pinned and hovered group. A group can be pinned when a user clicks on it in the Subgroup Overview or Suggested and Similar Subgroup View, and is designated by a light red across the UI. The hovered group is designated by a light blue across the UI. These two distinct colors allow users to see a selected group’s information across various different views. There are three primary components in the Detailed Comparison View, as seen in Fig. 6. The topmost component is a bar chart displaying how a group performs for selected performance metrics.”, Col. 1, Section 5.5, ¶[4]: “The second component in the Detailed Comparison View is a bar chart for the ground truth label balance of both selected subgroups. The label imbalance is important because it can often explain extreme values for metrics like recall and precision and can suggest reasons for bias (C6)”) [Examiner’s note: a user interface, i.e., the Detailed Comparison View UI; information about the bias, i.e., the label imbalance between the statistical metrics (i.e., accuracy, precision, recall); the features, i.e., sex, race, marital status, relationship]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Cabrera. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Cabrera teaches a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models. One of ordinary skill would have been motivated to combine Bhide and Cabrera to help data scientists and the general public understand and create more equitable algorithmic systems through interactive visualization, and to enable users to audit the fairness of machine learning models. (Cabrera, Abstract)
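To make the per-group performance determination concrete, the sketch below computes a few of the metrics Cabrera lists (accuracy, recall, precision) for each subgroup. It is a minimal illustration with hypothetical names, not FAIRVIS's actual implementation.

```python
import numpy as np

def subgroup_metrics(y_true, y_pred, group):
    """Accuracy, recall, and precision per subgroup, a few of the
    metrics FAIRVIS-style tools compute after splitting the data."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    out = {}
    for g in np.unique(group):
        m = group == g
        tp = np.sum((y_pred == 1) & (y_true == 1) & m)
        fp = np.sum((y_pred == 1) & (y_true == 0) & m)
        fn = np.sum((y_pred == 0) & (y_true == 1) & m)
        out[g] = {
            "accuracy": np.mean(y_pred[m] == y_true[m]),
            "recall": tp / (tp + fn) if (tp + fn) else float("nan"),
            "precision": tp / (tp + fp) if (tp + fp) else float("nan"),
        }
    return out
```

Comparing these dictionaries across the two groups is exactly the "difference of performance" the independent claims recite.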
However, Datta explicitly discloses: the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, (Datta, Pg. 599, Col. 1, ¶[3]: “We formalize transparency reports by introducing a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of the system.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Datta. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Datta teaches a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. One of ordinary skill would have been motivated to combine Bhide and Datta because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try,” choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.

However, Lehr explicitly discloses: retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]

utilizing the retrained model to make an inference without the determined bias; (Lehr, ¶[0121]: “At step 1309, the process 1300 includes generating predicted outcomes 136 using the trained, validated, and error threshold-satisfying iteration of the predictive model… The predictive model can generate a set of predicted outcomes 136 based on the input data and the weighted variables”) [Examiner’s note: the final predictive model is used for generating predicted outcomes (i.e., making an inference) using the validated and error threshold-satisfying model (i.e., the model excluding the bias-causing features)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Lehr. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Lehr teaches systems and processes for bias removal in a predictive performance model. One of ordinary skill would have been motivated to combine Bhide and Lehr to promote fairness and equity in training prediction models. Removing bias ensures that the model’s predictions are fair and do not disproportionately favor or disadvantage any specific group. A bias-free model provides more reliable and accurate predictions, enabling better and more informed decisions based on actual performance rather than systemic or historical disparities. (Lehr, ¶[0115])
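The exclude-and-retrain loop that Lehr's ¶[0141] describes can be sketched in a few lines. The `bias_metric` and `feature_rank` callables below are hypothetical placeholders supplied by the caller; the scikit-learn estimator is an assumed stand-in for the predictive model, and this is not Lehr's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_without_bias(X, y, bias_metric, feature_rank,
                         threshold, max_rounds=5):
    """Iteratively drop the top-ranked bias-contributing feature and
    retrain until the bias metric falls below the threshold (a sketch
    of an exclude-and-retrain loop).

    bias_metric(model, X, y)  -> scalar bias score (assumed callable)
    feature_rank(model, X, y) -> column positions ordered by bias
                                 contribution, worst first (assumed)"""
    keep = list(range(X.shape[1]))
    for _ in range(max_rounds):
        model = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
        if bias_metric(model, X[:, keep], y) <= threshold:
            break  # bias threshold satisfied; stop excluding features
        keep.pop(feature_rank(model, X[:, keep], y)[0])
    return model, keep  # retrained model and surviving feature indices
```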
Regarding claim 12, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 11 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the adjustment comprises discarding the one or more features from the ML model; (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”)

Regarding claim 14, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 11 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: the adjustment comprises modifying how the one or more features are bucketed by the ML model. (Lehr, ¶[0163]: “The intermediary predictive model can generate two or more sets of predictive outcomes 136. Each set of predictive outcomes 136 can be generated by executing the intermediary predictive model with one of the two or more groupings (e.g., and the corresponding input dataset thereof). In other words, the intermediary predictive model can be duplicated into two or more additional intermediary predictive models (e.g., one predictive model per grouping).”) [Examiner’s note: The highlighted passage discloses modifying how features are grouped or bucketed (groupings) to create multiple versions of intermediary predictive models, demonstrating how features are adjusted]

Regarding claim 15, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 11 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: determining that the bias is caused by bias in training data of the ML model; (Bhide, ¶[0008]: “The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant.”, ¶[0009]: “In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold bias value, and suggesting a de-biased prediction for the biased sample.”) updating the training data to eliminate the bias; and (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”) retraining the ML model with the updated training data. (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]

Regarding claim 16, Bhide explicitly discloses: A non-transitory machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: (Bhide, ¶[0103]: “The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.”) determining bias by the ML model based on a difference of performance between the first group and the second group, wherein determining the bias further comprises determining that a difference in performance of the first group and the second group exceeds a predetermined threshold; (Bhide, ¶[0008]: “The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant.”, ¶[0009]: “In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold bias value, and suggesting a de-biased prediction for the biased sample.”) [Examiner’s note: ¶[0008] describes a system that detects individual bias in samples by observing how predictions change when protected attributes (such as race or gender) are altered while keeping other features constant. This process identifies instances where bias may arise, contributing to understanding disparities in performance between groups. ¶[0009] discloses a bias detector that identifies samples with bias exceeding a predefined threshold. This threshold provides a concrete measure for determining when the difference in performance between groups qualifies as bias.]

identifying one or more features of the ML model that cause the bias by the ML model; (Bhide, ¶[0045]: “This generalizes from a computationally expensive individual bias checker to create a model that identifies new samples that likely have individual bias and to alter these samples first to achieve group fairness metric requirements.”, ¶[0047]: “Each sample from the unprivileged group (d_i=0) is tested for individual bias and if it is likely to have individual bias (i.e., b_i=1), then this sample is assigned the outcome it would have received if it were in the favorable class (i.e., y_i=y(x_k, 1))”, ¶[0048]: “At periodic intervals, an individual bias check is conducted on test samples and generalized to the entire feature space. When a new unlabeled sample comes in, the model scores it and the generalized individual bias checker predicts whether it will have individual bias.”) [Examiner’s note: The highlighted passages disclose how an individual bias check is conducted on test samples and generalized to an entire feature space, which illustrates that specific features contributing to bias are identified during this process, since the bias detector predicts whether new samples will have individual bias]

determining an adjustment to the identified one or more features that cause the bias; (Bhide, ¶[0050]: “The de-biased predictions 350 is performed in post-processing by de-biasing procedure for each sample point by perturbing the protected attribute(s) in a training set (e.g., in 303 of FIG. 3), run the perturbed examples through customer model 320, and picking the most likely prediction for the perturbed data as suggested values to modify.”, ¶[0051]: “Thereby, samples predicted to have highest individual biases (among the 'unprivileged group') by the detector are prioritized for correction, suggested correction involves running perturbed examples through the customer model and picking the most likely prediction, and an arbiter can decide whether to choose the original or the suggested de-biased prediction.”) [Examiner’s note: The de-biasing process involves systematically altering (or “perturbing”) the features associated with protected attributes and running them through the model. By running perturbed examples through the model and selecting the “most likely prediction” for the perturbed data, the process effectively determines the adjustments needed to address the bias in the model’s outputs]

calculating one or more statistical metrics associated with the one or more features and the bias; and (Bhide, ¶[0057]: “During the training stage of the bias detector, the invention implements an individual bias Checker that perturbs the protected attribute in the payload data for the unprivileged group samples, and computes the individual bias scores for them by finding the difference between the probability of favorable outcomes for the perturbed and the original data.”) [Examiner’s note: The highlighted passage illustrates the process of calculating a statistical metric (the probability difference) that quantifies the bias associated with the protected feature.]
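Bhide's individual bias score, as quoted from ¶[0057], is the change in the predicted probability of the favorable outcome when the protected attribute is perturbed. A minimal sketch follows; the names are hypothetical and a scikit-learn-style `predict_proba` with a binary protected attribute is assumed.

```python
import numpy as np

def individual_bias_scores(model, X, protected_idx, flipped_value):
    """Perturb the protected attribute and return the change in the
    predicted probability of the favorable outcome (class 1) for each
    sample; large values flag individually biased samples."""
    X = np.asarray(X, dtype=float)
    X_pert = X.copy()
    X_pert[:, protected_idx] = flipped_value  # e.g., 0 -> 1
    p_orig = model.predict_proba(X)[:, 1]
    p_pert = model.predict_proba(X_pert)[:, 1]
    return p_pert - p_orig
```

Comparing each score against a predetermined threshold is the mechanism the rejection maps onto the "exceeds a predetermined threshold" limitation.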
Bhide fails to disclose: determining a performance of a machine learning (ML) model for a first group and a second group; the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model; retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; utilizing the retrained model to make an inference without the determined bias; and providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics.

However, Cabrera explicitly discloses: determining a performance of a machine learning (ML) model for a first group and a second group; (Cabrera, Page 50, Figure 4: “In the Subgroup Overview users can see how different subgroups compare to one another according to various performance metrics. As more metrics are selected at the top, additional strip plots are added to the interface. Here, a user has pinned the Female subgroup and hovers over the Male subgroup.” [image omitted], Page 50, Col. 1, ¶[2]: “When a user clicks the “Generate Subgroups” button (Fig. 3), FAIRVIS splits the data into the specified subgroups and calculates various performance metrics for them. These groups are then represented in the multiple strip plots as lines corresponding to their performance for the respective metric.”, Page 50, Col. 1, ¶[5]: “In total, users can select from the following metrics: Accuracy, Recall, Specificity, Precision, Negative Predictive Value, False Negative Rate, False Positive Rate, False Discovery Rate, False Omission Rate, and F1 score. These metrics were selected as they are typically the most common metrics used for evaluating the equity and performance of classification models.”) [Examiner’s note: a first group, i.e., the female group; a second group, i.e., the male group; the performance of the machine learning model for these groups is determined by metrics such as accuracy, recall, and precision, as shown in Figure 4]

providing a user interface (UI) comprising a display of information about the bias, the one or more features, and the one or more statistical metrics. (Cabrera, Page 52, Figure 6 [image omitted], Page 52, Col. 1, Section 5.5, ¶[2-3]: “A user is able to see the details for two groups in the Detailed Comparison View, the pinned and hovered group. A group can be pinned when a user clicks on it in the Subgroup Overview or Suggested and Similar Subgroup View, and is designated by a light red across the UI. The hovered group is designated by a light blue across the UI. These two distinct colors allow users to see a selected group’s information across various different views. There are three primary components in the Detailed Comparison View, as seen in Fig. 6. The topmost component is a bar chart displaying how a group performs for selected performance metrics.”, Col. 1, Section 5.5, ¶[4]: “The second component in the Detailed Comparison View is a bar chart for the ground truth label balance of both selected subgroups. The label imbalance is important because it can often explain extreme values for metrics like recall and precision and can suggest reasons for bias (C6)”) [Examiner’s note: a user interface, i.e., the Detailed Comparison View UI; information about the bias, i.e., the label imbalance between the statistical metrics (i.e., accuracy, precision, recall); the features, i.e., sex, race, marital status, relationship]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Cabrera. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Cabrera teaches a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models. One of ordinary skill would have been motivated to combine Bhide and Cabrera to help data scientists and the general public understand and create more equitable algorithmic systems through interactive visualization, and to enable users to audit the fairness of machine learning models. (Cabrera, Abstract)

However, Datta explicitly discloses: the determining of the bias comprising determining feature influence in the ML model using Quantitative Input Influence (QII) that measures a degree of influence that each input feature exerts on outputs of the ML model, (Datta, Pg. 599, Col. 1, ¶[3]: “We formalize transparency reports by introducing a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of the system.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Datta. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Datta teaches a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. One of ordinary skill would have been motivated to combine Bhide and Datta because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try,” choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.

However, Lehr explicitly discloses: retraining the ML model based on the adjustment to the one or more features in the ML model to obtain a retrained ML model without the bias; (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]

utilizing the retrained model to make an inference without the determined bias; (Lehr, ¶[0121]: “At step 1309, the process 1300 includes generating predicted outcomes 136 using the trained, validated, and error threshold-satisfying iteration of the predictive model… The predictive model can generate a set of predicted outcomes 136 based on the input data and the weighted variables”) [Examiner’s note: the final predictive model is used for generating predicted outcomes (i.e., making an inference) using the validated and error threshold-satisfying model (i.e., the model excluding the bias-causing features)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Bhide and Lehr. Bhide teaches a bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. Lehr teaches systems and processes for bias removal in a predictive performance model. One of ordinary skill would have been motivated to combine Bhide and Lehr to promote fairness and equity in training prediction models. Removing bias ensures that the model’s predictions are fair and do not disproportionately favor or disadvantage any specific group. A bias-free model provides more reliable and accurate predictions, enabling better and more informed decisions based on actual performance rather than systemic or historical disparities. (Lehr, ¶[0115])

Regarding claim 17, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 16 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the adjustment comprises discarding the one or more features from the ML model. (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”)
Regarding claim 19, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 16 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the adjustment comprises modifying how the one or more features are bucketed by the ML model. (Lehr, ¶[0163]: “The intermediary predictive model can generate two or more sets of predictive outcomes 136. Each set of predictive outcomes 136 can be generated by executing the intermediary predictive model with one of the two or more groupings (e.g., and the corresponding input dataset thereof). In other words, the intermediary predictive model can be duplicated into two or more additional intermediary predictive models (e.g., one predictive model per grouping).”) [Examiner’s note: The highlighted passage discloses modifying how features are grouped or bucketed (groupings) to create multiple versions of intermediary predictive models, demonstrating how features are adjusted]

Regarding claim 20, the combination of Bhide, Lehr and Cabrera discloses all the limitations of claim 16 (as shown in the rejections above). Bhide in view of Lehr and Cabrera further discloses: wherein the machine further performs operations comprising: determining that the bias is caused by bias in training data of the ML model; (Bhide, ¶[0008]: “The starting point for the inventive approach that solves problems in the art is an individual bias detector, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant.”, ¶[0009]: “In an exemplary embodiment, the present invention can provide a post-processing computer-implemented method for post-hoc improvement of instance-level and group-level prediction metrics, the post-processing method including training a bias detector that learns to detect a sample that has an individual bias greater than a predetermined individual bias threshold value with constraints on a group bias, applying the bias detector on a run-time sample to select a biased sample in the run-time sample having a bias greater than the predetermined individual bias threshold bias value, and suggesting a de-biased prediction for the biased sample.”) updating the training data to eliminate the bias; and (Lehr, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables.”) retraining the ML model with the updated training data. (Lehr, ¶[0108]: “With reference to FIG. 13, shown is a model generation and training process 1300.”, ¶[0124]: “In one or more embodiments, if any of the one or more bias thresholds is exceeded, the process 1300 proceeds to bias removal process 1400”, ¶[0141]: “In various embodiments, when the process 1400 proceeds to process 1300, the intermediary model (e.g., metadata 145 thereof) can be used to initialize and train a new predictive model that excludes the one or more top-ranked, bias-contributing variables. Thus, in at least one embodiment, the predictive model generated based on each variable compartment can be a new, novel treatment of the dataset that we are comparing to the treatment of the primary dataset (e.g., in previous iterations of the predictive model). The new predictive model can be analyzed for validity and bias, and the processes 1300 and 1400 can be repeated as required to produce an iteration of the predictive model that does not violate validity or bias thresholds.”) [Examiner’s note: The highlighted passage explains a process in which a predictive model is retrained by modifying its features to address bias. If the intermediary model fails to meet validity or bias thresholds, certain high-ranking features identified as contributing to bias are excluded. Using this adjusted dataset (without the bias-contributing features), a new predictive model is trained. The retraining process is repeated iteratively to refine the model until it meets the required thresholds for validity and bias]

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN, whose telephone number is (571) 270-0693. The examiner can normally be reached Monday through Friday, 7:30 am to 5:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMY TRAN/ Examiner, Art Unit 2126
/VAN C MANG/ Primary Examiner, Art Unit 2126

Prosecution Timeline

Jul 31, 2024
Application Filed
Sep 25, 2024
Non-Final Rejection — §101, §103
Dec 17, 2024
Interview Requested
Dec 23, 2024
Response Filed
Dec 23, 2024
Applicant Interview (Telephonic)
Dec 23, 2024
Examiner Interview Summary
Jan 24, 2025
Final Rejection — §101, §103
Apr 23, 2025
Request for Continued Examination
May 04, 2025
Response after Non-Final Action
Jan 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602582
DYNAMIC DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS
2y 5m to grant; granted Apr 14, 2026
Patent 12468932
IDENTIFYING RELATED MESSAGES IN A NATURAL LANGUAGE INTERACTION
2y 5m to grant; granted Nov 11, 2025
Patent 12462185
SCENE GRAMMAR BASED REINFORCEMENT LEARNING IN AGENT TRAINING
2y 5m to grant; granted Nov 04, 2025
Patent 12423589
TRAINING DECISION TREE-BASED PREDICTIVE MODELS
2y 5m to grant; granted Sep 23, 2025
Patent 12288074
GENERATING AND PROVIDING PROPOSED DIGITAL ACTIONS IN HIGH-DIMENSIONAL ACTION SPACES USING REINFORCEMENT LEARNING MODELS
2y 5m to grant; granted Apr 29, 2025
Study what changed to get past this examiner, based on the 5 most recent grants above.

Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
84%
With Interview (+47.9%)
5y 2m
Median Time to Grant
High
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
