Prosecution Insights
Last updated: April 19, 2026
Application No. 18/548,054

METHODS FOR MITIGATION OF ALGORITHMIC BIAS DISCRIMINATION, PROXY DISCRIMINATION AND DISPARATE IMPACT

Non-Final OA §103

Filed: Aug 25, 2023
Examiner: IDOWU, OLUGBENGA O
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: Solasai
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 71%, above average (452 granted / 636 resolved; +13.1% vs TC average)
Interview Lift: +19.1%, strong (higher allowance rate among resolved cases with an interview)
Typical Timeline: 3y 1m average prosecution; 26 applications currently pending
Career History: 662 total applications across all art units

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 25.2% (-14.8% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 636 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kamkar (US 2022/0164877) in view of Nath (US 12,124,782).
As per claims 1, 19 and 20, Kamkar teaches a method for debiasing machine learning models, the method comprising: obtaining (i) an initial model that is a trained, tree-based machine learning model (first trained tree-based machine learning model, [0125]), (ii) a minimum threshold accuracy (model accuracy condition, [0020], [0078]), and (iii) one or more protected classes (protected classes, [0017], [0075]), wherein the initial model demonstrates disparities with respect to the one or more protected classes (model not satisfying fairness constraints, [0017]); and, based on the one or more protected classes, generating one or more forest models such that (i) predictive accuracy of the one or more forest models is above the minimum threshold accuracy, and (ii) the one or more forest models are less discriminatory than the initial model (training a new model to satisfy stopping criteria, [0075]; considering fairness and accuracy, [0025]).

Kamkar does not teach identifying branches of the initial model to prune, and applying a pruning algorithm to prune the branches of the initial model. In an analogous art, Nath teaches identifying branches of the initial model to prune, and applying a pruning algorithm to prune the branches of the initial model (pruning branches of a model based on hyperparameters to prevent overfitting, col. 8, lines 19-50).

Therefore, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify Kamkar's machine learning system that considers fairness to include pruning the model as described in Nath's machine learning system, for the advantage of producing a fair and efficient model that meets accuracy requirements.
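The claim limitations above describe pruning a trained tree so disparity shrinks while accuracy stays above a floor. A minimal, illustrative sketch of that loop follows; the dict-based tree encoding, all function names, and the toy data are assumptions of this sketch, not taken from the application or the cited references:

```python
# Illustrative only: a greedy prune-to-debias loop in the spirit of the
# claims. Tree encoding and names are this sketch's, not the application's.

def predict(tree, x):
    """Route an observation down the tree to a leaf value."""
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["thresh"] else tree["right"]
    return tree["leaf"]

def accuracy(preds, labels):
    return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)

def adverse_impact_ratio(preds, groups):
    """Favorable-outcome rate of the protected group over the control group's."""
    def rate(g):
        out = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(out) / len(out)
    return rate("protected") / rate("control")

def sequential_prune(tree, candidates, X, y, groups, min_accuracy):
    """Try pre-ranked candidate nodes in order, replacing each subtree with
    a leaf; keep a prune only if accuracy stays above the floor, and track
    (accuracy, AIR) every iteration."""
    history = []
    for node, leaf_value in candidates:
        saved = dict(node)
        node.clear()
        node["leaf"] = leaf_value            # prune: subtree becomes a leaf
        preds = [predict(tree, x) for x in X]
        acc, air = accuracy(preds, y), adverse_impact_ratio(preds, groups)
        history.append((acc, air))
        if acc < min_accuracy:               # revert prunes that cost too much
            node.clear()
            node.update(saved)
    return history

# Toy model: the f1 split drives the disparity between groups.
subtree = {"feature": "f1", "thresh": 0.5,
           "left": {"leaf": 1}, "right": {"leaf": 0}}
tree = {"feature": "f0", "thresh": 0.5,
        "left": {"leaf": 1}, "right": subtree}

X = [{"f0": 0, "f1": 0}, {"f0": 0, "f1": 1}, {"f0": 1, "f1": 1},
     {"f0": 1, "f1": 0}, {"f0": 1, "f1": 1}, {"f0": 1, "f1": 0}]
y = [1, 1, 0, 1, 1, 1]
groups = ["protected", "control", "protected", "control", "protected", "control"]

history = sequential_prune(tree, [(subtree, 1)], X, y, groups, min_accuracy=0.8)
```

In this toy data the unpruned model approves 1 of 3 protected observations versus 3 of 3 controls (AIR = 1/3); collapsing the disparity-driving split restores parity (AIR = 1.0) while accuracy stays above the 0.8 floor.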
As per claim 2, the combination teaches the method further comprising: obtaining a maximum number of nodes that can be removed; and, while identifying branches of the initial model to prune, avoiding selecting branches that would remove more than the maximum number of nodes (Nath: minimum and maximum tree depth, col. 8, lines 19-50).

As per claim 3, the combination teaches wherein identifying branches of the initial model comprises identifying branches that result in the largest disparity across protected and control groups (Kamkar: adverse impact ratio, [0107]).

As per claim 4, the combination teaches wherein disparity is measured using a difference of average predictions (Kamkar: adverse impact ratio, [0107]).

As per claim 5, the combination teaches wherein disparity is measured using a measure of disparate impact (Kamkar: disparate impact, [0026]).

As per claim 6, the combination teaches wherein the measure of disparate impact is adverse impact ratio (AIR) (Kamkar: adverse impact ratio, [0107]).

As per claim 7, the combination teaches wherein disparity caused by a split in the initial model is measured by disparity caused by the subtree originating from that split, thereby filtering observations seen by each split through the nodes which precede it (Kamkar: disparate impact, [0026]).

As per claim 8, the combination teaches wherein disparity caused by a single split in the initial model is measured by disparity of the subtree of depth 1 originating from that split, thereby isolating the split of interest rather than depending on nodes which follow from the split (Kamkar: disparate impact, [0026]).

As per claim 9, the combination teaches wherein measuring disparity comprises treating the two children nodes of the split as leaves, and computing scores for the two children nodes using a weighted average (Kamkar: disparate impact, [0026]).
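The disparity measures recited in claims 4-9 are compact enough to write out. A hedged sketch for binary approve(1)/deny(0) predictions; all function names and the example data are this sketch's assumptions, not the Office action's:

```python
# Illustrative only: disparity measures named in claims 4-9.

def adverse_impact_ratio(preds, groups):
    """AIR (claim 6): favorable-outcome rate of the protected group divided
    by that of the control group. 1.0 means parity; the four-fifths rule
    conventionally flags values below 0.8."""
    def favorable_rate(g):
        out = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(out) / len(out)
    return favorable_rate("protected") / favorable_rate("control")

def mean_prediction_difference(preds, groups):
    """Claim 4's measure: difference of average predictions by group."""
    def mean(g):
        out = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(out) / len(out)
    return mean("control") - mean("protected")

def split_score(left_scores, right_scores):
    """Claim 9's idea: treat a split's two children as leaves and score the
    split as the sample-weighted average of the children's mean scores."""
    n_l, n_r = len(left_scores), len(right_scores)
    left_mean = sum(left_scores) / n_l
    right_mean = sum(right_scores) / n_r
    return (n_l * left_mean + n_r * right_mean) / (n_l + n_r)

preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["control"] * 4 + ["protected"] * 4
air = adverse_impact_ratio(preds, groups)          # 0.25 / 0.75
gap = mean_prediction_difference(preds, groups)    # 0.75 - 0.25
```

Here the control group's favorable rate is 0.75 and the protected group's is 0.25, so AIR is 1/3 (well under the 0.8 flag) and the mean-prediction gap is 0.5.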
As per claim 10, the combination teaches wherein branches of the initial model are identified by considering each node as a class predictor and ranking the nodes according to how well they separate classes, as measured by the F1 score (Kamkar: F1 score, [0108]).

As per claim 11, the combination teaches wherein identifying branches of the initial model comprises calculating a group separation metric that indicates how well a given node separates group members based on the one or more protected classes (Kamkar: predicting, [0044]).

As per claim 14, the combination teaches wherein identifying branches of the initial model includes ranking or ordering nodes of the initial model such that the best candidates for removal are placed at the front (Nath: bottom-up pruning, col. 8, lines 19-50).

As per claim 15, the combination teaches wherein the pruning algorithm is a sequential algorithm, wherein nodes are removed in order, and model accuracy and disparate impact on unseen data are tracked for every iteration (Nath: bottom-up pruning, col. 8, lines 19-50; Kamkar: iteration, [0074]).

As per claim 16, the combination teaches selecting a node identifying scheme, based on either disparity driving or group separation, for identifying branches of the initial model to prune, based on a context of the dataset used to train or validate the initial model (Nath: pruning, col. 8, lines 19-50).

As per claim 17, the combination teaches wherein nodes are identified for removal based on path traversals of a training dataset used to train the initial model (Nath: pruning, col. 8, lines 19-50).

As per claim 18, the combination teaches wherein the initial model predicts probabilistic class membership for unseen data, and has the structure of a collection of decision trees (Kamkar: predictions, [0044]).

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kamkar (US 2022/0164877) in view of Nath (US 12,124,782), and in further view of Hackl (US 2021/0269882).

The combination of Kamkar and Nath teaches retraining tree-based machine learning models to avoid biases. The combination does not teach wherein calculating the group separation metric includes: computing, for each node, counts of protected and control group members that are sent down the left and right branches of the node, when considering the node as a group predictor by looking at the group identification of observations that land in the node's two children nodes corresponding to the left and right branches of the node; and computing a confusion matrix-based metric by placing the counts into a 2-by-2 contingency table. In an analogous art, Hackl teaches calculating the group separation metric in this manner (contingency tables with Matthews coefficients, [0203]). Therefore, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the combination of Kamkar and Nath to include contingency tables as described in Hackl's machine learning system, for the advantage of better tracking relationships between categories.

As per claim 13, the combination teaches wherein the group separation metric is defined by the absolute value of the Matthews correlation coefficient of the contingency table (Hackl: Matthews coefficient, [0203]).
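The contingency-table metric of claims 12-13 is short enough to write out directly. A hedged sketch; the count layout and names are assumptions of this example, not the application's implementation:

```python
import math

# Illustrative only: claims 12-13 view a node as a "group predictor",
# tally protected vs. control observations routed down the left vs. right
# branch into a 2x2 contingency table, and take the absolute Matthews
# correlation coefficient as the group-separation metric.

def group_separation(protected_left, protected_right, control_left, control_right):
    a, b = protected_left, protected_right
    c, d = control_left, control_right
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    if denom == 0:
        return 0.0                      # degenerate table: no signal
    mcc = (a * d - b * c) / denom       # Matthews correlation coefficient
    return abs(mcc)                     # claim 13 uses the absolute value

# A split that routes the two groups entirely apart separates them perfectly...
perfect = group_separation(10, 0, 0, 10)   # 1.0
# ...while an even split carries no group information.
none = group_separation(5, 5, 5, 5)        # 0.0
```

A high score marks a node whose split effectively predicts group membership, which is what makes it a candidate for pruning under the claimed method.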
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUGBENGA O. IDOWU, whose telephone number is (571) 270-1450. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jung Kim, can be reached at 571-272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OLUGBENGA O IDOWU/
Primary Examiner, Art Unit 2494

Prosecution Timeline

Aug 25, 2023: Application Filed
Mar 05, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12591707
Privacy Preserving Insights and Distillation of Large Language Model Backed Experiences
2y 5m to grant • Granted Mar 31, 2026

Patent 12587397
MULTI DIMENSION BLOCKCHAIN
2y 5m to grant • Granted Mar 24, 2026

Patent 12585753
VALIDATED MOVEMENT OF SHARED IHS HARDWARE COMPONENTS
2y 5m to grant • Granted Mar 24, 2026

Patent 12562912
APPLICATION PROGRAMMING INTERFACE (API) PROVISIONING USING DECENTRALIZED IDENTITY
2y 5m to grant • Granted Feb 24, 2026

Patent 12556416
METHOD AND SYSTEM FOR ATOMIC, CONSISTENT AND ACCOUNTABLE CROSS-CHAIN REWRITING
2y 5m to grant • Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 90% (+19.1%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 636 resolved cases by this examiner. Grant probability derived from career allow rate.
