Prosecution Insights
Last updated: April 19, 2026
Application No. 18/349,409

SYSTEMS AND METHODS FOR MITIGATING BIAS IN MACHINE LEARNING MODELS

Non-Final OA (§101, §103)
Filed: Jul 10, 2023
Examiner: ARJOMANDI, NOOSHA
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Verizon Patent and Licensing Inc.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86% (above average): 547 granted / 635 resolved, +31.1% vs TC avg
Interview Lift: +9.9% (moderate, roughly +10%) on resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 15 applications currently pending
Career History: 650 total applications across all art units

Statute-Specific Performance

§101: 19.4% (-20.6% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 635 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this application (No. 18/349,409, filed on July 10, 2023).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on April 22, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more, and therefore is not patent eligible.

I. Step 2A, Prong One – Recitation of an Abstract Idea

Claim 1 recites limitations that fall within the category of abstract ideas, including mathematical concepts and mental processes.
Claim 1 recites “A method, comprising: receiving, by a device, protected attribute data, observation data, and target variable data associated with a machine learning model; including, by the device, intersectional groups in the protected attribute data to expand a quantity of demographic subgroups and to generate modified protected attribute data; calculating, by the device, an expected proportion of individuals with the modified protected attribute data being in a particular group and the target variable data being positive; calculating, by the device, an observed proportion of individuals with the modified protected attribute data being in the particular group and the target variable data being positive; determining, by the device, observation weights based on the expected proportion and the observed proportion; and utilizing, by the device, the observation data and the observation weights to train the machine learning model and generate a trained machine learning model.” These limitations involve mathematical relationships and statistical computations, which are recognized as abstract ideas under the USPTO Subject Matter Eligibility Guidance.

II. Step 2A, Prong Two – Directed to the Abstract Idea

The claim as a whole is directed to the abstract idea because the focus of the claim is on organizing and analyzing demographic information and applying statistical weighting in machine learning training, rather than a specific improvement to computer technology.

III. Step 2B – No Inventive Concept

The claim does not include additional elements that amount to significantly more than the abstract idea. The recited device and training steps are merely generic computer implementation of the abstract mathematical calculations. Accordingly, claim 1 is rejected under 35 U.S.C. §101 as being directed to an abstract idea without additional elements that amount to significantly more.
Dependent claims 2-7 depend from independent claim 1 and therefore recite similar limitations as in claim 1, and are rejected for the same reasons as set forth above. Independent claims 8 and 15 are rejected based on the same rationale as claim 1 above. Dependent claims 9-14 and 16-20 are rejected for the same reasons as set forth above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Ramamurthy et al. (US 20210158204 A1) (hereinafter Ramamurthy) in view of Ribera et al. (US 20210073675 A1) (hereinafter Ribera).
As per claims 1, 8 and 15, Ramamurthy discloses receiving, by a device, protected attribute data, observation data, and target variable data associated with a machine learning model [training examples Xi, and binary target variable labels Yi for training a machine learning model, paragraphs 35-42]; including, by the device, intersectional groups in the protected attribute data to expand a quantity of demographic subgroups and to generate modified protected attribute data [expanding demographic subgroups by evaluating intersectional groups, including gender-race intersections, for fairness assessment, paragraphs 60-65]; calculating, by the device, an expected proportion of individuals with the modified protected attribute data being in a particular group and the target variable data being positive; calculating, by the device, an observed proportion of individuals with the modified protected attribute data being in the particular group and the target variable data being positive [it is assumed that one of the labels is a more favorable outcome than the other. Unlabeled data {x.sub.n+1, . . . x.sub.n+m} is obtained from a target domain to which it is desired to assign labels. In one embodiment, S={1, . . . , n} and T={n+1, . . . , n+m} are used to distinguish the index sets from the source (S) and target (T) domains, paragraph 54; see also observed proportions of positive outcomes within groups via group-conditioned prevalence Pr(Y=1|G=k), paragraphs 48-53].

However, Ramamurthy does not teach determining, by the device, observation weights based on the expected proportion and the observed proportion; and utilizing, by the device, the observation data and the observation weights to train the machine learning model and generate a trained machine learning model.
On the other hand, Ribera discloses determining, by the device, observation weights based on the expected proportion and the observed proportion; and utilizing, by the device, the observation data and the observation weights to train the machine learning model and generate a trained machine learning model [the use of a machine learning model to predict an estimate of quality of an image after compression and subsequent decompression. In more detail, an original image 210 is compressed 220 and then decompressed 230 using an image compression algorithm to generate a reconstructed image 240, where the image compression algorithm is configured by one or more input parameters. The original image 210 and the reconstructed image 240 are supplied to a trained predictive model 250, which is trained to compute an estimate of quality 260, paragraph 35; teaches computing a weight function based on distribution/density characteristics and applying these weights during model training through weighted loss minimization, paragraphs 28-35].

It would have been obvious to one of ordinary skill in the art to determine observation weights based on expected versus observed group-positive outcome proportions because Ramamurthy motivates controlling group prevalence for fairness and Ribera provides an explicit mechanism for calculating and applying such weights in training. The combination represents a predictable use of known weighting techniques to improve fairness outcomes.

As per claim 2, Ramamurthy discloses wherein the intersectional groups include two or more particular groups that include the particular group [the definition of protected groups is assumed given and depends on the application context, paragraph 53].
As per claim 3, Ramamurthy discloses wherein the intersectional groups include intersectional conditional probabilities in the protected attribute data [particular attention is made to situations in which protected attribute data are available only in the source or target domain, paragraph 57].

As per claims 4 and 16, Ramamurthy discloses wherein determining the observation weights based on the expected proportion and the observed proportion comprises: dividing the expected proportion by the observed proportion to determine the observation weights [the ‘workclass’ (worker class) variable is used to divide the dataset into source and target populations, paragraph 88].

As per claims 5, 17 and 20, Ribera discloses wherein utilizing the observation data and the observation weights to train the machine learning model and generate the trained machine learning model comprises: applying the observation weights to the observation data to obtain weighted observation data; and training the machine learning model with the weighted observation data to generate the trained machine learning model [training a continuous machine learning model by weighting a loss function used to train the model to compensate for imbalances in the distribution of the training data across the input domain, paragraph 6].

As per claim 6, Ribera discloses wherein the machine learning model is a classifier machine learning model [training models for classification (as opposed to regression). In particular, aspects of embodiments of the present disclosure relate to addressing imbalanced data in making predictions of continuous values (as opposed to discrete classifications), such as in regression models. One example of a technique used in training classification models from imbalanced training data (e.g., where there is a large disparity in the number of samples in different ones of the classes) includes oversampling (or duplicating) data points in the underrepresented classes (classes with fewer samples) in the training data and performing the training with this modified data set, paragraph 39].

As per claims 7, 18 and 20, Ribera discloses utilizing the trained machine learning model to make one or more predictions [Fig. 2, a machine learning model to predict an estimate of quality of an image after compression and subsequent decompression].

As per claim 9, Ramamurthy discloses wherein the bias metric is Cohen's D [graph 600 shows the variation of fairness and accuracy metrics, paragraph 82].

As per claim 10, Ramamurthy discloses wherein the bias metric is associated with two or more protected groups [Fig. 3].

As per claim 11, Ramamurthy discloses wherein the bias measure identifies distributional differences between protected groups among the feature data [prevalence differences between groups are therefore a measure of bias in the dataset, paragraph 60].

As per claim 12, Ramamurthy discloses wherein the bias measure generates multiple pairwise-comparisons between the feature data [the threshold for computing the TSP metric since it produces a fraction of positive predictions comparable to the true prevalence of 24%, paragraph 88].

As per claim 13, Ramamurthy discloses wherein the one or more processors, when utilizing the min-max normalization on the feature values and the bias measure to generate the normalized feature values and the normalized bias measure, are configured to: determine a hyperparameter associated with the min-max normalization; and generate the normalized feature values and the normalized bias measure based on the hyperparameter [thresholded score parity applies when the score is thresholded to yield a binary prediction. It is used below as a second fairness metric to evaluate different embodiments. If a score satisfies thresholded parity for all thresholds t∈[0,1], then it also satisfies parity in the strong sense above. Strong score parity implies both mean and thresholded parity, paragraph 59].

As per claim 14, Ramamurthy discloses utilizing the trained machine learning model to make one or more predictions [thresholded score parity applies when the score is thresholded to yield a binary prediction, paragraph 59].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOOSHA ARJOMANDI whose telephone number is (571)272-9784. The examiner can normally be reached on (571)272-9784. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah, can be reached on (571)272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
February 18, 2026 /NOOSHA ARJOMANDI/Primary Examiner, Art Unit 2166
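The weighting scheme recited in claim 1, and mapped to the references above, amounts to a reweighing pre-processing step: form intersectional subgroups from the protected attributes, compare the expected joint proportion P(G=g)·P(Y=y) against the observed joint proportion P(G=g, Y=y), and weight each observation by their ratio. The sketch below is purely illustrative of that technique, not the applicant's actual implementation; the function name and data layout are assumptions.

```python
import numpy as np

def reweighing_weights(protected_attrs, labels):
    """Per-observation weights = expected joint proportion / observed joint proportion.

    protected_attrs: one tuple of protected attributes per observation,
                     e.g. (gender, race); each unique tuple is one
                     intersectional demographic subgroup.
    labels: binary target variable per observation (1 = positive outcome).
    """
    groups = [tuple(a) for a in protected_attrs]
    labels = np.asarray(labels)
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in set(groups):
        in_g = np.array([grp == g for grp in groups])
        for y in (0, 1):
            mask = in_g & (labels == y)
            if not mask.any():
                continue
            observed = mask.sum() / n                      # observed P(G=g, Y=y)
            expected = in_g.mean() * (labels == y).mean()  # expected P(G=g) * P(Y=y)
            weights[mask] = expected / observed
    return weights
```

In practice the returned weights would be passed to whatever trainer accepts per-sample weights (for example, a `sample_weight` argument in scikit-learn estimators), so that the weighted data drives the "train the machine learning model" step of the claim.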

Prosecution Timeline

Jul 10, 2023
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596732
GENERATIVE ARTIFICIAL INTELLIGENCE (AI) CONSTRUCTION SPECIFICATION INTERFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12591555
SYSTEM AND METHODS FOR LIVE DATA MIGRATION
2y 5m to grant Granted Mar 31, 2026
Patent 12587510
SYSTEMS AND METHODS FOR MANAGED DATA TRANSFER
2y 5m to grant Granted Mar 24, 2026
Patent 12580782
SYSTEMS AND METHODS FOR PROCESSING BLOCKCHAIN TRANSACTIONS
2y 5m to grant Granted Mar 17, 2026
Patent 12572812
GRAPH NEURAL NETWORKS FOR PARTICLE ACCELERATOR FACILITIES
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 96% (+9.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
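As a sanity check on how the headline figures relate, assuming the interview lift is additive in percentage points (which matches the displayed numbers):

```python
# Career allow rate from the examiner's resolved cases: 547 granted / 635 resolved.
career_allow_rate = 547 / 635          # about 0.861
interview_lift = 0.099                 # +9.9 percentage points with interview
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 86
print(round(with_interview * 100))     # 96
```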
