DETAILED ACTION
This action is in response to the filing of 10-27-2025. Claims 1-20 are pending and are addressed below.
Claim Rejections - 35 USC § 101
The rejection of claims 1-20 under 35 U.S.C. 101 set forth in the previous Office action has been withdrawn.
Allowable Subject Matter
Claims 5-8, 15-17 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable over the prior art if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Miroshnikov et al. (“Miroshnikov,” US 20210383268 A1) in view of Dalli et al. (“Dalli,” US 20220012591 A1) and Venugopalan et al. (“Venugopalan,” US 20220383167 A1).
Claim 1: Miroshnikov discloses a computer-implemented method for execution by one or more processors in a special purpose computing machine to eliminate bias from artificial intelligence (AI) systems, wherein the execution of the method comprises:
identifying a plurality of features derived from one or more class identifiers represented in training data fed to an AI system for the purpose of training a predictive model in the AI system (Paragraphs 8, 33, 64 and 66; input vector data is inferred to be paired with protected attributes, and input variables (features) are grouped/listed);
Miroshnikov further discloses the predictive model having:
one or more input layers,
one or more output layers,
one or more hidden layers connecting the one or more input layers to the one or more output layers, and
at least one edge connecting two layers in the predictive model, the edge representing an interaction between features in the two layers and being associated with a weight, which is adjustable to train the predictive model towards less bias (Miroshnikov: Paragraph 78; CNNs are a type of deep neural network containing the recited layers (including hidden layers) and use backpropagation for training to modify bias);
identifying a first list of features, one or more features in the first list correlated with the one or more class identifiers according to the correlation analysis (Miroshnikov: Paragraphs 8 and 66; groups/list created from input);
Regarding creating a second list including sets of input features associated with at least one latent feature in a hidden layer of the predictive model, the second list identifying combinations of input features that are not allowed to interact due to learned nonlinearities that result in bias in the hidden layer: Miroshnikov also discloses that an output vector is provided to a bias engine (Figure 2:104 and Paragraph 29). The input vectors are run through the model and produce output vectors (the model can include a CNN, Paragraph 27, which has a hidden layer and latent space associated with the input/output vectors). The model is evaluated for bias based on the vectors (Figure 3:304 and Paragraph 64), groupings are then made (Figure 3:308 and Paragraph 66), and, last, a second list/grouping is made of the inputs that are in association (Figure 4 and Paragraph 71);
Miroshnikov may not explicitly disclose conducting correlation analysis of input features from a list of raw variables, r, in a dataset and a plurality of derived features, x, with one or more class identifiers in the list of class identifiers and features derived from these class identifiers;
Dalli is provided to disclose functionality for mitigating bias (Paragraph 44); further, the system looks at correlated variables as part of the determination (Paragraphs 65-67). The analysis also utilizes raw and transformed data (Paragraphs 78 and 152-153). This process is then utilized to detect bias (Paragraph 154).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide analysis on different forms of data, as taught by Dalli.
One would have been motivated to provide the analysis functionality as a way to improve correlations, ensuring a more precise understanding of model focus (Paragraph 154), which would deliver a more robust model.
Venugopalan is also provided to disclose bias detection using correlation analysis, where variables are provided, clusters are formed, and features are determined for correlation (Figure 3:317 and Paragraphs 29-31).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and expand the correlation analysis for extended explainability, as taught by Venugopalan.
One would have been motivated to provide the functionality as a way to improve correlations, which enhances explainability, thereby extending analysis capability.
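For illustration only (this sketch is not part of the record and is not drawn from the claims or the cited references; all variable names and values are hypothetical), a correlation analysis of a raw variable r and a derived feature x against a class identifier, as discussed in the mappings above, could be sketched as:

```python
import numpy as np

# Hypothetical data: r is a raw variable, x a derived (transformed) feature,
# and class_id a binary class identifier correlated with r by construction.
rng = np.random.default_rng(1)
r = rng.normal(size=200)
x = r ** 2 + rng.normal(scale=0.1, size=200)
class_id = (r > 0).astype(float)

# Pearson correlation of each variable with the class identifier.
corr_r = np.corrcoef(r, class_id)[0, 1]
corr_x = np.corrcoef(x, class_id)[0, 1]

# Variables exceeding an (arbitrary, illustrative) threshold would be placed
# on the "first list" of features correlated with the class identifier.
first_list = [name for name, c in (("r", corr_r), ("x", corr_x)) if abs(c) > 0.3]
```

Here only r lands on the list: x = r² is symmetric in r and therefore nearly uncorrelated with the sign-based identifier, even though it is derived from r.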
Additionally, in combination, Miroshnikov and Dalli disclose training the predictive model using the first list and the second list to eliminate the bias from the predictive model by removing from the predictive model one or more interactions between the identified set of combinations of input features that are not allowed to interact due to the learned nonlinearities (Miroshnikov: Figure 3:310 and Paragraphs 24, 48, 52-53; groups are neutralized in order to remove bias from the model (Paragraphs 63-64 and 67); a loss function of inputs/features is utilized to adjust parameters of the model (train); and Dalli: Paragraph 81; pruning interactions).
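For illustration only (hypothetical values, not part of the record or the cited art), removing a disallowed interaction between input features, as in the pruning discussed above, could amount to zeroing the edge weights connecting those features to a shared hidden unit:

```python
import numpy as np

# Hypothetical input-to-hidden weight matrix for a small network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

# Hypothetical "second list": input features 0 and 2 may not interact
# through hidden unit 1, expressed as (input_feature, hidden_unit) pairs.
disallowed = [(0, 1), (2, 1)]

for feat, hidden in disallowed:
    W[feat, hidden] = 0.0  # sever the edge so the interaction cannot form
```

After pruning, hidden unit 1 can no longer combine features 0 and 2, while all other edges are left intact.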
Claims 11 and 18 are similar in scope to claim 1 and therefore rejected under the same rationale (Miroshnikov: Paragraph 14; processor, memory and CRM).
Claims 2-4, 9-10, 12-14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Miroshnikov et al. (“Miroshnikov,” US 20210383268 A1), Dalli et al. (“Dalli,” US 20220012591 A1) and Venugopalan et al. (“Venugopalan,” US 20220383167 A1) in further view of Zoldi et al. (“Zoldi,” US 20190354853 A1).
Claim 2: Miroshnikov, Dalli and Venugopalan disclose the method of claim 1 and teach examining and mitigating bias (Miroshnikov: Paragraphs 48, 64 and Dalli: Paragraphs 78-79), but may not explicitly disclose wherein, for one or more hidden layers in an interpretable neural network model, interpretable latent features in the hidden layers are extracted to investigate whether a first latent feature from among the latent features in the hidden layers contains a bias.
Zoldi is provided to disclose functionality for generating explainable latent features and further discloses investigating features within a hidden layer (Zoldi: Paragraphs 8, 29-31 and 54-55).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide analysis of hidden layers, as taught by Zoldi. One would have been motivated to provide the analysis functionality as a way to improve correlations, ensuring a more accurate understanding of features, which would deliver a more robust model.
Claim 3: Miroshnikov, Dalli, Venugopalan and Zoldi disclose the method of claim 2, wherein the first latent feature is determined to be biased, in response to determining that the first latent feature results in a discriminatory distribution against a protected class of individuals identified by the one or more class identifiers (Miroshnikov: Paragraphs 10-11, 33 (classifier), 44 (determines contributor to bias) and 64; determine bias of protected class based on distributions).
Claim 4: Miroshnikov, Dalli, Venugopalan and Zoldi disclose the method of claim 3, wherein for a protected class, the latent feature output is binned into N bins, such that N is a universal constant specified per latent feature (Miroshnikov: Paragraphs 8-9, 12, 66 (correlated variables) and 69, and Zoldi: Paragraphs 28-31; provides N as a constant of inputs for interpretability).
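For illustration only (hypothetical values, not from the claims or the cited references), binning a latent feature's output into N equal-width bins, with N fixed per latent feature as recited, could be sketched as:

```python
import numpy as np

# N is treated as a constant specified for this (hypothetical) latent feature.
N = 5
latent_output = np.array([0.10, 0.40, 0.35, 0.80, 0.95, 0.20, 0.60])

# Equal-width bin edges over the observed range; digitize assigns each
# output to a bin index in 0..N-1 (the max value is clipped into the last bin).
edges = np.linspace(latent_output.min(), latent_output.max(), N + 1)
bins = np.clip(np.digitize(latent_output, edges) - 1, 0, N - 1)
```

The resulting per-bin distributions could then be compared across protected-class values to look for discriminatory skew.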
Claim 9: Miroshnikov, Dalli, Venugopalan and Zoldi disclose the method of claim 3, wherein determination of a biased latent feature towards a class value results in determining the combination of features contributing to the latent feature and the combination of features being added to the second list of sets of input features (Miroshnikov: Paragraphs 8-9 (group bias contribution), 44, 48 (determine features which contribute to bias), 66 (correlated variables) and 69 (variables determined for adding to grouping/listing)).
Claim 10: Miroshnikov, Dalli, Venugopalan and Zoldi disclose the method of claim 9, wherein the biased latent feature is approximated with a sparse set of multiple latent features to explode the latent feature into a set of lower complexity latent features and nonlinearities, the sparse set of lower complexity latent features being investigated for bias to determine which lower complexity latent features are identified as being biased, wherein the identified latent features are added to the second list of sets of input features (Zoldi: Paragraphs 28-31; explode nodes in order to find a simpler relationship of the features).
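For illustration only (a hypothetical sketch, not drawn from Zoldi or the claims), approximating a complex latent feature with a sparse set of simpler candidate features could be done by a least-squares fit that keeps only candidates with non-negligible coefficients:

```python
import numpy as np

# Hypothetical candidate features a, b, c; the latent feature depends
# only on a and b, so a sparse approximation should discard c.
rng = np.random.default_rng(2)
a, b, c = rng.normal(size=(3, 200))
latent = 2.0 * a - 1.5 * b

# Fit the latent feature as a linear combination of the candidates and
# keep only those whose coefficient magnitude exceeds a small threshold.
candidates = np.stack([a, b, c], axis=1)
coef, *_ = np.linalg.lstsq(candidates, latent, rcond=None)
sparse_set = [name for name, w in zip("abc", coef) if abs(w) > 0.1]
```

Each surviving lower-complexity feature could then be investigated for bias individually, as the claim recites.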
Claims 12 and 19 are similar in scope to claim 2 and therefore rejected under the same rationale.
Claim 13 is similar in scope to claim 3 and therefore rejected under the same rationale.
Claim 14 is similar in scope to claim 4 and therefore rejected under the same rationale.
Response to Arguments
Applicant's arguments have been considered, and the 101 rejection has been withdrawn. Regarding the 103 rejection, Venugopalan is provided to disclose additional correlation analysis, and additional cited areas of the prior art have been further mapped to the new limitations.
Regarding the predictive model, as mapped above, Miroshnikov can utilize a CNN model, which contains the recited model features.
Additionally, Dalli in Paragraph 81 provides pruning functionality, which can supplement the capability of limiting interactions.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure:
Srinivasan et al. 20220237208 A1
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Applicant is required under 37 CFR 1.111(c) to consider these references fully when responding to this action.
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice.
Applicant is reminded Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD KEATON whose telephone number is 571-270-1697. The examiner can normally be reached 9:30am to 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor MICHELLE BECHTOLD can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHERROD L KEATON/ Primary Examiner, Art Unit 2148
2-18-2026