DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 remain pending and are ready for examination.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/26/2023 and 05/09/2024 have been received. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 8-10 and 12-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ROMANIELLO et al., U.S. Pub. No. US 20230222378 A1 (hereinafter "ROMANIELLO").
Regarding claim 8, ROMANIELLO discloses one or more non-transitory computer-readable media that includes stored thereon computer-executable instructions that when executed by at least a processor of a computer system (fig. 6) cause the computer system to: accessing outcomes that were generated by an ML tool for a set of transactions, wherein the transactions include at least a first set of transactions associated with a first demographic category and a second set of transactions associated with a second demographic category (see paragraphs [0006, 0029-0030], wherein receiving an evaluation dataset for the machine learning model, the evaluation dataset comprising, for each entity in a group of entities: (i) an ordered set of attribute values for the entity, each attribute value corresponding to a respective attribute in a set of attributes (corresponding to demographic categories) that is common for all of the entities in the group of entities, and (ii) an outcome prediction generated for the entity by the machine learning model based on the ordered set of attribute values for the entity, wherein the outcome prediction generated for each entity is either a first outcome or a second outcome); train a bias detection model on the first set of transactions associated with the first demographic category, wherein the bias detection model is trained to estimate outcomes that are consistent with the outcomes that were generated for the first set of transactions associated with the first demographic category (see paragraphs [0006, 0029-0030], wherein running an optimization process to compute importance values indicating respective influences of the attributes on a probability of the machine learning model predicting a first outcome); input the second set of transactions associated with the second demographic category into the bias detection model to generate outcome estimates (see paragraphs [0006, 0029-0030], wherein running an optimization process to compute importance values indicating respective influences of the attributes on a probability of the machine learning model predicting a first outcome); determine a status of bias for the ML tool based on a dissimilarity between the outcome estimates for the second set of transactions generated by the bias detection model and the outcomes generated for the second set of transactions by the ML tool (see paragraphs [0029-0030], wherein the system evaluates the model using a dataset that includes actual outcome predictions and attribute tensors (e.g., gender) for the entities. It splits entities into first and second sub-groups based on these demographic attributes. Unfairness is detected based on the differences (dissimilarity) between the relative quantities of preferred outcome predictions for the different sub-groups); and generate an electronic alert that reports the status of bias for the ML tool (see paragraphs [0029-0030, 0066, 0090], wherein the special indication of unfairness corresponds to the alert as claimed).

Regarding claim 9, ROMANIELLO discloses wherein the instructions to determine a status of bias for the ML tool further cause the computer to: execute a fault detection test on residuals between the outcome estimates and the outcomes for the second set of test transactions associated with the second demographic category to produce a detection index that represents the dissimilarity (see paragraphs [0006-0008, 0029-0030], wherein the system evaluates the model using a dataset that includes actual outcome predictions and attribute tensors (e.g., gender) for the entities. It splits entities into first and second sub-groups based on these demographic attributes.
Unfairness is detected based on the differences (dissimilarity) between the relative quantities of preferred outcome predictions for the different sub-groups); evaluate whether the detection index satisfies either of (i) a first acceptance threshold for detecting a presence of bias and (ii) a second acceptance threshold for confirming an absence of bias (see paragraphs [0006-0008, 0029-0030], wherein the system evaluates the model using a dataset that includes actual outcome predictions and attribute tensors (e.g., gender) for the entities. It splits entities into first and second sub-groups based on these demographic attributes. Unfairness is detected based on the differences (dissimilarity) between the relative quantities of preferred outcome predictions for the different sub-groups); and setting the status of bias to indicate that there is bias in the ML tool when the first acceptance threshold is satisfied and setting the status of bias to indicate that there is no bias in the ML tool when the second acceptance threshold is satisfied (see paragraphs [0006-0008, 0029-0030] and fig. 3).

Regarding claim 10, ROMANIELLO discloses wherein the instructions to determine a status of bias for the ML tool further cause the computer to: access a first confidence factor for detection of bias (see paragraphs [0067-0068]); setting the first acceptance threshold based on the first confidence factor for detection of bias (see paragraphs [0033, 0067-0068]); access a second confidence factor for confirmation of absence of bias (see paragraphs [0033, 0067-0068]); setting the second acceptance threshold based on the second confidence factor for confirmation of absence of bias (see paragraphs [0033, 0067-0068]).
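The dual-threshold residual test recited in claims 8-10 can be sketched as follows. This is an illustrative sketch only, with hypothetical data, a trivial mean-rate "bias detection model," and made-up threshold values; it is not ROMANIELLO's actual implementation.

```python
import statistics

def train_bias_model(group_a_outcomes):
    """'Train' a trivial bias detection model on the first demographic
    category by learning its favorable-outcome rate (hypothetical stand-in
    for a real estimator)."""
    return statistics.mean(group_a_outcomes)

def detection_index(model_rate, group_b_outcomes):
    """Detection index: magnitude of the mean residual between the model's
    estimates and the ML tool's outcomes for the second category."""
    residuals = [model_rate - y for y in group_b_outcomes]
    return abs(statistics.mean(residuals))

# Hypothetical outcomes: 1 = favorable, 0 = unfavorable.
group_a = [1] * 70 + [0] * 30   # 70% favorable for the first category
group_b = [1] * 40 + [0] * 60   # 40% favorable for the second category

model_rate = train_bias_model(group_a)
index = detection_index(model_rate, group_b)

# Two acceptance thresholds, each set from a (hypothetical) confidence
# factor as in claim 10.
BIAS_THRESHOLD = 0.10      # first threshold: presence of bias
NO_BIAS_THRESHOLD = 0.02   # second threshold: absence of bias

if index >= BIAS_THRESHOLD:
    status = "bias detected"
elif index <= NO_BIAS_THRESHOLD:
    status = "no bias"
else:
    status = "indeterminate"

print(status)  # index ≈ |0.70 - 0.40| = 0.30 -> "bias detected"
```

Note that with two separate thresholds the test has three outcomes, matching the claims: bias confirmed, absence of bias confirmed, or neither threshold satisfied.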
Regarding claim 12, ROMANIELLO discloses wherein the instructions further cause the computer to: identify a trend regarding detection of bias in the ML tool (see ROMANIELLO paragraphs [0007, 0059, 0065], wherein identify the attributes that are causing the ML model 180 to discriminate); and include the trend in the electronic alert (see paragraphs [0029-0030, 0066, 0090], wherein the special indication of unfairness corresponds to the alert as claimed).

Regarding claim 13, ROMANIELLO discloses wherein the instructions further cause the computer to: where bias is detected in the ML tool, determine a root cause of the bias (see ROMANIELLO paragraphs [0007, 0059, 0065], wherein identify the attributes that are causing the ML model 180 to discriminate); and include the root cause in the electronic alert (see paragraphs [0029-0030, 0066, 0090], wherein the special indication of unfairness corresponds to the alert as claimed).

Regarding claim 14, ROMANIELLO discloses wherein the instructions further cause the computer to, before assigning outcomes for test transactions with an ML tool that is being checked for bias, detect a change to the machine learning tool (see abstract and fig. 3).

Regarding claim 15, ROMANIELLO discloses wherein the test transactions that belong to the first demographic category and the test transactions that belong to the second demographic category are discrete from one another (see ROMANIELLO paragraph [0008], wherein the first sub-group excludes any entities that are members of the second sub-group).

Claims 16-19 are rejected under the same rationale as claims 8-9 and 15.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over ROMANIELLO et al., U.S. Pub. No. US 20230222378 A1 (hereinafter "ROMANIELLO") in view of Black et al., U.S. Patent No. US 7386426 B1 (hereinafter "Black").
Regarding claim 1, ROMANIELLO discloses a computer-implemented method, comprising: generating outcomes for transactions with a machine learning (ML) tool (see paragraphs [0029-0030, 0065], wherein the ML model generates outcomes); comparing actual values for a test subset of the outcomes with estimated values generated for the test subset of the outcomes by a machine learning model, wherein the test subset is associated with a test value for a demographic classification, and wherein the machine learning model is trained with a reference subset of the outcomes that is associated with a reference value for the demographic classification (see paragraphs [0029-0030], wherein the system evaluates the model using a dataset that includes actual outcome predictions and attribute tensors (e.g., gender) for the entities. It splits entities into first and second sub-groups based on these demographic attributes. Unfairness is detected based on the differences (dissimilarity) between the relative quantities of preferred outcome predictions for the different sub-groups); detecting the unfairness in the machine learning tool based on dissimilarity between the actual values and the estimated values for the test subset of the outcomes (see paragraphs [0029-0030], wherein the system evaluates the model using a dataset that includes actual outcome predictions and attribute tensors (e.g., gender) for the entities. It splits entities into first and second sub-groups based on these demographic attributes. Unfairness is detected based on the differences (dissimilarity) between the relative quantities of preferred outcome predictions for the different sub-groups); and generating an electronic alert that the ML tool generates outcomes that are unfair with respect to the demographic classification (see paragraphs [0029-0030, 0066, 0090], wherein the special indication of unfairness corresponds to the alert as claimed).
ROMANIELLO fails to explicitly disclose comparing actual values for a test subset of the outcomes with estimated values generated for the test subset of the outcomes by a machine learning model. Black discloses comparing actual values for a test subset of the outcomes with estimated values generated for the test subset of the outcomes by a machine learning model (see col. 4, lines 10-15, wherein the model estimates the true process values, as functional sensors would provide them. The residuals between these estimates and the actual measurements (from sensors of unknown condition) can then be monitored using the sequential probability ratio test, a statistical decision technique). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of ROMANIELLO to include the missing limitation, as taught by Black, since doing so would provide continuous and highly sensitive detection of anomalies, drifts and system failures (Black; col. 2, lines 13-16).

Regarding claim 2, ROMANIELLO in view of Black further disclose wherein comparing actual values further comprises executing a fault detection test on residuals between the actual values and the estimated values to produce a detection index (Black, see col. 2, lines 51-57, col. 8, lines 29-42, and col. 13, lines 56-60, wherein summing module 16 determines the difference (residual) between measurement and model prediction and forwards this result to the sequential probability ratio test (SPRT) module 18. SPRT is a statistical decision technique that evaluates the difference, or residual, obtained between measurement and model prediction, and determines when drift or failure has occurred. The SPRT has previously been applied to detection in neural network model based ICVSs and in signal monitoring applications incorporating other model techniques.
A full description of its role in the ICVS is not provided here, but it is known to those familiar with the art. Lastly, output module 20 provides the results of the comparison between measured values and model prediction. See also ROMANIELLO paragraph [0087]); and wherein detecting the unfairness in the machine learning tool further comprises determining that the detection index satisfies an acceptance threshold for the presence of bias (Black, see col. 2, lines 51-57, col. 8, lines 29-42, and col. 13, lines 56-60, wherein summing module 16 determines the difference (residual) between measurement and model prediction and forwards this result to the sequential probability ratio test (SPRT) module 18. SPRT is a statistical decision technique that evaluates the difference, or residual, obtained between measurement and model prediction, and determines when drift or failure has occurred. The SPRT has previously been applied to detection in neural network model based ICVSs and in signal monitoring applications incorporating other model techniques. A full description of its role in the ICVS is not provided here, but it is known to those familiar with the art. Lastly, output module 20 provides the results of the comparison between measured values and model prediction. See also ROMANIELLO paragraph [0087]).

Regarding claim 3, ROMANIELLO in view of Black further disclose setting the acceptance threshold for the presence of bias based on a pre-specified confidence factor (Black, see col. 2, lines 51-57, col. 8, lines 29-42, and col. 13, lines 56-60, wherein summing module 16 determines the difference (residual) between measurement and model prediction and forwards this result to the sequential probability ratio test (SPRT) module 18. SPRT is a statistical decision technique that evaluates the difference, or residual, obtained between measurement and model prediction, and determines when drift or failure has occurred.
The SPRT has previously been applied to detection in neural network model based ICVSs and in signal monitoring applications incorporating other model techniques. A full description of its role in the ICVS is not provided here, but it is known to those familiar with the art. Lastly, output module 20 provides the results of the comparison between measured values and model prediction. See also ROMANIELLO paragraph [0087]).

Regarding claim 4, ROMANIELLO in view of Black further disclose determining a root cause of the unfairness (see ROMANIELLO paragraphs [0007, 0059, 0065], wherein identify the attributes that are causing the ML model 180 to discriminate); detecting that the unfairness was introduced to the ML tool by a training operation (see ROMANIELLO paragraphs [0007, 0059, 0065], wherein identify the attributes that are causing the ML model 180 to discriminate); and automatically adjusting the ML tool with respect to the root cause to mitigate the unfairness (see ROMANIELLO paragraphs [0007, 0059, 0065], wherein adjusting the ML by having the assessment module send an instruction to an ML training service to retrain the model when unfairness is found).

Regarding claim 5, ROMANIELLO in view of Black further disclose before generating outcomes for transactions with the machine learning tool: detecting a change to the machine learning tool; and in response to detecting the change to the machine learning tool, performing the steps of generating outcomes for transactions with the machine learning tool, comparing actual values for a test subset of the outcomes with estimated values generated for the test subset of the outcomes, detecting the unfairness in the machine learning tool, and generating an electronic alert (Black, see col. 4, lines 19-24).
Regarding claim 6, ROMANIELLO in view of Black further disclose wherein the test subset of the outcomes and the reference subset of the outcomes are discrete from one another (see ROMANIELLO paragraph [0008], wherein the first sub-group excludes any entities that are members of the second sub-group).

Regarding claim 7, ROMANIELLO in view of Black further disclose wherein the machine learning model is a multivariate state estimation technique model (Black, see col. 4, lines 37-59).

Regarding claim 11, ROMANIELLO fails to explicitly disclose the limitation below. Black discloses wherein the fault detection test is a sequential probability ratio test, further comprising generating a log-likelihood ratio between likelihood of presence of bias and likelihood of the absence of bias to be the detection index (Black, see abstract and claim 17). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of ROMANIELLO to include the missing limitation, as taught by Black, since doing so would provide continuous and highly sensitive detection of anomalies, drifts and system failures (Black; col. 2, lines 13-16).

Regarding claim 20, ROMANIELLO fails to explicitly disclose the limitation below. Black discloses wherein the bias detection model is a non-linear, non-parametric regression model (Black, see abstract and col. 4, lines 37-59). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of ROMANIELLO to include the missing limitation, as taught by Black, since doing so would provide continuous and highly sensitive detection of anomalies, drifts and system failures (Black; col. 2, lines 13-16).
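The log-likelihood-ratio detection index recited in claim 11 can be sketched as a standard Wald sequential probability ratio test over a stream of binary outcomes. This is an illustrative sketch only; the Bernoulli rates, error rates, and data are hypothetical and are not taken from Black's disclosure.

```python
import math

def sprt_log_likelihood_ratio(observations, p_no_bias=0.5, p_bias=0.7):
    """Cumulative log-likelihood ratio between a 'bias present' hypothesis
    (favorable-outcome rate p_bias) and a 'bias absent' hypothesis
    (rate p_no_bias) for a Bernoulli stream of outcomes."""
    llr = 0.0
    for y in observations:
        p1 = p_bias if y else 1 - p_bias        # likelihood under bias
        p0 = p_no_bias if y else 1 - p_no_bias  # likelihood under no bias
        llr += math.log(p1 / p0)
    return llr

# Wald's decision thresholds from the desired error rates:
# alpha = false-alarm probability, beta = missed-detection probability.
alpha, beta = 0.01, 0.01
upper = math.log((1 - beta) / alpha)   # cross above -> accept "bias present"
lower = math.log(beta / (1 - alpha))   # cross below -> accept "bias absent"

stream = [1] * 14 + [0] * 6            # hypothetical: 70% favorable outcomes
llr = sprt_log_likelihood_ratio(stream)

if llr >= upper:
    decision = "bias detected"
elif llr <= lower:
    decision = "no bias"
else:
    decision = "continue sampling"

print(round(llr, 3), decision)
```

Because the cumulative ratio is compared against two boundaries, the test either commits to one hypothesis or keeps sampling, which mirrors the claim's use of the log-likelihood ratio itself as the detection index evaluated against acceptance thresholds.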
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHER N ALGIBHAH whose telephone number is (571) 272-0718. The examiner can normally be reached on Monday-Thursday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached on (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-1264.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHER N ALGIBHAH/
Primary Examiner, Art Unit 2165