Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The present application is being examined on the basis of the claims filed on October 22, 2021 (10-22-2021). Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 10-22-2021, 12-31-2021, 3-6-2023, and 11-12-2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings are objected to because FIG. 8 and FIG. 9 include text that crosses or mingles with the lines, interfering with their comprehension [see 37 CFR 1.84(p)(3)]. The affected labels are:
FIG. 8: Control Plane DMZ Tier 820, Control Plane App Tier 824, Data Plane DMZ Tier 848, Data Plane App Tier 846, Container Egress VCN 868(1), Container Egress VCN 868(2), Container Egress VCN 868(N).
FIG. 9: Control Plane DMZ Tier 920, Control Plane App Tier 924, Control Plane Data Tier 928, Data Plane DMZ Tier 948, Data Plane App Tier 946, 964(1), 964(2), 964(N).
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
It should avoid using phrases that can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc.
The abstract of the disclosure is objected to because it uses phrases that can be implied: “A system is disclosed that is configured to…”. Applicant is advised to amend to: “A system that is configured to…” A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The use of the following terms, which are trade names or marks used in commerce, has been noted in this application:
In [0119], iOS®, Windows®, Android®, BlackBerry®, Google Chrome®, Microsoft Xbox®
In [0157], Oracle Cloud Infrastructure® (OCI®)
Each term should be accompanied by generic terminology; furthermore, each term should be capitalized wherever it appears or, where appropriate, be accompanied by a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner that might adversely affect their validity as commercial marks.
The disclosure is objected to because it contains embedded hyperlinks and/or other forms of browser-executable code in para. [0082]. Applicant is required to delete the embedded hyperlinks and/or other forms of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.
Claim Objections
Claims 4 and 20 are objected to because of the following informalities:
In claim 4, “one or more bias values to a bias to a bias threshold” should read “one or more bias values to a bias threshold”.
In claim 20, “[the] non-transitory computer-readable medium of claim 18” should read “[the] non-transitory computer-readable medium of claim 19”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-11 are directed to a computer-implemented method [process]. Claims 12-18 are directed to a system [machine]. Claims 19-20 are directed to a non-transitory computer-readable medium [article of manufacture].
Regarding Claim 1:
Step 2A, Prong 1: The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgement, or opinion).
for a trained model to be evaluated, determining…a set of model attributes for the trained model (determining a set of model attributes for a trained model to be evaluated can be performed in the human mind)
generating,…based upon the set of model attributes, a first synthetic dataset to be used for a first bias check to be performed for the trained model, the first bias check configured to evaluate the trained model with respect to a first bias type, the first synthetic dataset comprising a plurality of data points (generating synthetic data, i.e., artificial data that mimics real-world data, can be performed in the human mind or with pen and paper)
generating…a first bias result for the first bias type based upon the first prediction data (generating a bias result can be performed in the human mind)
generating…a bias evaluation report for the trained model, wherein the bias evaluation report comprises information indicative of the first bias result (generating an evaluation report can be performed in the human mind with pen and paper)
As drafted, under their broadest reasonable interpretation (BRI), in view of the specification, the above limitations cover concepts performed in the human mind (observation, evaluation, judgement, or opinion). Given a sufficiently small set of data, nothing in the claim prohibits this process from being performed mentally or with pen and paper.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
…by a computing system…
…by the computing system and…
generating, using the trained model, first prediction data for the first synthetic dataset, the first prediction data comprising a first plurality of predicted values generated by the trained model for the plurality of data points in the first synthetic dataset
…by the computing system…
…by the computing system…
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
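By way of illustration only, the following hypothetical sketch (every name, value, and decision rule is invented for this discussion and is drawn from neither the application nor the record) shows how the claimed sequence, as characterized above, reduces to generic compute-and-compare steps at a scale that could be carried out mentally or with pen and paper:

# Hypothetical, self-contained sketch; all names and data below are invented.
def determine_model_attributes(model):
    # Observing the model's input features and task type (a mental-process step).
    return {"features": ["income", "group"], "task": "binary_classification"}

def generate_synthetic_dataset(attributes, n=8):
    # Enumerating a handful of data points that vary a protected attribute
    # (a scale at which pen and paper would suffice).
    return [{"income": 10 * i, "group": i % 2} for i in range(n)]

def toy_trained_model(point):
    # Stand-in for the trained model under evaluation: a fixed decision rule.
    return 1 if point["income"] >= 40 else 0

def evaluate_bias(model):
    attrs = determine_model_attributes(model)
    data = generate_synthetic_dataset(attrs)
    preds = [model(p) for p in data]  # "generating, using the trained model, first prediction data"
    # First bias result: difference in positive-prediction rates between groups.
    rate = lambda g: sum(y for p, y in zip(data, preds) if p["group"] == g) / (len(data) // 2)
    bias_result = rate(1) - rate(0)
    # A minimal "bias evaluation report" indicative of the first bias result.
    return {"bias_type": "group_disparity", "first_bias_result": bias_result}

print(evaluate_bias(toy_trained_model))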
Regarding Claim 2:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation remains directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
wherein the first bias result comprises one or more bias values generated based on the first prediction data (generating a bias result comprising values can be performed in the human mind)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 3:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 2.
The following limitation remains directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
wherein the bias evaluation report comprises the first bias result and the one or more bias values (generating an evaluation report comprising results and values can be performed in the human mind with pen and paper)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is directed to insignificant extra-solution activity to the judicial exception [see MPEP 2106.05(g)].
outputting the bias evaluation report
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is directed to receiving or transmitting data over a network. The courts (as per Intellectual Ventures v. Symantec, 838 F.3d 1307, 1321; 120 USPQ2d 1353, 1362 (Fed. Cir. 2016)) have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity to the judicial exception [see MPEP 2106.05(d) II.].
outputting the bias evaluation report
Regarding Claim 4:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 2.
The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
comparing at least a bias value of the one or more bias values to a bias to a bias threshold (comparing values to a threshold is mentally performable)
determining, based on the comparison, whether to accept or reject the trained model from inclusion in a group of trained models (determining whether to accept or reject is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 5:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
generating,…based upon the set of model attributes, a second synthetic dataset to be used for a second bias check to be performed for the trained model, the second bias check configured to evaluate the trained model with respect to a second bias type, the second synthetic dataset comprising a plurality of data points (generating another synthetic dataset, i.e., artificial data that mimics real-world data, can be performed in the human mind or with pen and paper)
generating…a second bias result for the first bias type based upon the first prediction data (generating a bias result can be performed in the human mind)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
…by the computing system and…
generating, using the trained model, a second set of predictions for the second synthetic dataset
…by the computing system…
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 6:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 5.
The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
generating…a bias score based on the first bias result and the second bias result (generating a score based on two results is mentally performable)
determining, based on the generated bias score, whether to accept or reject the trained model from inclusion in a group of trained models (determining whether to accept or reject is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
…by the computing system…
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitation amounts to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 7:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
wherein determining the set of model attributes comprises processing the trained model to determine at least one model attribute in the set of model attributes (given a sufficiently small model, processing the model to determine an attribute is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 8:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
wherein determining the set of model attributes comprises determining at least one model attribute in the set of model attributes based upon analysis of training data used for training and generating the trained model (analyzing training data is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 9:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
determining training data used for training and generating the trained model (determining training data is mentally performable)
generating…a second bias result for the first bias type based on the training data, wherein generating the first bias result is further based on the generated second bias result (generating a result is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
…by the computing system…
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitation amounts to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 10:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
wherein generating the first synthetic dataset comprises generating,…based on the set of model attributes for the trained model…, the first synthetic dataset (generating synthetic data is mentally performable)
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
…by the computing system and…using a generative neural network learning model…
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 11:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
the trained model is a neural network
the prediction data further comprises at least one value generated by an output layer of the neural network
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 12:
Claim 12 corresponds to claim 1.
Step 2A, Prong 1: Claim 12 recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 12 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
A system comprising: one or more computing devices; one or more processors; and a memory including instructions that, when executed by the one or more processors, cause the system to perform processing comprising:
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 12 at this step mirrors that of claim 1.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claims 13-18:
Claims 13-18 correspond to claims 5-10. In particular, claim 13 corresponds to claim 5, claim 14 to claim 6, claim 15 to claim 7, claim 16 to claim 8, claim 17 to claim 9, and claim 18 to claim 10.
Step 2A, Prong 1: Claims 13-18 recite the same abstract ideas as in claims 5-10.
Step 2A, Prong 2: There are no additional elements in these claims that integrate the judicial exception into a practical application. The analysis of claims 13-18 at this step mirrors that of claims 5-10.
Step 2B: There are no additional elements in these claims that amount to significantly more than the judicial exception. The analysis of claims 13-18 at this step mirrors that of claims 5-10.
Regarding Claim 19:
Claim 19 corresponds to claim 1.
Step 2A, Prong 1: Claim 19 recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 19 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
A non-transitory computer-readable medium storing a plurality of instructions executable by one or more processors, and when executed by the one or more processors cause the one or more processors to perform processing comprising:
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 19 at this step mirrors that of claim 1.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations amount to no more than using generic computer components to implement the exception. Implementing the abstract idea by merely applying it using generic computer components, without more, does not amount to an inventive concept.
Regarding Claim 20:
Claim 20 corresponds to claim 2.
Step 2A, Prong 1: Claim 20 recites the same abstract ideas as in claim 2.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 20 at this step mirrors that of claim 2.
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 20 at this step mirrors that of claim 2.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 7-13, and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Balakrishnan et al. (“Towards Causal Benchmarking of Bias in Face Analysis Algorithms”, 2020), hereinafter Bala.
Regarding Claim 1:
Bala discloses:
A computer-implemented method, comprising:
Bala, pg. 1, “Current methods to measure algorithmic bias in computer vision, which are based on observational datasets, are inadequate for this task because they conflate algorithmic bias with dataset bias. To address this problem, we develop an experimental method for measuring algorithmic bias of face analysis algorithms, which manipulates directly the attributes of interest, e.g., gender and skin tone, in order to reveal causal links between attribute variation and performance change.”
Bala discloses an experimental method in the context of computer vision [A computer-implemented method] for measuring algorithmic bias.
for a trained model to be evaluated, determining, by a computing system, a set of model attributes for the trained model
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C.”
Pg. 8, “More formally, let there be a list of N_a image attributes of interest (age, gender, skin color, etc.).”
On pg. 10, Bala discloses performing bias analysis on a target attribute classifier C [for a trained model to be evaluated], and pg. 8 discloses using a list of images with various attributes of interest [determining, by a computing system, a set of model attributes for the trained model].
generating, by the computing system and based upon the set of model attributes, a first synthetic dataset
Bala, pg. 7, “We assume a black-box generator G that can transform a latent vector z ∈ R^D into an image I = G(z)…In our study, G is the generator of a pre-trained, publicly available state of the art GAN (‘StyleGAN2’)…to synthesize image grids, i.e., transects, spanning arbitrarily many attributes.”
On pg. 7, Bala discloses using generator G, a GAN (generative adversarial network), to synthesize image grids of many attributes [generating, by the computing system and based upon the set of model attributes, a first synthetic dataset].
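For concreteness, the cited passage can be pictured with the following hedged sketch (hypothetical code: G here is a toy linear stand-in, not the actual StyleGAN2 interface, and the latent dimension, sweep direction, and grid size are all invented):

import numpy as np

# Toy stand-in for a black-box generator G: R^D -> image, so the sketch is
# self-contained; a real system would wrap a pre-trained GAN such as StyleGAN2.
D = 16
rng = np.random.default_rng(0)
W = rng.normal(size=(8 * 8, D))

def G(z):
    # Map a latent vector z in R^D to a toy 8x8 grayscale "image" I = G(z).
    return (W @ z).reshape(8, 8)

# Synthesize a small grid ("transect") by sweeping a latent direction that is
# assumed, for illustration, to control one attribute of interest.
base = rng.normal(size=D)
direction = np.eye(D)[0]
transect = [G(base + t * direction) for t in np.linspace(-2.0, 2.0, 5)]
print(len(transect), transect[0].shape)  # 5 (8, 8)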
to be used for a first bias check to be performed for the trained model, the first bias check configured to evaluate the trained model with respect to a first bias type
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C.”
On pg. 10, Bala discloses using transect images to perform bias analysis [to be used for a first bias check] on attribute classifier C [to be performed for the trained model]. Additionally, Bala discloses a target attribute of interest [the first bias check configured to evaluate the trained model with respect to a first bias type].
the first synthetic dataset comprising a plurality of data points
Bala, pg. 8, “More formally, let there be a list of N_a image attributes of interest (age, gender, skin color, etc.). As explained below, we generate an annotated training dataset D_z = {(z_i, a_i)}, i = 1, …, N², where a_i is a vector of scores, one for each attribute, for generated image G(z_i).”
On pg. 8, Bala discloses that the generated images with attributes of interest such as gender [the first synthetic dataset] each have a vector a_i of scores for the attributes of interest [comprising a plurality of data points].
generating, using the trained model, first prediction data for the first synthetic dataset, the first prediction data comprising a first plurality of predicted values generated by the trained model for the plurality of data points in the first synthetic dataset
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C…We denote…C’s prediction by ŷ_i.”
On pg. 10, Bala discloses a target classifier C to generate prediction ŷ_i [generating, using the trained model, first prediction data] using transect images that contain attributes of interest [for the first synthetic dataset, the first prediction data comprising a first plurality of predicted values generated by the trained model for the plurality of data points in the first synthetic dataset].
generating, by the computing system, a first bias result for the first bias type based upon the first prediction data
Bala, pg. 12, “Our first analysis strategy is to simply compare C’s error rate across different subgroups in the population. Let E_j(s) denote the average error of C over test samples for which covariate j is equal to s ∈ {0, 1}: …the quantity E_j(1) − E_j(0) is a good estimate of the ‘average treatment effect’ (ATE)…of covariate j on e, or the average change in e over all examples when covariate j is flipped from 0 to 1, with other covariates fixed.”
Bala discloses the error difference E_j(1) − E_j(0) as an estimate of the ATE of a covariate. This ATE estimate corresponds to the generated first bias result, and the covariate corresponds to the first bias type. Additionally, the first bias result is based upon the first prediction data because it is computed from the errors of classifier C’s predictions.
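To make the cited quantity concrete, a minimal sketch follows (the samples and error values are invented; E_j(s) and the ATE estimate follow the definitions quoted above):

# Invented per-sample errors of classifier C, grouped by covariate j in {0, 1}.
samples = [
    {"covariate_j": 0, "error": 0.0},
    {"covariate_j": 0, "error": 1.0},
    {"covariate_j": 1, "error": 1.0},
    {"covariate_j": 1, "error": 1.0},
]

def E(s):
    # E_j(s): average error of C over test samples whose covariate j equals s.
    errs = [x["error"] for x in samples if x["covariate_j"] == s]
    return sum(errs) / len(errs)

ate_estimate = E(1) - E(0)  # the "first bias result" under the examiner's mapping
print(ate_estimate)         # 0.5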
generating, by the computing system, a bias evaluation report for the trained model, wherein the bias evaluation report comprises information indicative of the first bias result
Bala, pg. 24, “Figure 16: Errors by gender and age group on our transect images. The two top plots were obtained by using a decision threshold equal to 0.5, and show a prevalence of female errors. The bottom two plots were obtained with a threshold equal to 0.8, chosen to minimize overall error. There is a non-uniform influence of age on errors. Both models tend to have lower errors for young to middle-aged adults. The differences in errors between genders are fairly consistent for adults, but differ for children, teenagers and seniors, illustrating a combined age-gender bias in the algorithms.”
Bala discloses plotting the model errors [generating, by the computing system, a bias evaluation report for the trained model] for their transect images, with a focus on the errors involving gender and age attributes [wherein the bias evaluation report comprises information indicative of the first bias result].
Regarding Claim 2:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
wherein the first bias result comprises one or more bias values generated based on the first prediction data
Bala, pg. 12, “Our first analysis strategy is to simply compare C’s error rate across different subgroups in the population. Let E_j(s) denote the average error of C over test samples for which covariate j is equal to s ∈ {0, 1}: …the quantity E_j(1) − E_j(0) is a good estimate of the ‘average treatment effect’ (ATE)…of covariate j on e, or the average change in e over all examples when covariate j is flipped from 0 to 1, with other covariates fixed. For example, the ATE of the ‘dark skin’ covariate captures the average change in C’s error when each person’s skin tone is changed from non-dark to dark.”
Pg. 4, “Figure 2:… Human annotations on the transects provide generator-independent ground truth to be compared with algorithm output to measure algorithm errors. Attribute specific bias measurements are obtained by comparing the algorithm’s predictions with human annotations as the attributes are varied. The depicted example may study the question: Does hair length, skin tone, or any combination of the two have a causal effect on classifier errors?”
On pg. 12, Bala discloses the ATE estimate, E_j(1) − E_j(0), which corresponds to the first bias result comprising one or more bias values because it is computed from errors obtained by comparing the model’s predictions with ground-truth annotations. This is stated on pg. 4, Figure 2, which discloses that attribute-specific bias measurements are obtained by comparing the ground truth with the model’s predictions.
Regarding Claim 3:
As discussed above, Bala teaches [the] computer-implemented method of claim 2, and further discloses:
wherein the bias evaluation report comprises the first bias result and the one or more bias values, and the method further comprising outputting the bias evaluation report
Bala, pg. 24, “Figure 16: Errors by gender and age group on our transect images. The two top plots were obtained by using a decision threshold equal to 0.5, and show a prevalence of female errors. The bottom two plots were obtained with a threshold equal to 0.8, chosen to minimize overall error. There is a non-uniform influence of age on errors. Both models tend to have lower errors for young to middle-aged adults. The differences in errors between genders are fairly consistent for adults, but differ for children, teenagers and seniors, illustrating a combined age-gender bias in the algorithms.”
Bala discloses plotting the model errors [outputting the bias evaluation report] for their transect images with a focus on the errors involving gender [the bias evaluation report comprises the first bias result and the one or more bias values] and age attributes.
Regarding Claim 5:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
generating, by the computing system and based upon the set of model attributes, a second synthetic dataset
Bala, pg. 24, “Figure 16: Errors by gender and age group on our transect images. The two top plots were obtained by using a decision threshold equal to 0.5, and show a prevalence of female errors. The bottom two plots were obtained with a threshold equal to 0.8, chosen to minimize overall error. There is a non-uniform influence of age on errors. Both models tend to have lower errors for young to middle-aged adults. The differences in errors between genders are fairly consistent for adults, but differ for children, teenagers and seniors, illustrating a combined age-gender bias in the algorithms.”
Following FIG. 16, Bala discloses measuring gender and age bias with the datasets CelebA and FairFace [generating, by the computing system and based upon the set of model attributes, a second synthetic dataset].
to be used for a second bias check to be performed for the trained model, the second bias check configured to evaluate the trained model with respect to a second bias type
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C.”
On pg. 10, Bala discloses using transect images to perform bias analysis [to be used for a second bias check] on attribute classifier C [to be performed for the trained model]. Additionally, Bala discloses a target attribute of interest [the second bias check configured to evaluate the trained model with respect to a second bias type].
the second synthetic dataset comprising a plurality of data points
Bala, pg. 8, “More formally, let there be a list of N_a image attributes of interest (age, gender, skin color, etc.). As explained below, we generate an annotated training dataset D_z = {(z_i, a_i)}, i = 1, …, N², where a_i is a vector of scores, one for each attribute, for generated image G(z_i).”
On pg. 8, Bala discloses that the generated images with attributes of interest [the second synthetic dataset] each have a vector a_i of scores for the attributes of interest [comprising a plurality of data points].
generating, using the trained model, a second set of predictions for the second synthetic dataset
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C…We denote…C’s prediction by ŷ_i.”
On pg. 10, Bala discloses a target classifier C to generate prediction ŷ_i [generating, using the trained model, a second set of predictions] using transect images that contain attributes of interest [for the second synthetic dataset].
generating, by the computing system, a second bias result for the first bias type based upon the first prediction data
Bala, pg. 24, “Figure 16: Errors by gender and age group on our transect images. The two top plots were obtained by using a decision threshold equal to 0.5, and show a prevalence of female errors. The bottom two plots were obtained with a threshold equal to 0.8, chosen to minimize overall error. There is a non-uniform influence of age on errors. Both models tend to have lower errors for young to middle-aged adults. The differences in errors between genders are fairly consistent for adults, but differ for children, teenagers and seniors, illustrating a combined age-gender bias in the algorithms.”
Following FIG. 16, Bala discloses measuring gender bias with the datasets CelebA and FairFace [generating…a second bias result for the first bias type based upon the first prediction data].
Regarding Claim 7:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
wherein determining the set of model attributes comprises processing the trained model to determine at least one model attribute in the set of model attributes
Bala, pg. 18, “We trained linear regression models to predict age, gender, skin color and hair length attributes from style vectors. For the remaining attributes — facial hair, makeup and smiling — we found that binarizing the ranges and training a linear SVM classifier works best.”
Bala discloses that certain attributes are better suited for training and generating certain models [processing the trained model to determine at least one model attribute in the set of model attributes]. For example, age, gender, skin color, and hair length were used for linear regression models, and facial hair, makeup, and smiling were used for a linear SVM classifier.
Regarding Claim 8:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
wherein determining the set of model attributes comprises determining at least one model attribute in the set of model attributes based upon analysis of training data used for training and generating the trained model
Bala, pg. 18, “We trained linear regression models to predict age, gender, skin color and hair length attributes from style vectors. For the remaining attributes — facial hair, makeup and smiling — we found that binarizing the ranges and training a linear SVM classifier works best.”
Bala discloses using certain attributes for training and generating certain models [determining at least one model attribute in the set of model attributes based upon analysis of training data used for training and generating the trained model]. For example, age, gender, skin color, and hair length were used for linear regression models, and facial hair, makeup, and smiling were used for a linear SVM classifier.
Regarding Claim 9:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
determining training data used for training and generating the trained model
Bala, pg. 8, “More formally, let there be a list of N_a image attributes of interest (age, gender, skin color, etc.). As explained below, we generate an annotated training dataset D_z = {(z_i, a_i)}, i = 1, …, N², where a_i is a vector of scores, one for each attribute, for generated image G(z_i).”
Bala discloses an annotated training dataset [determining training data used for training and generating the trained model].
generating, by the computing system, a second bias result for the first bias type based on the training data, wherein generating the first bias result is further based on the generated second bias result
Bala, pg. 24, “Figure 16: Errors by gender and age group on our transect images. The two top plots were obtained by using a decision threshold equal to 0.5, and show a prevalence of female errors. The bottom two plots were obtained with a threshold equal to 0.8, chosen to minimize overall error. There is a non-uniform influence of age on errors. Both models tend to have lower errors for young to middle-aged adults. The differences in errors between genders are fairly consistent for adults, but differ for children, teenagers and seniors, illustrating a combined age-gender bias in the algorithms.”
Following FIG. 16, Bala discloses measuring gender bias for a first dataset CelebA and a second dataset FairFace [generating, by the computing system, a second bias result for the first bias type based on the training data, wherein generating the first bias result is further based on the generated second bias result].
Regarding Claim 10:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
wherein generating the first synthetic dataset comprises generating, by the computing system and based on the set of model attributes for the trained model and using a generative neural network machine learning model, the first synthetic dataset
Bala, pg. 7, “We assume a black-box generator G that can transform a latent vector z ∈ R^D into an image I = G(z)…In our study, G is the generator of a pre-trained, publicly available state of the art GAN (‘StyleGAN2’)…to synthesize image grids, i.e., transects, spanning arbitrarily many attributes.”
Bala discloses using generator G, a GAN (generative adversarial network) [using a generative neural network machine learning model], to synthesize image grids [generating the first synthetic dataset comprises generating… the first synthetic dataset] of many attributes [based on the set of model attributes for the trained model].
Regarding Claim 11:
As discussed above, Bala teaches [the] computer-implemented method of claim 1, and further discloses:
the trained model is a neural network
Bala, pg. 15, “We trained two research-grade gender classifier models, each using the ResNet-50 architecture.”
Bala discloses the classifiers use ResNet-50 architecture [the trained model is a neural network].
the prediction data further comprises at least one value generated by an output layer of the neural network
Bala, pg. 10, “We assume a target attribute of interest, e.g., gender, and a target attribute classifier C. We will use transect images to perform bias analysis on C…We denote…C’s prediction by ŷ_i.”
Bala discloses using a classifier C to generate prediction ŷ_i, and is interpreted as disclosing that the prediction data comprises at least one value, ŷ_i, generated by an output layer of the neural network, C.
Regarding Claim 12:
Claim 12 is a system claim corresponding to method claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Bala discloses:
A system comprising: one or more computing devices; one or more processors; and a memory including instructions that, when executed by the one or more processors, cause the system to perform processing comprising:
Bala, pg. 1, “Current methods to measure algorithmic bias in computer vision, which are based on observational datasets, are inadequate for this task because they conflate algorithmic bias with dataset bias. To address this problem, we develop an experimental method for measuring algorithmic bias of face analysis algorithms, which manipulates directly the attributes of interest, e.g., gender and skin tone, in order to reveal causal links between attribute variation and performance change.”
Bala discloses an experimental method in the context of computer vision for measuring algorithmic bias. Bala discloses working with computer vision/face analysis algorithms and is interpreted as disclosing a system with one or more processors and memory for execution.
Regarding Claims 13 and 15-18:
Claims 13 and 15-18 are system claims corresponding to method claims 5 and 7-10 and are rejected for at least the same reasons as given in the rejections of claims 5 and 7-10. In particular, claim 13 corresponds to claim 5, claim 15 to claim 7, claim 16 to claim 8, claim 17 to claim 9, and claim 18 to claim 10.
Regarding Claim 19:
Claim 19 is a non-transitory computer-readable medium claim corresponding to method claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Bala discloses:
A non-transitory computer-readable medium storing a plurality of instructions executable by one or more processors, and when executed by the one or more processors cause the one or more processors to perform processing comprising:
Bala, pg. 1, “Current methods to measure algorithmic bias in computer vision, which are based on observational datasets, are inadequate for this task because they conflate algorithmic bias with dataset bias. To address this problem, we develop an experimental method for measuring algorithmic bias of face analysis algorithms, which manipulates directly the attributes of interest, e.g., gender and skin tone, in order to reveal causal links between attribute variation and performance change.”
Bala discloses an experimental method in the context of computer vision for measuring algorithmic bias. Bala discloses working with computer vision/face analysis algorithms and is interpreted as disclosing a non-transitory computer-readable medium storing instructions executable by one or more processors.
Regarding Claim 20:
Claim 20 is a non-transitory computer-readable medium claim corresponding to method claim 2 and is rejected for at least the same reasons as given in the rejection of claim 2.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4, 6, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bala in view of Castiglione et al. (US 20220114399), hereinafter Cast.
Regarding Claim 4:
As discussed above, Bala teaches [the] computer-implemented method of claim 2, but does not explicitly disclose:
comparing at least a bias value of the one or more bias values [to a bias] threshold
determining, based on the comparison, whether to accept or reject the trained model from inclusion in a group of trained models
However, in the same field, analogous art Cast teaches:
comparing at least a bias value of the one or more bias values [to a bias] threshold
Cast, [0025], “Additionally, aggregate metrics may be generated, e.g. L-p norms of the bias indicator values, to assess an overall fairness of the (machine learning) model. For example, bounding the L-infinity norm may ensure all bias indicators are below a given predetermined threshold.”
Cast teaches ensuring bias indicators are below a given predetermined threshold [comparing at least a bias value of the one or more bias values to a bias threshold].
determining, based on the comparison, whether to accept or reject the trained model from inclusion in a group of trained models
Cast, [0032], “The downstream computing system, for example, could be a model selector subsystem that is configured to control routing or selection of various models for use, and it may be configured to not route or not select models having a fairness score greater than a pre-defined threshold if there is another option. In a variant embodiment, the downstream computing system could be configured to always select a most fair option from a set of candidate models.”
Cast teaches selecting and not selecting models based on the fairness score determined by the pre-defined threshold [determining, based on the comparison, whether to accept or reject the trained model from inclusion in a group of trained models].
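For illustration of this teaching, a hedged sketch follows (the indicator values and threshold are invented; the L-infinity bound follows Cast’s [0025] as quoted above):

# Invented bias indicator values for one model and an invented pre-defined threshold.
bias_values = [0.02, 0.11, 0.05]
BIAS_THRESHOLD = 0.10

# Bounding the L-infinity norm ensures all bias indicators are below the threshold.
worst_case = max(abs(v) for v in bias_values)
accepted = worst_case <= BIAS_THRESHOLD
print("accept" if accepted else "reject")  # "reject", since 0.11 > 0.10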
Bala, Cast, and the instant application are analogous art because they are all directed to machine learning fairness.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bala with Cast to use a fairness threshold in order to automatically and efficiently determine which machine learning models are suitable for use with respect to fairness. “The fairness indicator value may be used to pass or fail the machine learning model, or may be used in an automated tool for machine learning model generation. Thus, the system may facilitate the HR department to efficiently (with low computational overhead) screen machine learning models and deploy only ones that meet a predetermined threshold of fairness” (Cast, [0150]).
Regarding Claim 6:
As discussed above, Bala teaches [the] computer-implemented method of claim 5, but does not explicitly disclose:
generating, by the computing system, a bias score based on the first bias result and the second bias result
determining, based on the generated bias score, whether to accept or reject the trained model from inclusion in a group of trained models.
However, in the same field, analogous art Cast teaches:
generating, by the computing system, a bias score based on the first bias result and the second bias result
Cast, [0025], “Additionally, aggregate metrics may be generated, e.g. L-p norms of the bias indicator values, to assess an overall fairness of the (machine learning) model. For example, bounding the L-infinity norm may ensure all bias indicators are below a given predetermined threshold.”
Cast teaches generating aggregate metrics of bias indicators [generating…a bias score based on the first bias result and the second bias result].
determining, based on the generated bias score, whether to accept or reject the trained model from inclusion in a group of trained models
Cast, [0032], “The downstream computing system, for example, could be a model selector subsystem that is configured to control routing or selection of various models for use, and it may be configured to not route or not select models having a fairness score greater than a pre-defined threshold if there is another option. In a variant embodiment, the downstream computing system could be configured to always select a most fair option from a set of candidate models.”
Cast teaches selecting and not selecting models based on the fairness score determined by the pre-defined threshold [determining, based on the generated bias score, whether to accept or reject the trained model from inclusion in a group of trained models].
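A hedged sketch of this aggregate-score selection follows (the candidate models, indicator values, and choice of p are invented; the L-p aggregation and most-fair selection follow the passages quoted above):

# Invented bias indicators (e.g., first and second bias results) per candidate model.
candidates = {
    "model_a": [0.02, 0.11, 0.05],
    "model_b": [0.04, 0.03, 0.06],
}

def bias_score(values, p=2):
    # Aggregate metric: L-p norm of the bias indicator values.
    return sum(abs(v) ** p for v in values) ** (1.0 / p)

scores = {name: bias_score(vals) for name, vals in candidates.items()}
selected = min(scores, key=scores.get)  # "always select a most fair option"
print(scores, "->", selected)           # model_b has the lower aggregate score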
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bala with Cast to use a fairness threshold in order to automatically and efficiently determine which machine learning models are suitable for use with respect to fairness. “The fairness indicator value may be used to pass or fail the machine learning model, or may be used in an automated tool for machine learning model generation. Thus, the system may facilitate the HR department to efficiently (with low computational overhead) screen machine learning models and deploy only ones that meet a predetermined threshold of fairness” (Cast, [0150]).
Regarding Claim 14:
Claim 14 is a system claim corresponding to method claim 6 and is rejected for at least the same reasons as given in the rejection of claim 6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PHUNG whose telephone number is (703) 756-1499. The examiner can normally be reached Monday-Thursday: 9:00AM-4:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KAMRAN AFSHAR can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN PHUNG/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125