DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the application filed on 4/26/2022. Claims 1-5 and 8-17 are pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 4/26/2022 is in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. It has been placed in the application file, and the information referred to therein has been considered as to the merits.
Claim Objections
Claims 2, 8, and 10 are objected to because of the following informalities: Claims 2, 8, and 10 recite “wherein when an error function…,” which should read “wherein an error function…” because the “when” clause is never completed. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5 and 8-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Step 1:
The claim recites a learning apparatus, which is one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph” is a mathematical concept.
Step 2A prong 2:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not integrate the abstract idea into a practical application. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not integrate the abstract idea into a practical application because inputting data is considered the insignificant extra-solution activity of “mere data gathering.” MPEP 2106.05(g).
Step 2B:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not amount to significantly more. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not amount to significantly more because the additional element is an insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Therefore, the claim is rejected.
Regarding Claim 2:
Claim 2, which incorporates the rejection of Claim 1, recites the further abstract ideas “an error function for quantifying an accuracy of the classifier is represented by a Lipschitz continuous convex function,” “the classifier is represented by a non-convex and smooth function,” and “learning the classifier by optimizing an objective function, the objective function being a weakly convex function that approximates the constrained optimization problem,” which are mathematical concepts. The claim does not recite any additional elements that integrate the abstract ideas into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 3:
Claim 3, which incorporates the rejection of Claim 2, recites a further abstract idea, “a penalty function, an estimate of an upper confidence bound represented by a sum of a mean of absolute values of the causal effects and a standard deviation of the absolute values of the causal effects, with respect to a mean of the error function associated with an empirical distribution of the training data,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 4:
Step 1:
The claim recites a classification apparatus, which is one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph” is a mathematical concept.
Step 2A prong 2:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not integrate the abstract idea into a practical application. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not integrate the abstract idea into a practical application because inputting data is considered the insignificant extra-solution activity of “mere data gathering.” MPEP 2106.05(g).
The additional element of determining a class associated with target data with the learned classifier amounts to mere instructions to apply the abstract idea and therefore does not integrate the abstract idea into a practical application. MPEP 2106.05(f).
Step 2B:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not amount to significantly more. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not amount to significantly more because the additional element is an insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of determining a class associated with target data with the learned classifier amounts to mere instructions to apply the abstract idea and therefore does not amount to significantly more. MPEP 2106.05(f).
Therefore, the claim is rejected.
Regarding Claim 5:
Step 1:
The claim recites a method, which is one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph” is a mathematical concept.
Step 2A prong 2:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not integrate the abstract idea into a practical application. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not integrate the abstract idea into a practical application because inputting data is considered the insignificant extra-solution activity of “mere data gathering.” MPEP 2106.05(g).
Step 2B:
The additional element of using a processor is a generic computer component amounting to mere instructions to apply the abstract idea and therefore does not amount to significantly more. MPEP 2106.05(f).
The additional element of inputting training data for learning a classifier and a causal graph representing causal relationships between variables included in the training data does not amount to significantly more because the additional element is an insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Therefore, the claim is rejected.
Regarding Claim 8:
Claim 8, which incorporates the rejection of Claim 4, recites the further abstract ideas “an error function for quantifying an accuracy of the classifier is represented by a Lipschitz continuous convex function,” “the classifier is represented by a non-convex and smooth function,” and “learning the classifier by optimizing an objective function, the objective function being a weakly convex function that approximates the constrained optimization problem,” which are mathematical concepts. The claim does not recite any additional elements that integrate the abstract ideas into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 9:
Claim 9, which incorporates the rejection of Claim 8, recites a further abstract idea, “a penalty function, an estimate of an upper confidence bound represented by a sum of a mean of absolute values of the causal effects and a standard deviation of the absolute values of the causal effects, with respect to a mean of the error function associated with an empirical distribution of the training data,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 10:
Claim 10, which incorporates the rejection of Claim 5, recites the further abstract ideas “an error function for quantifying an accuracy of the classifier is represented by a Lipschitz continuous convex function,” “the classifier is represented by a non-convex and smooth function,” and “learning the classifier by optimizing an objective function, the objective function being a weakly convex function that approximates the constrained optimization problem,” which are mathematical concepts. The claim does not recite any additional elements that integrate the abstract ideas into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 11:
Claim 11, which incorporates the rejection of Claim 10, recites a further abstract idea, “a penalty function, an estimate of an upper confidence bound represented by a sum of a mean of absolute values of the causal effects and a standard deviation of the absolute values of the causal effects, with respect to a mean of the error function associated with an empirical distribution of the training data,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 12:
Claim 12 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to recruit an individual,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 1. The claim is ineligible.
Regarding Claim 13:
Claim 13 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to release an individual, wherein the individual corresponds to a prisoner,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 1. The claim is ineligible.
Regarding Claim 14:
Claim 14 incorporates the rejection of Claim 4. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to recruit an individual,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 4. The claim is ineligible.
Regarding Claim 15:
Claim 15 incorporates the rejection of Claim 4. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to release an individual, wherein the individual corresponds to a prisoner,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 4. The claim is ineligible.
Regarding Claim 16:
Claim 16 incorporates the rejection of Claim 5. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to recruit an individual,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 5. The claim is ineligible.
Regarding Claim 17:
Claim 17 incorporates the rejection of Claim 5. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “the classifier determines a class indicating whether to release an individual, wherein the individual corresponds to a prisoner,” which amounts to mere instructions to apply the abstract idea. MPEP 2106.05(f). The claim further recites a description of the variables from the inputting and learning steps and is ineligible for the same reasons as set forth for Claim 5. The claim is ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4, 5, and 12-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nabi et al., “Fair Inference on Outcomes” (cited in applicant’s IDS), hereinafter “Nabi”.
Regarding Claim 1, Nabi teaches:
A learning apparatus comprising a processor configured to execute a method comprising:
inputting training data for learning a classifier (p. 1931, col. 2, ¶4, “probabilistic classification… problems with a set of features X and an outcome Y”, p. 1937, col. 1, ¶1, “Y model was fit using constrained BART”) and a causal graph representing causal relationships between variables included in the training data (p. 1937, col. 2, ¶2, “fair inference via two datasets: the COMPAS dataset (Angwin et al. 2016) and the Adult dataset (Lichman 2013)”, p. 1937, col. 2, Figure 2 description, “Figure 2: Causal graphs for (a) the COMPAS dataset, and (b) the Adult dataset.”); and
learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph (Equation 9 shows solving a constrained optimization problem; Equation 9 shows the mean of the causal effects over a population of n individuals by summing over the n individuals and multiplying by 1/n; Equation 9 also shows variation in the causal effects by combining the differences in causal effect from each individual in the population; p. 1938, col. 2, ¶4, “We solve the constrained problem by restricting the PSE, as estimated by (9), to lie between 0.95 and 1.05”; the predetermined value is 1.05).
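For clarity of the record, the general shape of the constrained optimization problem recited in this limitation may be sketched as follows; the notation is illustrative only and is not reproduced from Nabi’s Equation 9:

```latex
\min_{\theta}\ \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(x_i),\, y_i\big)
\quad\text{subject to}\quad
a \le \frac{1}{n}\sum_{i=1}^{n} c_i(\theta) \le b,
\qquad
\frac{1}{n}\sum_{i=1}^{n}\Big(c_i(\theta) - \bar{c}(\theta)\Big)^{2} \le \gamma
```

where $\ell$ is the classification error, $c_i(\theta)$ is the causal effect for the $i$-th individual, $\bar{c}(\theta)$ is the mean of the $c_i(\theta)$, $[a, b]$ is the predetermined range for the mean, and $\gamma$ is the predetermined bound on the variance.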
Regarding Claim 4, Nabi teaches:
A classification apparatus comprising a processor configured to execute a method comprising:
inputting training data for learning a classifier (p. 1931, col. 2, ¶4, “probabilistic classification… problems with a set of features X and an outcome Y”, p. 1937, col. 1, ¶1, “Y model was fit using constrained BART”) and a causal graph representing causal relationships between variables included in the training data (p. 1937, col. 2, ¶2, “fair inference via two datasets: the COMPAS dataset (Angwin et al. 2016) and the Adult dataset (Lichman 2013)”, p. 1937, col. 2, Figure 2 description, “Figure 2: Causal graphs for (a) the COMPAS dataset, and (b) the Adult dataset.”);
learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph (Equation 9 shows solving a constrained optimization problem; Equation 9 shows the mean of the causal effects over a population of n individuals by summing over the n individuals and multiplying by 1/n; Equation 9 also shows variation in the causal effects by combining the differences in causal effect from each individual in the population; p. 1938, col. 2, ¶4, “We solve the constrained problem by restricting the PSE, as estimated by (9), to lie between 0.95 and 1.05”; the predetermined value is 1.05); and
determining a class associated with target data with the learned classifier (p. 1938, col. 1, ¶3, “The “adult” dataset from the UCI repository has records on 14 attributes… The objective is to learn a statistical model that predicts the class of income for a given individual”).
Regarding Claim 5, Nabi teaches:
A computer-implemented method for learning a class, the method comprising:
inputting training data for learning a classifier (p. 1931, col. 2, ¶4, “probabilistic classification… problems with a set of features X and an outcome Y”, p. 1937, col. 1, ¶1, “Y model was fit using constrained BART”) and a causal graph representing causal relationships between variables included in the training data (p. 1937, col. 2, ¶2, “fair inference via two datasets: the COMPAS dataset (Angwin et al. 2016) and the Adult dataset (Lichman 2013)”, p. 1937, col. 2, Figure 2 description, “Figure 2: Causal graphs for (a) the COMPAS dataset, and (b) the Adult dataset.”); and
learning the classifier by solving a constrained optimization problem in which a mean of causal effects between predetermined variables is within a predetermined range and a variance of the causal effects is equal to or smaller than a predetermined value, using the training data and the causal graph (Equation 9 shows solving a constrained optimization problem; Equation 9 shows the mean of the causal effects over a population of n individuals by summing over the n individuals and multiplying by 1/n; Equation 9 also shows variation in the causal effects by combining the differences in causal effect from each individual in the population; p. 1938, col. 2, ¶4, “We solve the constrained problem by restricting the PSE, as estimated by (9), to lie between 0.95 and 1.05”; the predetermined value is 1.05).
Regarding Claim 12, Nabi teaches the learning apparatus of Claim 1 as referenced above. Nabi further teaches:
wherein the classifier determines a class indicating whether to recruit an individual (p. 1931, col. 2, ¶4, “probabilistic classification… with a set of features X and an outcome Y… Y is a hiring decision”), and wherein the variables include a sensitive feature of the individual (p. 1931, col. 2, ¶4, “feature S ∈ X is sensitive… S is gender”).
Regarding Claim 13, Nabi teaches the learning apparatus of Claim 1 as referenced above. Nabi further teaches:
wherein the classifier determines a class indicating whether to release an individual, wherein the individual corresponds to a prisoner (p. 1931, col. 2, ¶4, “probabilistic classification… with a set of features X and an outcome Y… recidivism prediction in parole hearings… Y is a parole decision”), and wherein the variables include a sensitive feature of the individual (p. 1931, col. 2, ¶4, “feature S ∈ X is sensitive… S is race”).
Regarding Claim 14, the rejection of Claim 4 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 12.
Regarding Claim 15, the rejection of Claim 4 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 13.
Regarding Claim 16, the rejection of Claim 5 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 12.
Regarding Claim 17, the rejection of Claim 5 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 13.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Nabi in view of Chinot et al., “Robust Statistical Learning with Lipschitz and Convex Loss Functions”, hereinafter “Chinot”.
Regarding Claim 2, Nabi teaches the learning apparatus of Claim 1 as referenced above. Nabi further teaches:
the classifier is represented by a non-convex and smooth function (Bayesian Additive Regression Trees (BART) is non-convex because it is tree-based, and is smooth as a result of the Bayesian averaging; Nabi, p. 1937, col. 1, ¶1, “using constrained BART”), and the processor is further configured to execute a method comprising:
learning the classifier by optimizing an objective function (Nabi, p. 1936, col. 1, ¶1, “we would maximize L(D; α)”), the objective function being a weakly convex function (the objective function is based on the classifier, Nabi, p. 1935, col. 1, ¶5, “parameterized by α, an estimator g(D)”; the BART-based objective is also weakly convex because its non-convexity is bounded, so that it becomes convex after the addition of a sufficiently large quadratic term) that approximates the constrained optimization problem (Nabi, p. 1936, col. 1, ¶2, “the optimization problem in (4) involves complex non-linear constraints on the parameter space”).
Nabi does not expressly teach:
wherein when an error function for quantifying an accuracy of the classifier is represented by a Lipschitz continuous convex function
However, Chinot teaches:
wherein when an error function for quantifying an accuracy of the classifier is represented by a Lipschitz continuous convex function (Chinot, p. 1, ¶1, “loss function is simultaneously Lipschitz and convex”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Lipschitz continuous convex loss function of Chinot with the constrained optimization of Nabi. The motivation to do so would be to make the method more robust (Chinot, p. 1, ¶1, “Lipschitz property allows to make only weak assumptions on the outputs, these losses have been quite popular in robust statistics… (ERM) based on Lipschitz losses such as the Huber loss have received recently an important attention”).
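For the record, the standard definition of weak convexity relied on in the discussion of Claim 2 above is: a function $f$ is $\rho$-weakly convex if

```latex
x \mapsto f(x) + \frac{\rho}{2}\,\lVert x \rVert^{2}
\quad \text{is convex for some } \rho \ge 0
```

equivalently, $f(y) \ge f(x) + \langle g,\, y - x\rangle - \frac{\rho}{2}\lVert y - x\rVert^{2}$ for every subgradient $g$ of $f$ at $x$; a smooth function with bounded curvature satisfies this definition.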
Regarding Claim 8, the rejection of Claim 4 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 2.
Regarding Claim 10, the rejection of Claim 5 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 2.
Claims 3, 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Nabi in view of Chinot, further in view of Maurer et al., “Empirical Bernstein Bounds and Sample Variance Penalization”, hereinafter “Maurer”.
Regarding Claim 3, Nabi in view of Chinot teaches the learning apparatus of Claim 2 as referenced above. Nabi further teaches:
wherein the objective function includes, as a penalty function, an estimate of an upper confidence bound represented by a sum of a mean of absolute values of the causal effects (Nabi, p. 1936, col. 1, ¶1; Equation 9 sums over the n individuals and multiplies by 1/n, which yields a mean of absolute values).
In the combination as set forth above, Nabi in view of Chinot teaches:
with respect to a mean of the error function associated with an empirical distribution of the training data (Chinot, p. 2, ¶5, “empirical risk R_N(f) = (1/N) Σ_{i=1}^{N} ℓ_f(X_i, Y_i)”; this equation shows the mean of the error function associated with the empirical distribution of the training data, to which Nabi’s Equation 9 is applied in the combination).
Nabi in view of Chinot does not expressly teach:
…a standard deviation of the absolute values of the causal effects
However, Maurer teaches:
…a standard deviation of the absolute values of the causal effects (Maurer, p. 6, col. 2; Equation 6 shows the square root of a variance term, i.e., a standard deviation, added to an objective function).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the standard deviation term of Maurer to the objective function of Nabi. The motivation to do so would be to minimize error bounds (Maurer, p. 1, Abstract, “sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function”; p. 6, col. 2, ¶3, “a method which minimizes the bounds”).
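The sample variance penalization of Maurer referenced above takes the following general form (the constant c is illustrative; Maurer’s Equation 6 uses specific confidence-dependent coefficients):

```latex
\hat{f} = \arg\min_{f}\ \frac{1}{n}\sum_{i=1}^{n} \ell(f, z_i) \;+\; c\,\sqrt{\frac{V_n(f)}{n}},
\qquad
V_n(f) = \frac{1}{n-1}\sum_{i=1}^{n}\Big(\ell(f, z_i) - \frac{1}{n}\sum_{j=1}^{n}\ell(f, z_j)\Big)^{2}
```

i.e., the empirical mean of the loss penalized by a multiple of its sample standard deviation, which is the sense in which a standard deviation term is added to the objective function in the combination.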
Regarding Claim 9, the rejection of Claim 8 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 3.
Regarding Claim 11, the rejection of Claim 10 is incorporated and, further, the claim is rejected for the same reasons as set forth for Claim 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON whose telephone number is (571)272-4716. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSE C COULSON/
Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122