DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This Office action is in response to the claims filed on January 7, 2025, for the application filed on January 7, 2025, which claims priority to a provisional application filed on January 9, 2024. Claims 1-20 are currently pending and have been examined.
Claim Objections
Claim 11 is objected to because of the following informalities: Claim 11 recites “the second feature features” in lines 7-8, which should recite “the second features”. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7, 10-17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lin et al. (Pursuing Counterfactual Fairness via Sequential Autoencoder Across Domains).
Regarding claim 1, Lin discloses a computer-implemented method for fairness-aware domain generalization (Abstract), comprising:
identifying a sensitive attribute, first features related to the sensitive attribute, and second features irrelevant to the sensitive attribute (Page 3, Causal Structure of CDSAE, define Xs ⊂ X as a subset of features caused by a, whereas Xns ⊂ X is the other subset of irrelevant features to the intervention. For instance, considering the ‘Sex’ attribute in the Adult dataset as the sensitive attribute, we can broadly describe the characteristics of this attribute as Xs = {Occupation, Workclass,...}, while the remaining features can be denoted as Xns. Similarly, let’s define the exogenous variables of Xns and Xs to be Uns and Us, respectively.);
decoupling domain-specific information for the first features and the second features (Page 3, Causal Structure of CDSAE, To simulate dynamic environments, we adopt two variables, Uv1 and Uv2, to capture the dynamic changes in the distributions of Xs and Xns respectively, as they vary with the environments (Qin, Wang, and Li 2022). For the domain Dt at timestamp t, we represent Uv1 and Uv2 as Utv1 and Utv2, respectively. Page 4, Evidence Lower Bound of CDSAE, we employ Us and Uns to capture the invariant semantic information within the distribution, while Utv1 and Utv2 are utilized to encapsulate the domain-relevant information.); and
training a classifier with the first features and the second features to ensure cross-domain accuracy while maintaining fairness on the sensitive attribute (Page 5, Ultimate Objective Function, after completion of training within the CDSAE framework (Algorithm. 1), we require the trained static feature extractor Es and Ens to obtain semantic information (us and uns). Finally, the classifier C is utilized for prediction by inputting both us and uns alongside sensitive attribute a. Pages 1-2, Introduction, This concept (counterfactual fairness) seeks to minimize the impact on predicted values when counterfactual interventions are applied to sensitive attributes. In the context of dynamically evolving environments, we propose a framework, denoted as Counterfactual Fairness-Aware Domain Generalization with Sequential Autoencoder (CDSAE), designed to address the issue of counterfactual fairness. Our objective can be succinctly summarized as aiming to enhance the model’s generalization capacity across unfamiliar domain sequences while concurrently ensuring counterfactual fairness in decision-making.).
Regarding claim 2, Lin further discloses wherein training the classifier includes a variational autoencoder that has a first encoder for the first features, a second encoder for the second features, a third encoder for sensitive exogenous features, and a fourth encoder for non-sensitive exogenous features (Page 4, Network Architecture, During the inference stage, we employ four distinct encoders to model q(us|xts, at), q(uns|xtns), q(uv1|xts) and q(uv2|xtns), respectively. Introduction, to model the relationships among sensitive attributes, environmental information, and semantic information, we partition the exogenous variables into four latent variables: 1) semantic information caused by sensitive attributes, 2) semantic information not caused by sensitive attributes, 3) environmental information caused by sensitive attributes, and 4) environmental information not caused by sensitive attributes. Among these, we posit that the distribution of semantic information remains invariant across all domains, whereas the distribution of environmental information varies with changes in the environment. Here, the data feature X is composed of two components, wherein sensitive attribute A directly causes a subset of features (Xs), while another subset of features (Xns) is not directly influenced by A but may still exhibit correlations with it. They are encoded in the latent space as the aforementioned first two exogenous variables. Also see Figure 2.).
Regarding claim 3, Lin further discloses wherein training includes minimizing an objective function that includes an evidence lower bound based on the first features, an evidence lower bound based on the second features, and a classification loss (Pages 4-5, Evidence Lower Bound of CDSAE, Counterfactual Fairness Loss of CDSAE, Disentanglement Loss of CDSAE and Ultimate Objective Function.).
Regarding claim 4, Lin further discloses wherein the objective function further includes a counterfactual fairness loss (Pages 4-5, Evidence Lower Bound of CDSAE, Counterfactual Fairness Loss of CDSAE, Disentanglement Loss of CDSAE and Ultimate Objective Function.).
Regarding claim 5, Lin further discloses wherein the counterfactual fairness loss is expressed as a sum over domains of expectation values for predicted values conditioned on the sensitive attribute and a negation of the sensitive attribute (Page 4, Counterfactual Fairness Loss of CDSAE.).
Regarding claim 6, Lin further discloses wherein the objective function further includes a disentanglement loss (Pages 4-5, Disentanglement Loss of CDSAE).
Regarding claim 7, Lin further discloses wherein the disentanglement loss is approximated as a sum over domains of expectation values based on a discriminator that outputs a probability that a set of samples originates from a distribution defined by the first features, the second features, and the sensitive attribute (Pages 4-5, Disentanglement Loss of CDSAE).
Regarding claim 10, Lin further discloses wherein the classifier is a machine learning model implemented as a neural network (Page 4, Network Architecture, neural network architecture is shown in Fig. 2.).
Regarding claims 11-17 and 20: all limitations as recited have been analyzed and rejected with respect to claims 1-7 and 10. Claims 11-17 and 20 pertain to a system corresponding to the computer-implemented method of claims 1-7 and 10. Claims 11-17 and 20 do not recite any new limitations beyond claims 1-7 and 10 apart from the processor and memory used to execute the computer-implemented method of claims 1-7 and 10, which is inherently disclosed by Lin, as the methods of Lin are computer-implemented and thus require a processor and memory to execute. Therefore, claims 11-17 and 20 are rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (Pursuing Counterfactual Fairness via Sequential Autoencoder Across Domains) in view of Cossler et al. (U.S. Pub. No. 2022/0383998).
Regarding claim 8, Lin does not appear to explicitly disclose, but Cossler teaches that it was old and well known in the art of artificial intelligence at the time of the filing to include using the classifier to diagnose a medical condition of a patient to assist in medical decision making (Cossler, Paragraph [0030], Using one or more machine learning algorithms, the AI system can reason on the information provided by one or more of these data sources to determine the parameters, parameter values, and relationships between different parameters and parameter values that correlate to positive and negative clinical and financial outcomes associated with one or more aspects of healthcare delivery. In some embodiments, the AI system can determine or infer events and conditions that are clinically significant and warrant medical attention and/or acknowledgment (referred to herein as a significant event or condition). In some implementations in association with identifying significant events or conditions, the AI system can further determine patterns in the data that correspond to a defined diagnosis of a condition or complication affecting a patient. With these implementations, the AI system can further identify a significant event or condition by its medical diagnosis or known name that is used to refer to the type of event or condition in the medical field (e.g., generally by all medical practitioners or internally by a particular medical organization). Paragraph [0102], Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter. Also see paragraph [0029].) to optimize the performance of clinicians in real-time (Cossler, paragraph [0029]).
Therefore, it would have been obvious to one of ordinary skill in the art of artificial intelligence at the time of the filing to modify the method of Lin to include using the classifier to diagnose a medical condition of a patient to assist in medical decision making, as taught by Cossler, in order to optimize the performance of clinicians in real-time.
Regarding claim 9, Lin does not appear to explicitly disclose, but Cossler teaches that it was old and well known in the art of artificial intelligence at the time of the filing to include automatically administering a treatment to the patient based on an output of the classifier (Cossler, Paragraph [0092], The AI response component 142 can be configured to determine or infer and provide one or more responses to significant events or conditions based on identification of the significant events or conditions by the significant event/condition identification component 138. Paragraph [0095], Further, in some implementations, a response can include a recommended action for performance by a machine, such as an IMD, a medical instrument, a medical device and the like, that can be configured to perform automated actions in response to control commands (e.g., dispensing medication, applying a medical treatment, moving a blade or needle relative to a body of a patient, etc.). With these implementations, the AI response component 142 can be configured to provide the corresponding control commands to such machines for execution thereby.) to optimize the performance of clinicians in real-time (Cossler, paragraph [0029]).
Therefore, it would have been obvious to one of ordinary skill in the art of artificial intelligence at the time of the filing to modify the method of Lin to include automatically administering a treatment to the patient based on an output of the classifier, as taught by Cossler, in order to optimize the performance of clinicians in real-time.
Regarding claims 18-19: all limitations as recited have been analyzed and rejected with respect to claims 8-9. Claims 18-19 pertain to a system corresponding to the computer-implemented method of claims 8-9. Claims 18-19 do not recite any new limitations beyond claims 8-9; therefore, claims 18-19 are rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devin C. Hein whose telephone number is (303)297-4305. The examiner can normally be reached 9:00 AM - 5:00 PM M-F MDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason B. Dunham can be reached at (571) 272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEVIN C HEIN/Examiner, Art Unit 3686