DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 16 December 2025 has been entered.
Status of Claims
This action is in reply to the claims filed on 16 December 2025. Claim 2 was previously canceled. Claim 7 was newly added. Claims 1 and 3-7 are currently pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 3-7 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Is the claim to a process, machine, manufacture, or composition of matter?
Claims 1 and 3-7 fall within one or more statutory categories; specifically, they fall within the category of a machine.
Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Claims 1 and 3-7 recite an abstract idea. Representative claim 1 recites:
obtain health states of a patient…;
generate a causal graph that indicates relationships between the health states based on the health states … to establish a causal relationship between health states and treatments responsive to the treatment being better predicted with information about a health state than without;
predict an interpretable treatment for the patient to be taken with respect to the health states …; and
output the predicted interpretable treatment to ….
Therefore, the claim as a whole is directed to “treating a patient,” which is an abstract idea because it is a method of organizing human activity. “Treating a patient” is considered to be a method of organizing human activity because it is an example of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The broadest reasonable interpretation of the claims includes the interaction between a clinician and a patient. Further, these elements could be considered to be directed to a mental process because making a prediction on treatment based on patient health states is an observation, evaluation, judgment, or opinion capable of being performed in the human mind.
Representative claim 1 also recites:
[generate a causal graph] using Granger causality;
encode the causal graph by updating state variable embeddings.
Under the broadest reasonable interpretation, these elements are directed to “calculating probabilities for a causal graph,” which is an abstract idea because it is a mathematical concept. “Calculating probabilities for a causal graph” is considered to be a mathematical concept because it is an example of mathematical calculation.
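For illustration of the mathematical character of these limitations (and not as part of the record of this application): a Granger-style test deems one variable a cause of another when including its lagged values improves prediction, i.e., the effect is "better predicted with information about the variable than without." The following Python sketch compares regression error with and without a candidate cause; the synthetic data, single lag, and F-statistic form are illustrative assumptions.

```python
import numpy as np

# Synthetic example: y is driven by lagged x, so x "Granger-causes" y.
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def granger_f(effect, cause):
    """F-statistic comparing effect[t] ~ effect[t-1] (restricted model)
    against effect[t] ~ effect[t-1] + cause[t-1] (full model)."""
    Y = effect[1:]
    ones = np.ones(len(Y))
    Zr = np.column_stack([effect[:-1], ones])              # restricted design
    Zf = np.column_stack([effect[:-1], cause[:-1], ones])  # full design
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Zr), rss(Zf)
    dof = len(Y) - Zf.shape[1]
    return (rss_r - rss_f) / (rss_f / dof)  # large value => cause improves prediction

print(granger_f(y, x) > granger_f(x, y))  # x drives y, not the reverse
```

A large F-statistic in one direction and a small one in the other is the "better predicted with than without" comparison recited in the claim language.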
The limitations that recite a method of organizing human activity (and mental process) and the limitations that recite mathematical concepts are considered together as a single abstract idea for further analysis.
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the following additional element(s):
at least one memory storing instructions; and
at least one processor configured to access the at least one memory and execute the instructions to: (perform the method described above);
a server storing observations of patient health data;
using the updated state variable embeddings as input to a model, wherein the model is adversarially trained with a discrimination model to discriminate between predicted actions and demonstrations from an expert by machine-learning algorithm;
[output results to] a display used by a healthcare provider, a doctor, or a nurse
The additional elements individually or in combination do not integrate the exception into a practical application. These additional elements amount to merely reciting the words ‘‘apply it’’ (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 1 is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claim 1 does not include additional elements, considered individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s), individually and in combination, amount to merely reciting the words ‘‘apply it’’ (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, claim 1 is ineligible.
Dependent claim 3 recites the method of claim 1, wherein:
the causal graph is a Directed Acyclic Graph indicating relationships between the health states.
This merely further limits the abstract idea of claim 1 discussed above and does not provide further additional elements. Therefore, claim 3 is considered to be ineligible.
Dependent claim 4 recites the method of claim 1, wherein:
the causal graph is generated by optimizing the causal graph based on constraints.
This merely further limits the abstract idea of claim 1 discussed above and does not provide further additional elements. Therefore, claim 4 is considered to be ineligible.
Dependent claim 5 recites the method of claim 1, wherein:
the health states include at least one of blood pressure and heart rate.
This merely further limits the abstract idea of claim 1 discussed above and does not provide further additional elements. Therefore, claim 5 is considered to be ineligible.
Dependent claim 6 recites the method of claim 1, wherein:
generation of the causal graph includes a weighted sum of causal graph templates according to their proximity to a temporal encoding of the health states.
The additional elements present in this claim merely recite the words ‘‘apply it’’ (or an equivalent) with the judicial exception, or merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These types of additional elements are not enough to integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception. Accordingly, claim 6 is ineligible.
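For illustration of the recited weighted sum of causal graph templates according to their proximity to a temporal encoding (not part of the record of this application), the following Python sketch forms a convex combination of template adjacency matrices. The softmax-of-negative-distance weighting, the dimensions, and the variable names are illustrative assumptions, not taken from the claims.

```python
import numpy as np

# Hypothetical sketch: each of M templates is an adjacency matrix over d
# health states; the causal graph at a time step is a weighted sum of the
# templates, weighted by the proximity of the step's temporal encoding to
# each template's key vector.
rng = np.random.default_rng(1)
M, d, k = 3, 4, 8                              # templates, health states, encoding dim
templates = rng.random((M, d, d))              # candidate causal-graph templates
keys = rng.normal(size=(M, k))                 # one key vector per template
h_t = rng.normal(size=k)                       # temporal encoding of current time step

dist = np.sum((keys - h_t) ** 2, axis=1)       # squared distance to each template key
alpha = np.exp(-dist) / np.exp(-dist).sum()    # selection weights; sum to 1
graph_t = np.tensordot(alpha, templates, axes=1)  # weighted sum of templates, shape (d, d)

print(alpha.sum(), graph_t.shape)
```

Because the weights sum to one, the resulting graph is a convex combination of the templates, with templates "closer" to the temporal encoding contributing more.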
Dependent claim 7 recites the method of claim 1, wherein:
generation of the causal graph includes optimization of an objective function with a component to guide template selection written as:
[Equation: media_image1.png]
where θ is a set of parameters of graph templates, M is a number of graph templates, q_t^i indicates whether time step t belongs to a template i, and α_t^i is a selection weight of time step t on template i.
This equation is considered to be part of the abstract idea because it is a mathematical formula or equation. This merely further limits the abstract idea of claim 1 discussed above and does not provide further additional elements. Therefore, claim 7 is considered to be ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Shankar et al. (U.S. 2018/0342323), hereinafter “Shankar,” in view of Dalli et al. (U.S. 2022/0114417), hereinafter “Dalli.”
Regarding claim 1, Shankar discloses a treatment prediction system comprising:
at least one memory storing instructions (See Shankar [0032] the medical knowledge database can be stored on a local server, a remote server, over a distributed storage service such as mesh storage, on a cloud server. See also Fig. 12 and [0083]-[0084].); and
at least one processor configured to access the at least one memory (See Shankar [0053] the system can be implemented using one or more processors. See also Fig. 12 and [0083]-[0084].) and execute the instructions to:
obtain health states of a patient via a server storing observations of patient health data (See Shankar [0035] the system can collect data from a further patient (this patient is distinct from the patients whose data was used to train the system, as will be discussed below).);
generate a causal graph that indicates relationships between the health states based on the health states … to establish a causal relationship between health states and treatments responsive to the treatment being better predicted with information about a health state than without (See Shankar [0032] the system can collect patient data from a plurality of clinicians, including vital statistics and treatment histories, to form a medical knowledge database. This includes best practices for diagnosing clinical states and best practices for treating clinical states. It can also include a medical probabilistic rules graph. [0043] the medical rules graph includes a directed acyclic graph.);
encode the causal graph by updating state variable embeddings (See Shankar [0032] The medical probabilistic rules graph can include variables and factors. The variables can include evidence variables with known values, and query variables with unknown values. The factors can define relationships between and among variables. The relations can include probabilities.);
predict an interpretable treatment for the patient to be taken with respect to the health states using the updated state variable embeddings as input to a model (See Shankar [0037] system can generate a treatment plan for the further patient based on the medical probabilistic rules graph that was augmented.),
wherein the model [is trained] to discriminate between predicted actions and demonstrations from an expert by machine-learning algorithm (See Shankar [0033] the system can collect further medical data based on individual patient treatment outcomes. [0034] augmenting the medical knowledge database can be accomplished with a deep learning, trained using collected medical data. See also [0029]. [0045] The graph algorithms can order the medical knowledge data rules into a directed acyclic graph (DAG). The DAG can be ordered using graph inference and machine learning scoring. [0045] the models can be updated by evaluating treatment results and feeding those results back into machine learning/deep learning to update risk models and DAG nodes and edges.); and
output the predicted interpretable treatment to a display used by a healthcare provider, a doctor, or a nurse (See Shankar [0048] the system can deliver the treatment to an API and can be delivered to a practitioner. [0035] system can be performed using a GUI operated by a clinician.).
Shankar does not disclose:
[generate the causal graph] using Granger causality;
[the model] is adversarially trained with a discrimination model.
Dalli teaches:
[generate the causal graph] using Granger causality (See Dalli [0074] the system can use causal directed acyclic graph (DAG) diagrams. [0075] the system can use Granger causal models. This is understood to indicate the application of Granger causality to causal graphs.);
[the model] is adversarially trained with a discrimination model (See Dalli [0050] the system can use an eXplainable Generative Adversarial Network (XGAN). This is an example of a model that is adversarially trained with a discriminator model.).
The system of Dalli is applicable to the disclosure of Shankar as they both share characteristics and capabilities, namely, they are directed to predicting patient treatment (see Dalli [0098] for at least one example of the application of the teachings to healthcare and patient treatment). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shankar to include Granger causality and adversarial training of machine learning models as taught by Dalli. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Shankar in order to address the lack of interpretable tools for understanding how a model makes a prediction, understanding the root cause of a particular prediction, identifying interpretable decision boundaries, and identifying the general structure of a model, which Dalli describes as barriers to XAI models (see Dalli [0010]).
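For illustration of adversarial training with a discrimination model as taught in the cited art (not part of the record of this application): a discriminator is trained to distinguish expert demonstrations from a model's predicted actions, while the prediction model is in turn trained to fool it. The following Python sketch shows only the discriminator's side, using a logistic classifier; the synthetic data, logistic form, and update rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: a logistic discriminator learns to score expert
# demonstrations (label 1) above model-predicted actions (label 0). In a
# full adversarial loop, the prediction model would then be updated to
# raise the discriminator's score on its own actions.
rng = np.random.default_rng(2)
expert = rng.normal(loc=1.0, size=(64, 2))    # expert demonstrations
policy = rng.normal(loc=-1.0, size=(64, 2))   # model-predicted actions
X = np.vstack([expert, policy])
y = np.concatenate([np.ones(64), np.zeros(64)])

w, b = np.zeros(2), 0.0
for _ in range(200):                           # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.1 * (X.T @ (y - p)) / len(y)
    b += 0.1 * np.mean(y - p)

score = lambda a: 1.0 / (1.0 + np.exp(-(a @ w + b)))
print(score(expert).mean() > score(policy).mean())  # expert scored higher
```

The discriminator's scores then serve as the training signal against which the prediction model is optimized in the adversarial setting.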
Regarding claim 3, Shankar in view of Dalli discloses the system of claim 1 as discussed above. Shankar further discloses a system, wherein:
the causal graph is a Directed Acyclic Graph indicating relationships between the health states (See Shankar [0042] The probabilistic graph can include a Bayesian network. [0045] The graph algorithms can order the medical knowledge data rules into a directed acyclic graph (DAG). The DAG can be ordered using graph inference and machine learning scoring.).
Regarding claim 4, Shankar in view of Dalli discloses the system of claim 1 as discussed above. Shankar further discloses a system, wherein:
the causal graph is generated by optimizing the causal graph based on constraints (See Shankar [0043] the medical rules graph (which can be a directed acyclic graph) can be pruned to reduce search complexity and increase efficiency. This is understood to be an optimization based on constraints.).
Regarding claim 5, Shankar in view of Dalli discloses the system of claim 1 as discussed above. Shankar further discloses a system, wherein:
the health states include at least one of blood pressure and heart rate (See Shankar [0034] Clinical findings can include symptoms reported by a patient, objective signs observed by a clinician, disease prognosis, results of laboratory testing, and so on. [0037] the treatment plan can be based on patient blood pressure. See also the example given in Fig. 4A and [0051].).
Subject Matter Free of Prior Art
Claims 6 and 7 are considered to be free of prior art. The following is a statement of reasons for the indication of allowable subject matter:
The prior art of record is considered to be the closest prior art. Shankar discloses the use of causal graphs and machine learning to make health predictions. Dalli teaches a system that uses Granger causality and adversarially trained discrimination models. However, the prior art does not teach or fairly suggest the specific combination of elements recited in the claims, such as the generation of the causal graph including a weighted sum of causal graph templates in claim 6, or the particular equation used in claim 7 for optimization of an objective function with a component to guide template selection. Accordingly, claims 6 and 7 are considered to be free of prior art.
Response to Arguments
Applicant's arguments filed 16 December 2025, with respect to the 35 U.S.C. §101 rejection of the claims, have been fully considered but they are not persuasive. Applicant argues the claims recite an improvement to technology as described in MPEP 2106.04(d)(1) (see Applicant Remarks pages 4-6). Specifically, Applicant asserts that the use of causal graphs in connection with machine learning improves the machine learning by increasing the interpretability of the results. This is not persuasive. Any additional elements present in the claims are recited at such a high level of generality that they do no more than amount to merely reciting the words ‘‘apply it’’ (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). At best, any improvement to the interpretability of the output of the system is an intended or desired result, not tied directly to the technology of the broadly recited machine learning algorithm. The machine learning technology itself is not improved. Accordingly, these additional elements do not integrate the abstract idea into a practical application by reciting an improvement to technology, and do not meaningfully limit the claims. Therefore, the claims remain rejected as being directed to ineligible subject matter.
Applicant's arguments filed 16 December 2025, with respect to the 35 U.S.C. §103 rejection of the claims, have been fully considered but they are not persuasive. Applicant argues that Shankar does not disclose or teach establishing a causal relationship between health states and treatments responsive to the treatment being better predicted with information about a health state than without (see Applicant Remarks pages 6-7). This is not persuasive. Shankar teaches a medical knowledge database that includes best practices for diagnosing clinical states and best practices for treating clinical states and can also include a medical probabilistic rules graph (see Shankar [0032]), which can include a directed acyclic graph (see Shankar [0043]). Under the broadest reasonable interpretation of the claims, this element is disclosed by Shankar. Accordingly, claims 1 and 3-5 remain rejected as being obvious over the prior art.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Rose et al. (U.S. 2018/0342323) discloses a decision support system that uses directed acyclic graphs to generate treatment recommendations.
Zambotti et al. (U.S. 2020/0013511) teaches a system and method of predictive modeling using Granger causality.
Garrido-Merchan (Garrido-Merchan et al., “Uncertainty Weighted Causal Graphs,” arXiv, cs.AI, 2002.00429) teaches a method of modelling the uncertainty in a causal graph probabilistically, improving the management of the imprecision in the quoted graph.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN L HANKS whose telephone number is (571)270-5080. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.L.H./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684