Prosecution Insights
Last updated: April 19, 2026
Application No. 18/427,739

RECIPIENT SURVIVAL AFTER ORGAN TRANSPLANTATION

Final Rejection: §101, §103, §112
Filed: Jan 30, 2024
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: UNIVERSITY OF SOUTH FLORIDA
OA Round: 2 (Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% (resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 49
Total Applications: 221 (across all art units)

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 172 resolved cases

Office Action

Grounds of rejection: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

The status of the claims as of the response filed 12/3/2025 is as follows: Claims 8-9 are cancelled, and all previously given rejections for these claims are considered moot. Claims 1-2, 6, and 15 are currently amended. Claims 3-5, 7, and 10-14 are original. Claim 16 is new. Claims 1-7 and 10-16 are currently pending in the application and have been considered below.

Response to Amendment

Rejection Under 35 USC 101

The claims have been amended but the 35 USC 101 rejections are upheld.

Rejection Under 35 USC 103

The amendments made to the claims introduce limitations that are not fully addressed in the previous office action, and thus the corresponding 35 USC 103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.

Response to Arguments

Rejection Under 35 USC 101

On pages 8-9 of the response filed 12/3/2025 Applicant argues that the claims are improperly characterized as reciting an abstract idea in the form of certain methods of organizing human activity. Applicant specifically asserts that “simply stating that claimed features ‘could’ be performed by a person is not sufficient to support a finding that the claims ‘recite’ and are ‘directed to’” this grouping of abstract idea. Applicant’s arguments are fully considered, but are not persuasive. Examiner maintains that the claims recite steps that describe certain methods of organizing human activity, because they include steps that a human actor could follow to manage their personal behavior and/or interactions with others for the abstract purpose of matching organ recipients with organ donors and making diagnostic predictions about the outcome of organ transplant procedures.
For example, a clinician interested in matching human organ donors and recipients could obtain datasets about these two types of entities, train or fit predictive models with the obtained datasets to assist with predictive determinations, and make predictions about outcomes for given matches and post-graft survival probabilities using the fitted predictive models. See more detailed explanation of the specific abstract steps recited by each claim in paras. 23-29 below.

On pages 11-12 Applicant argues that the eligibility analysis in the non-final rejection is deficient because “it does not identify with any specificity which portions of the claims actually ‘recite’ steps or features that are ‘managing personal behavior or relationships or interactions between people.’” Applicant’s arguments are fully considered, but are not persuasive. As outlined in paras. 6-10 of the non-final rejection mailed 6/3/2025, Examiner has identified all of the claim language that recites an abstract idea via highlighting with italics. Examiner maintains that each of the steps/functions italicized in paras. 6 and 8 fit into the “certain methods of organizing human activity” grouping of abstract idea, because they describe obtaining data, training/fitting predictive models with training data, applying new data to the trained/fitted models to generate results, and making determinations about patient outcomes in the context of matching organ donors and recipients. Though such functions are recited as being performed by computer components (as made clear in claim 15 and the amendments to claim 1), they nevertheless describe steps that a human actor such as a clinician could take to manage their personal behavior and/or interactions with others (e.g. patients or colleagues) in the abstract pursuit of organ transplant donor/recipient matching and outcome prediction. Having found that the claims recite an abstract idea, the computing components (e.g.
digital memory, communication network, processor, and high-level machine learning) are then evaluated as additional elements under Step 2A – Prong 2 and Step 2B. The mere fact that computing components are recited or that a claim is a “system-type” claim does not preclude the claims from reciting an abstract idea, as Applicant appears to assert on page 12. Examiner also notes that no specific details of the model “training” are present in claim 10 to show that it could not encompass fitting a model via statistical or mathematical operations achievable by a human actor, and as such Examiner maintains that this function still fits within the abstract idea. See Example 47, claims 2 & 3 for exemplary analysis regarding how training and using a high-level machine learning model can be considered to recite an abstract idea under Step 2A – Prong 1.

On pages 12-13 Applicant argues that the abstract idea analysis provides a “rationale [that] appears to be an attempt to justify a rejection on the basis of a ‘mental process’ type abstract idea” and that “simply asserting that steps ‘could’ be done alone by a person is irrelevant to” the ‘certain methods of organizing human activity’ grouping of abstract idea. Applicant’s arguments are fully considered, but are not persuasive. Examiner notes that the claims have not been characterized as a mental process. Further, the ‘certain methods of organizing human activity’ grouping of abstract idea has been found to cover examples of a human actor managing their personal behavior to perform functions like filtering content, considering historical usage information while inputting data, and following a mental process when testing a patient for nervous system malfunctions (see MPEP 2106.04(a)(2)(II)(C)), which Examiner submits are sufficiently analogous to the functions found to be abstract in the instant claims.
On pages 13-14 Applicant argues that the rejection “improperly dismisses ‘machine learning’ models as generic,” further asserting that the amended claims “recite very specific ways in which the models were generated, how they are used in a unique bifurcated way, and what their inputs/outputs are” and include “conditional logic and model architecture [that] are not generic” and instead “reflect a tailored solution to the problem of transplant survival prediction.” Applicant’s arguments are fully considered, but are not persuasive. Though the inputs, outputs, and use of the models are described in the claims, there is still no detail about the specific architecture or training methods of the models, nor any indication that the field of machine learning is being improved in any way. Instead, Applicant appears to apply the known technology of machine learning modeling to the abstract field of organ transplant recipient/donor matching and outcome prediction in an attempt to improve this clinical business practice, rather than in an attempt to improve any underlying technology specific to machine learning or computing. That is, the otherwise-abstract process of fitting predictive models using specific inputs and outputs, and using the models in a logically sequenced diagnostic workflow with decision points leading to next steps, is part of the abstract idea itself, and the fact that such predictive models are specifically machine learning models is evaluated as an additional element. In the instant case, the “machine learning” aspect of the models is recited at a high level of generality in the claims, and the specification further confirms this generality in at least paras. [0052], [0062], & [0069], where the machine learning models are described as utilizing preexisting algorithms such as survival tree algorithms like LongCART or SurvCART.
The “machine learning” aspect of these models amounts to instructions to “apply” the exception using high-level machine learning computing components as a tool with which to digitize and/or automate the otherwise-abstract result generation steps of the claims, and thus does not provide a technical improvement. See Example 47, claim 2 for exemplary analysis regarding how using a high-level trained machine learning model to provide a functionally-claimed result can be considered to amount to instructions to “apply” the exception under Step 2A – Prong 2.

On pages 14-15 Applicant cites to several PTAB reversals, and argues that the outstanding rejection “fails to acknowledge the specific and unique training of the dual machine learning models, as well as the architecture and logic of the claimed methods and systems as a whole, including bifurcated model application, conditional logic based on graft dysfunction, and tailored data inputs and outputs,” which “reflect a particular solution to the problem of inaccuracy in transplant survival prediction.” Applicant’s arguments are fully considered, but are not persuasive. Examiner first notes that the cited PTAB reversals are non-precedential, and that every case must be evaluated on its own merits in light of the current statutory guidance reflected in the MPEP. Further, while selecting certain data types for inclusion in training datasets for machine learning models may improve the accuracy of such models as Applicant explains in the specification, such operations do not reflect a technical improvement to a technical problem. Selecting appropriate data for mathematically modeling a prognostic outcome is a problem in the clinical business practice of organ transplant recipient/donor matching and outcome prediction, and utilizing a diagnostic workflow with multiple clinical decision points where prognostic predictions are made is also part of normal clinical decision-making procedures.
Because the underlying functions of training/fitting and using predictive models to make clinical outcome predictions are part of the abstract idea itself, they do not provide integration into a practical application and thus do not confer eligibility (see MPEP 2106.05(a): “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” See also 2106.05(a)(II): “it is important to keep in mind that an improvement in the abstract idea itself… is not an improvement in technology”). In the instant case, the only additional elements recited in the claims are a digital memory, a communication network, a processor, and specifying that the models are machine learning models. As explained in more detail below, these high-level computing components merely serve as instructions to “apply” the exception in a computing environment such that the otherwise-abstract data collection, model training/fitting, model execution, and outcome prediction processes are digitized and/or automated.

On pages 15-17 Applicant argues that the claims are similar to those found eligible in Desjardins by “integrat[ing] multiple, specifically-trained machine learning models into practical implementations that improve transplant survival predictions and offer more accurate, objective, and meaningful ways to make donor-recipient organ match decisions, all of which are important concepts in the specific technical field of organ transplant management.” Applicant further points to technological features “resulting in improved accuracy and relevance of predictions,” as well as to the pre-processing step that allows use of a backup machine learning model in situations when “time is of the essence.” Applicant’s arguments are fully considered, but are not persuasive. Examiner respectfully disagrees that organ transplant management is a technical field as Applicant asserts.
This field conventionally relies upon human decision-making to match donors and recipients and make prognostic determinations about predicted outcomes, and merely utilizing high-level computing components and machine learning models to digitize/automate this otherwise-abstract business practice does not provide a technical improvement to a technical field. As indicated above, though the abstract idea itself may be improved (e.g. via more accurate predictions, or via use of a backup prediction model when some data is missing), no technical improvements are being made to technical fields such as the functioning of a computer, machine learning model architectures or training procedures, etc., in contrast to Desjardins, which was found to address the specific technical problem of ‘catastrophic forgetting’ in machine learning model training. Instead, the instant claims appear to merely apply high-level computing elements and machine learning techniques to the abstract field of organ transplant management in an effort to improve existing business workflows and prognostic prediction methods, which does not provide integration into a practical application or ‘significantly more’ than the abstract idea itself. For the reasons outlined above, the 35 USC 101 rejections are upheld for claims 1-7 and 10-15.

Rejection Under 35 USC 103

On pages 17-18 Applicant asserts that subject matter from original claim 10 found to be free from prior art has been incorporated into independent claims 1 and 15, such that they should also be found to be free from prior art. Applicant’s arguments are fully considered, but are not persuasive. As outlined in the claim interpretation section below, the filtering steps found to be non-obvious in combination with the other limitations of original claim 10 are not positively recited as part of claims 1 and 15, such that these non-obvious features are not required within the BRI of the scope of each claim.
Accordingly, prior art rejections have been supplied for these claims below.

Claim Interpretation

Claim 1

Claim 1 positively recites the following steps relating to models as part of its method: accessing and loading from digital memory a trained pre-operative organ transplant machine learning model relating to a given type of organ for which a donor is currently available; pre-processing the current donor dataset to determine whether the plurality of donor-specific factors is missing data for at least one factor present in the first set of donor factors, and if so automatically loading into memory a backup pre-operative organ transplant machine learning model; processing the current donor dataset and the pre-operative recipient dataset via the backup pre-operative organ transplant machine learning model; generating a pre-operative result from the trained pre-operative organ transplant machine learning model, the pre-operative result comprising a prediction of survival for the given patient at intervals of time post-transplant if the given patient were to receive the given type of organ from the currently-available donor and further comprising an identification of significant factors that influence the patient’s survival prediction; only if the patient does not exhibit graft failure, processing the pre-operative recipient dataset and the post-operative recipient dataset via a post-operative organ transplant machine learning model; and generating a post-operative result from the post-operative organ transplant machine learning model comprising a post-operative survival probability for the patient.
The claim further describes how the three models have been previously trained prior to their accessing and execution as part of the method by utilizing “wherein” clauses (or equivalent): wherein the trained pre-operative organ transplant machine learning model was trained using: a filtered recipient dataset comprising a first training dataset relating to a plurality of organ recipients of the given type of organ which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a donor training dataset comprising data for a first set of donor factors relating to a plurality of organ donors corresponding to the plurality of organ recipients; automatically loading into memory a backup pre-operative organ transplant machine learning model which was trained on a filtered donor training dataset comprising data for a second set of donor factors, corresponding to the plurality of donor-specific factors, relating to the plurality of organ donors corresponding to the plurality of organ recipients; wherein the post-operative organ transplant machine learning model was trained using: a second filtered training dataset comprising the first training dataset relating to the plurality of organ recipients which was filtered to exclude data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction but not filtered to exclude post-operative factors; and the donor training dataset. “Wherein” clauses and their equivalents are discussed in MPEP 2111.04: “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. 
However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) ‘adapted to’ or ‘adapted for’ clauses; (B) ‘wherein’ clauses; and (C) ‘whereby’ clauses.” In the instant case, the “wherein” clauses indicated above that describe how the three models have previously been trained prior to their accessing and execution by the positively recited steps do not limit the claim to a particular structure, nor do they indicate that such training steps are positively performed within the scope of the claim. Accordingly, the broadest reasonable interpretation of the scope of claim 1 will be considered to include (1) accessing and executing a pre-operative organ transplant machine learning model that has been trained in any manner to perform the positively recited function of generating a pre-operative result comprising a prediction of survival for a given patient at intervals of time post-transplant if the given patient were to receive a given type of organ from a currently-available donor and further comprising an identification of significant factors that influence the patient’s survival prediction; (2) automatically loading into memory a backup pre-operative organ transplant machine learning model that has been trained in any manner to process a current donor dataset and a pre-operative recipient dataset; and (3) processing a pre-operative recipient dataset and a post-operative recipient dataset via a post-operative organ transplant machine learning model that has been trained in any manner to generate a post-operative result comprising a post-operative survival probability for the patient. 
Claim 1 further includes the contingent limitation “pre-processing the current donor dataset to determine whether the plurality of donor-specific factors is missing data for at least one factor present in the first set of donor factors, and if so automatically loading into memory a backup pre-operative organ transplant machine learning model which was trained on a filtered donor training dataset comprising data for a second set of donor factors, corresponding to the plurality of donor-specific factors, relating to the plurality of organ donors corresponding to the plurality of organ recipients.” Per MPEP 2111.04(II): “The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) are not met.” Accordingly, the BRI of claim 1 does not require “automatically loading into memory a backup pre-operative organ transplant machine learning model which was trained on a filtered donor training dataset comprising data for a second set of donor factors, corresponding to the plurality of donor-specific factors, relating to the plurality of organ donors corresponding to the plurality of organ recipients.” The limitation “processing the current donor dataset and the pre-operative recipient dataset via the backup pre-operative organ transplant machine learning model” appears to further limit this contingent limitation because it recites use of the backup pre-operative organ transplant machine learning model, such that this limitation is also considered not to be required under the BRI of claim 1. 
Claim 1 also includes the contingent limitation “only if the patient does not exhibit graft failure, processing the pre-operative recipient dataset and the post-operative recipient dataset via a post-operative organ transplant machine learning model.” Per MPEP 2111.04(II): “The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) are not met.” Accordingly, the BRI of claim 1 does not require “processing the pre-operative recipient dataset and the post-operative recipient dataset via a post-operative organ transplant machine learning model.” The limitation “generating a post-operative result from the post-operative organ transplant machine learning model comprising a post-operative survival probability for the patient” appears to further limit this contingent limitation because it recites use of the post-operative organ transplant machine learning model, such that this limitation is also considered not to be required under the BRI of claim 1.

Claim 6

Claim 6 further limits the backup pre-operative organ transplant machine learning model, which further describes an aspect of a contingent limitation that is not required under the BRI of parent claim 1. Accordingly, the BRI of the claims does not require the subject matter of claim 6.

Claim 7

Claim 7 further limits the post-operative organ transplant machine learning model, which further describes an aspect of a contingent limitation that is not required under the BRI of parent claim 1. Accordingly, the BRI of the claims does not require the subject matter of claim 7.
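The two contingent limitations discussed above (the missing-data fallback to a backup model, and the graft-failure gate on the post-operative model) can be illustrated with a short sketch. This is hypothetical Python for illustration only; the function and field names (e.g. predict_survival, graft_failure) are assumptions and do not come from the claims:

```python
# Illustrative sketch of the claimed conditional workflow. Under the BRI
# analysis above, the two guarded branches are the contingent limitations:
# a method that never takes either branch still falls within the claim.

def predict_survival(donor, recipient, primary_model, backup_model,
                     post_op_model, required_factors):
    """Hypothetical mirror of the claim's two decision points."""
    # Contingent limitation 1: if any required donor factor is missing,
    # fall back to the backup pre-operative model.
    missing = [f for f in required_factors if donor.get(f) is None]
    model = backup_model if missing else primary_model
    pre_op_result = model(donor, recipient)

    # Contingent limitation 2: the post-operative model is applied
    # "only if the patient does not exhibit graft failure."
    post_op_result = None
    if not recipient.get("graft_failure", False):
        post_op_result = post_op_model(recipient)
    return pre_op_result, post_op_result
```

In this sketch, an execution where the donor data is complete and the patient exhibits graft failure performs neither contingent step, which is the scenario the BRI reasoning turns on.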
Claim 15

Claim 15 recites a processor that is caused to positively perform the following functions relating to models: obtain a trained pre-operative organ transplant machine learning model; apply the donor dataset and the pre-operative recipient dataset to the trained pre-operative organ transplant machine learning model; provide a result for the patient based on the trained organ transplant machine learning model; and apply the pre-operative recipient dataset and the post-operative recipient dataset to a post-operative organ transplant machine learning model. The claim further describes how the two models have been previously trained prior to their accessing and execution by the processor by utilizing “wherein” clauses: wherein the pre-operative organ transplant machine learning model was trained using: a first filtered recipient dataset comprising a first training dataset relating to a plurality of organ recipients of the given type of organ which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a second training dataset relating to a plurality of organ donors corresponding to the plurality of recipients; wherein the post-operative organ transplant machine learning model was trained using: a second training dataset relating to a plurality of organ donors corresponding to the plurality of organ recipients; and a second filtered training dataset comprising the first training dataset relating to the plurality of organ recipients which was filtered to exclude data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction but not filtered to exclude post-operative factors.
“Wherein” clauses and their equivalents are discussed in MPEP 2111.04: “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) ‘adapted to’ or ‘adapted for’ clauses; (B) ‘wherein’ clauses; and (C) ‘whereby’ clauses.” In the instant case, the “wherein” clauses indicated above that describe how the two models have previously been trained prior to their accessing and execution by the positively recited functions of the system do not limit the claim to a particular structure, nor do they indicate that such training steps are positively performed within the scope of the claim. Accordingly, the broadest reasonable interpretation of the scope of claim 15 will be considered to include (1) obtaining a pre-operative organ transplant machine learning model that has been trained in any manner to perform the positively recited function of being applied to the donor dataset and the pre-operative recipient dataset to provide a result for the patient; and (2) applying the pre-operative recipient dataset and the post-operative recipient dataset to a post-operative organ transplant machine learning model that has been trained in any manner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 16 recites “electronically transmitting the result via a communication network to an organ donor matching system.” However, parent claim 1 introduces (1) a pre-operative result and (2) a post-operative result. It is therefore unclear which of these two results is intended to be transmitted in claim 16, rendering the claim indefinite. For purposes of examination, Examiner will interpret “the result” of claim 16 as referencing either of the pre-operative and post-operative results of claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 and 10-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 1-7, 14, and 16 are directed to methods (i.e. processes) and claim 15 is directed to a system (i.e. a machine). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claims 1, 10, and 15 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people.
Specifically, claim 1 recites: A method for predicting recipient survival after an organ transplant, the method comprising: accessing and loading from digital memory a trained pre-operative organ transplant machine learning model relating to a given type of organ for which a donor is currently available, wherein the trained pre-operative organ transplant machine learning model was trained using: a filtered recipient dataset comprising a first training dataset relating to a plurality of organ recipients of the given type of organ which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a donor training dataset comprising data for a first set of donor factors relating to a plurality of organ donors corresponding to the plurality of organ recipients; receiving via a communication network a current donor dataset having data for a plurality of donor-specific factors relating to a currently-available donor; pre-processing the current donor dataset to determine whether the plurality of donor-specific factors is missing data for at least one factor present in the first set of donor factors, and if so automatically loading into memory a backup pre-operative organ transplant machine learning model which was trained on a filtered donor training dataset comprising data for a second set of donor factors, corresponding to the plurality of donor-specific factors, relating to the plurality of organ donors corresponding to the plurality of organ recipients; receiving a pre-operative recipient dataset corresponding to a plurality of pre-operative recipient factors relating to a given patient; processing the current donor dataset and the pre-operative recipient dataset via the backup pre-operative organ transplant machine learning model; generating a pre-operative result from the trained pre-operative organ transplant machine learning model, the pre-operative result comprising 
a prediction of survival for the given patient at intervals of time post-transplant if the given patient were to receive the given type of organ from the currently-available donor and further comprising an identification of significant factors that influence the patient’s survival prediction; receiving a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; determining if the patient exhibits early graft failure; only if the patient does not exhibit graft failure, processing the pre-operative recipient dataset and the post-operative recipient dataset via a post-operative organ transplant machine learning model, wherein the post-operative organ transplant machine learning model was trained using: a second filtered training dataset comprising the first training dataset relating to the plurality of organ recipients which was filtered to exclude data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction but not filtered to exclude post-operative factors; and the donor training dataset; and generating a post-operative result from the post-operative organ transplant machine learning model comprising a post-operative survival probability for the patient.

But for the recitation of generic computing components like digital memory, a communication network, and machine learning, the italicized functions, when considered as a whole, describe an organ transplant donor-recipient matching and medical outcome prediction operation that could be achieved by a human actor such as a clinician or other medical professional managing their personal behavior and/or interactions with others. For example, a clinician could obtain a fitted/trained prediction model (e.g.
one that has been previously trained via filtered datasets) along with organ donor characteristics and pre-operative characteristics of an organ recipient (e.g. by accessing patient records, communicating with the donor, recipient, and/or colleagues, etc.), and use the organ donor and recipient data as inputs to the model to obtain a predicted survival of the recipient at multiple time intervals post-transplant, along with identifying the most influential factors driving the prediction. A clinician could also preprocess the donor dataset to determine if any inputs are missing, and select a different analysis model as a backup that corresponds to the combination of donor features that are actually present in the dataset for use in the survival prediction operation. Following survival prediction with either model and actual performance of a transplant procedure, the clinician could then obtain post-operative data about the organ recipient and use their medical expertise to determine if the recipient exhibits any indications of early graft failure. Finally, the clinician could use the recipient’s pre- and post-operative characteristics as inputs to another predictive model (e.g. one that has been previously trained via filtered datasets) to determine a survival probability of the patient and/or other desired patient outcomes. Accordingly, claim 1 recites an abstract idea in the form of a certain method of organizing human activity. 
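The fallback behavior paraphrased above — checking a new donor record for missing factors and, if any are absent, switching to a backup model trained on the reduced factor set — can be illustrated with a short sketch. All names, factor lists, and probability values below are hypothetical placeholders, not drawn from the application or the cited references.

```python
# Hypothetical sketch of the missing-data fallback recited in claim 1:
# if the incoming donor record lacks any factor the primary model expects,
# a backup model trained on the factors actually present is used instead.

PRIMARY_FACTORS = ("donor_age", "donor_blood_type", "donor_bmi")

def primary_model(donor, recipient):
    """Stand-in for the primary trained pre-operative model."""
    return 0.80

def backup_model(donor, recipient):
    """Stand-in for a model trained on the reduced factor set."""
    return 0.75

def preoperative_prediction(donor, recipient):
    """Select the primary or backup model based on donor-data completeness."""
    missing = [f for f in PRIMARY_FACTORS if donor.get(f) is None]
    model = backup_model if missing else primary_model
    return model(donor, recipient), missing

complete = {"donor_age": 41, "donor_blood_type": "O", "donor_bmi": 24.1}
partial = {"donor_age": 41, "donor_blood_type": "O", "donor_bmi": None}

print(preoperative_prediction(complete, {}))  # primary model is selected
print(preoperative_prediction(partial, {}))   # backup model is selected
```

As the examiner's hypothetical suggests, nothing in this control flow is inherently computational: the completeness check and model selection could be carried out mentally or on paper.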
Independent claim 10 recites: A method for organ transplant prediction model training, comprising: receiving a first training dataset relating to a plurality of organ recipients, the first training dataset comprising pre-operative and post-operative factors; receiving a second training dataset relating to a plurality of organ donors, the plurality of organ donors corresponding to the plurality of organ recipients; filtering the first training dataset to remove post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction to generate a recipient pre-operative training dataset; training a pre-operative organ transplant machine learning model based on the recipient pre-operative training dataset and the second training dataset; filtering the first training dataset to remove inapplicable recipient treatments, and recipients with graft dysfunction to generate a recipient post-operative training dataset; and training a post-operative organ transplant machine learning model based on the recipient post-operative training dataset and the second training dataset, the post-operative machine learning model corresponding to the pre-operative machine learning model. But for the recitation of generic computing components like machine learning, the italicized functions, when considered as a whole, describe a medical outcome predictive model fitting operation that could be achieved by a human actor such as a clinician or other medical professional managing their personal behavior and/or interactions with others. For example, a clinician could obtain pre- and post-operative data for an organ transplant recipient and organ donor data (e.g. by communicating with the patient/donor, accessing patient/donor records, etc.), filter the datasets to remove various types of data desired for exclusion, and fit/train at least two corresponding predictive models using the filtered datasets. 
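The filter-then-train sequence of claim 10 — one filtering pass that strips post-operative factors for the pre-operative model, and a second pass that keeps them for the post-operative model — can be sketched as follows. The field names and sample records are illustrative assumptions only.

```python
# Hypothetical sketch of the claim 10 training-data preparation: the same
# recipient dataset is filtered two ways, yielding a pre-operative training
# set (post-operative fields removed) and a post-operative training set
# (post-operative fields retained). In both passes, records for
# inapplicable treatments or graft dysfunction are excluded.

POST_OP_FIELDS = {"icu_days", "post_op_creatinine"}  # illustrative fields

def filter_records(records, drop_post_op_fields):
    """Drop excluded records; optionally strip post-operative fields."""
    kept = [r for r in records
            if not r["inapplicable_treatment"] and not r["graft_dysfunction"]]
    if drop_post_op_fields:
        kept = [{k: v for k, v in r.items() if k not in POST_OP_FIELDS}
                for r in kept]
    return kept

recipients = [
    {"age": 54, "icu_days": 3, "post_op_creatinine": 1.2,
     "inapplicable_treatment": False, "graft_dysfunction": False},
    {"age": 61, "icu_days": 9, "post_op_creatinine": 2.8,
     "inapplicable_treatment": False, "graft_dysfunction": True},
]

pre_op_train = filter_records(recipients, drop_post_op_fields=True)
post_op_train = filter_records(recipients, drop_post_op_fields=False)

# The graft-dysfunction record is excluded from both training sets, and
# post-operative fields survive only in the post-operative training set.
print(len(pre_op_train), len(post_op_train))
print(sorted(pre_op_train[0]))
```

Each filtered dataset would then be paired with the donor training dataset to fit its respective model; the model-fitting step itself is not shown.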
Accordingly, claim 10 also recites an abstract idea in the form of a certain method of organizing human activity. Independent claim 15 recites: A system for recipient survival after organ transplant prediction, comprising: a memory; a processor communicatively coupled to the memory; wherein the memory stores a set of instructions which, when executed by the processor, cause the processor to: obtain a trained pre-operative organ transplant machine learning model, wherein the pre-operative organ transplant machine learning model was trained using: a first filtered dataset comprising a first training dataset relating to a plurality of organ recipients which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a second training dataset relating to a plurality of organ donors corresponding to the plurality of organ recipients; receive a donor dataset corresponding to a plurality of factors relating to a given donor; receive a pre-operative recipient dataset corresponding to a plurality of pre-operative recipient factors for a given patient; apply the donor dataset and the pre-operative recipient dataset to the trained pre-operative organ transplant machine learning model; provide a result for the patient based on the trained organ transplant machine learning model; receive a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; determine if the patient exhibits graft dysfunction; apply the pre-operative recipient dataset and the post-operative recipient dataset to a post-operative organ transplant machine learning model, wherein the post-operative organ transplant machine learning model was trained using: a second training dataset relating to a plurality of organ donors 
corresponding to the plurality of organ recipients; a second filtered training dataset comprising the first training dataset relating to the plurality of organ recipients which was filtered to exclude data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction but not filtered to exclude post-operative factors; and determine a survival probability for the patient. But for the recitation of generic computing components like a memory, a processor, and machine learning, the italicized functions, when considered as a whole, describe an organ transplant donor-recipient matching and medical outcome prediction operation that could be achieved by a human actor such as a clinician or other medical professional managing their personal behavior and/or interactions with others. For example, a clinician could obtain a fitted/trained prediction model (e.g. one that has been previously trained via filtered datasets) along with organ donor characteristics and pre-operative characteristics of an organ recipient (e.g. by accessing patient records, communicating with the donor, recipient, and/or colleagues, etc.), and use the organ donor and recipient data as inputs to the model to obtain a predicted result (e.g. a probability of survival, graft rejection, overall prognosis, etc.). The clinician could then obtain post-operative data about the organ recipient after a transplant procedure has occurred and use their medical expertise to determine if the recipient exhibits any indications of early graft failure. Finally, the clinician could use the recipient’s pre- and post-operative characteristics as inputs to another predictive model (e.g. one that has been previously trained via filtered datasets), and determine a survival probability of the patient and/or other desired patient outcomes. Accordingly, claim 15 recites an abstract idea in the form of a certain method of organizing human activity. 
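The two-stage workflow described in the claims — a pre-operative prediction, followed by collection of post-operative data, a graft-failure determination, and (per the gate recited in claim 1) application of a post-operative model only when no early graft failure is observed — can be sketched end to end. The model internals are stubbed out, and all names and values are hypothetical.

```python
# Hypothetical sketch of the claimed two-model sequence. A pre-operative
# model runs first; after the transplant, the post-operative model runs
# only if the patient does not exhibit early graft failure.

def pre_op_model(donor, recipient):
    """Stand-in pre-operative survival probability."""
    return 0.82

def post_op_model(recipient, post_op):
    """Stand-in post-operative survival probability."""
    return 0.90

def transplant_workflow(donor, recipient, post_op):
    pre_result = pre_op_model(donor, recipient)
    if post_op["early_graft_failure"]:
        return pre_result, None  # post-operative model is not applied
    return pre_result, post_op_model(recipient, post_op)

print(transplant_workflow({}, {}, {"early_graft_failure": False}))
print(transplant_workflow({}, {}, {"early_graft_failure": True}))
```

The sketch mirrors the examiner's hypothetical: each step (gathering data, applying a fitted model, making the graft-failure determination) maps to an action a clinician could perform without the recited computing components.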
Dependent claims 2-7, 11-14, and 16 inherit the limitations that recite an abstract idea from their dependence on claim 1 or 10, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. In addition, claims 2-5, 11-12, and 16 recite additional limitations that further describe the abstract idea identified in the independent claims. Specifically, claims 2-3, 5, and 11-12 specify particular types of donor and recipient data, each of which is a type of data that a clinician would be capable of accessing and evaluating. Claim 4 recites transmitting survival probability information highlighting one or more significant factor types to a physician, which a clinician could achieve by communicating survival probability and significant factors in the analysis to a colleague (e.g. by speaking with them, providing a written report, or some other means of data sharing). Claim 16 similarly recites transmitting the result to an organ donor matching system, which a clinician could achieve by communicating the pre- and/or post-operative prediction result(s) to an organ transplant matching organization (e.g. verbally, via written communication, etc.). However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea. Step 2A – Prong 2 The judicial exception is not integrated into a practical application. In particular, independent claims 1, 10, and 15 do not include additional elements that integrate the abstract idea into a practical application. 
Claim 1 includes the additional elements of a digital memory and a communication network, while claim 15 includes the additional elements of a memory and a processor communicatively coupled to the memory, wherein the memory stores a set of instructions which, when executed by the processor, cause the processor to perform the functions of the invention. Claims 1, 10, and 15 also specify that the various models are machine learning models. These additional elements, when considered in the context of each claim as a whole, merely serve to automate and/or digitize interactions that could otherwise occur by and among human actors (as described above), and thus amount to instructions to “apply” the abstract idea using generic computer components (see MPEP 2106.05(f)). For example, a clinician can communicate with other human entities like organ donors, organ recipients, and colleagues to receive various types of data, train/fit predictive models, analyze the data using the fitted/trained predictive models, and make determinations and predictions about a patient’s likelihood of developing graft failure or probability of long-term survival after an organ transplant procedure. The use of a memory, a communication network, and a processor, along with the specification that the models are high-level machine learning models, merely digitizes and/or automates these otherwise-abstract functions such that they occur digitally or in an automated fashion (i.e. merely using computers and high-level computing components as tools with which to implement an otherwise-abstract idea), and thus does not provide integration into a practical application. Accordingly, claims 1, 10, and 15 as a whole are each directed to an abstract idea without integration into a practical application. The judicial exception recited in dependent claims 2-7, 11-14, and 16 is also not integrated into a practical application under a similar analysis as above. 
The functions of claims 2-5 and 11-12 are performed with the same additional elements introduced in the independent claims, without introducing any new additional elements of their own, and accordingly also amount to mere instructions to apply the abstract idea on these same additional elements. Claims 6-7 and 13-14 specify that the pre- and post-operative organ transplant machine learning models utilize a survival tree algorithm, which again merely utilizes a high-level type of machine learning algorithm as a tool with which to automate and/or digitize the otherwise-abstract aspects of the predictive models such that this element also amounts to instructions to “apply” the exception. Claim 16 recites electronically transmitting the result via a communication network to an organ donor matching system, which again merely utilizes the high-level computing component of a communication network to digitize the otherwise-abstract function of transmitting data between entities such that it occurs electronically, amounting to the words “apply it.” Accordingly, the additional elements of claims 1-7 and 10-16 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-7 and 10-16 are directed to an abstract idea. Step 2B The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a digital memory, a communication network, a processor, and specifically machine learning models for performing the accessing, loading, receiving, pre-processing, determining, processing, generating, filtering, training, obtaining, applying, providing, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components. 
As evidence of the generic nature of the above recited additional elements, Examiner notes paras. [0024]-[0026] of Applicant’s specification, where the processor, memory, and communication network are disclosed at a high level in terms of known “suitable” hardware, firmware, and/or software components, as well as paras. [0052], [0062], & [0069] where the machine learning models are described as utilizing preexisting algorithms such as survival tree algorithms like LongCART or SurvCART. These disclosures do not indicate that the elements of the invention are particular machines, and instead provide generic examples of computer hardware and machine learning algorithms, such that one of ordinary skill in the art would understand that any generic computing device and machine learning / survival tree algorithm could be used to implement the invention. Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the computer implementation and machine learning models in combination is to digitize and/or automate organ transplant donor-recipient matching, medical outcome prediction, and medical outcome predictive model fitting operations that could otherwise be achieved as certain methods of organizing human activity. Thus, when considered as a whole and in combination, claims 1-7 and 10-16 are not patent eligible. Claim Rejections - 35 USC § 103 This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. 
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Dag et al. (Reference V on the PTO-892 mailed 6/3/2025) in view of Premaud et al. 
(US 20200170580 A1). Claim 15 Dag teaches a system for recipient survival after organ transplant prediction, comprising: a memory; a processor communicatively coupled to the memory; wherein the memory stores a set of instructions which, when executed by the processor (Dag section 4, noting the decision support tool is able to be installed on any computer with an operating system, i.e. a system with a memory and processor executing instructions stored in the memory), cause the processor to: obtain a trained pre-operative organ transplant machine learning model (Dag section 4, Fig. 6, noting a clinician decision support tool with an interface that notes “TAN is trained and ready for the test”; TAN represents a machine learning model trained to predict patient-specific survival risks indicated by likelihood of graft failure based on preoperative transplant variables as explained in sections 2.3 & 3.3), wherein the pre-operative organ transplant machine learning model was trained using: a first filtered dataset comprising a first training dataset relating to a plurality of organ recipients which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a second training dataset relating to a plurality of organ donors corresponding to the plurality of organ recipients; receive a donor dataset corresponding to a plurality of factors relating to a given donor (Dag Table 4, section 4, Fig. 6, noting a clinician decision support tool with an interface that allows a clinician to input data values relating to the predictor variables of the model, which can include donor data like donor blood type, donor age, donor ethnicity, donor gender, etc.); receive a pre-operative recipient dataset corresponding to a plurality of pre-operative recipient factors for a given patient (Dag Table 4, section 4, Fig. 
6, noting a clinician decision support tool with an interface that allows a clinician to input data values relating to the predictor variables of the model, which can include pre-operative recipient data like recipient blood type, recipient age, recipient ethnicity, recipient gender, etc.); apply the donor dataset and the pre-operative recipient dataset to the trained pre-operative organ transplant machine learning model; provide a result for the patient based on the trained organ transplant machine learning model (Dag sections 3.3-4, Fig. 6, noting the input predictor values are utilized by the trained model to calculate and output patient-specific survival risks indicated by likelihood of graft failure); determine a survival probability for the patient (Dag sections 3.3-4, Fig. 6, noting the clinical decision support tool determines and outputs a patient-specific survival chance). In summary, Dag teaches a computer system for training and using a pre-operative machine learning model to predict a patient’s survival probability based on pre-operative recipient and donor data. Dag further teaches receiving post-operative recipient data (see section 2.1, noting use of the UNOS dataset which includes preoperative, intra-operative, and post-operative factors for organ transplant patients), but appears to only use this type of data for model training purposes, and does not discuss receiving and evaluating post-operative data for the specific patient for whom a preoperative prediction was already performed as in section 4 and Fig. 6. Further, though Dag predicts graft failure within a nine year timeframe (which would thus capture failures within a shorter timeframe that could be considered “early” graft failures), it does not appear to determine if the patient actually does exhibit early graft failure. 
Accordingly, Dag fails to explicitly disclose receiving a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; determining if the patient exhibits early graft dysfunction; and applying the pre-operative recipient dataset and the post-operative recipient dataset to a post-operative organ transplant machine learning model that was trained. However, Premaud teaches an analogous computerized method of predicting organ transplant outcomes that includes receiving a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; determining if the patient exhibits early graft dysfunction; and applying the pre-operative recipient dataset and the post-operative recipient dataset to a trained post-operative organ transplant machine learning model (Premaud [0028], [0150], [0177], noting a machine learning model is trained and used to predict risk of organ graft failure based on input predictors including both pre-operative factors and a plurality of post-operative factors measured at or during the first year post-transplant and/or updated over time past the first year; the inputs include a variable related to first acute rejection, considered equivalent to determining whether the patient exhibits early graft failure). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the transplant survival prediction method of Dag to further include receipt and evaluation of post-operative recipient data (including whether the patient exhibits early/acute graft failure) with a trained post-operative machine learning model as in Premaud in order to account for the dynamic onset of adverse events over time (including post-operative events) that modify graft outcomes (as suggested by Premaud abstract & [0029]), thereby improving the predictive accuracy of the system. Claims 1, 3-4, 6-7, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Dag in view of Premaud, Krishnan et al. (US 20060184475 A1), and Lin et al. (Reference U on the accompanying PTO-892). Claim 1 Dag teaches a method for predicting recipient survival after an organ transplant (Dag title, abstract), the method comprising: accessing and loading from digital memory a trained pre-operative organ transplant machine learning model relating to a given type of organ for which a donor is currently available (Dag section 4, Fig. 6, noting a clinician decision support tool with an interface that notes “TAN is trained and ready for the test”; TAN represents a machine learning model that can be utilized prior to a graft transplant (i.e. when a donor is currently available) and is trained to predict patient-specific survival risks indicated by likelihood of graft failure of a specific organ type (e.g. heart) based on preoperative transplant variables as explained in sections 2.3 & 3.3. See also section 4, noting the decision support tool is able to be installed on any computer with an operating system, i.e. 
a system with a digital memory, such that use of the TAN model indicates it has been accessed and loaded into the digital memory for execution), wherein the trained pre-operative organ transplant machine learning model was trained using: a filtered recipient dataset comprising a first training dataset relating to a plurality of organ recipients of the given type of organ which was filtered to exclude post-operative factors, data records for inapplicable recipient treatments, and data records for recipients with graft dysfunction; and a donor training dataset comprising data for a first set of donor factors relating to a plurality of organ donors corresponding to the plurality of organ recipients; receiving via a communication network a current donor dataset having data for a plurality of donor-specific factors relating to a currently-available donor (Dag Table 4, section 4, Fig. 6, noting a clinician decision support tool with an interface that allows a clinician to input data values relating to the predictor variables of the model, which can include donor data like donor blood type, donor age, donor ethnicity, donor gender, etc.; use of an interface to input data is considered equivalent to the processing components of a computing system receiving the data via a communication network from the GUI); receiving a pre-operative recipient dataset corresponding to a plurality of pre-operative recipient factors relating to a given patient (Dag Table 4, section 4, Fig. 6, noting a clinician decision support tool with an interface that allows a clinician to input data values relating to the predictor variables of the model, which can include pre-operative recipient data like recipient blood type, recipient age, recipient ethnicity, recipient gender, etc.); processing the current donor dataset and the pre-operative recipient dataset (Dag sections 3.3-4, Fig. 
6, noting the input predictor values are utilized by the trained model to calculate and output patient-specific survival risks indicated by likelihood of graft failure within a nine-year timeframe; as noted in section 3.3, the model identifies the factors that most contribute to or influence the predicted outcome); generating a pre-operative result (Dag sections 3.3-4, Fig. 6, noting the clinical decision support tool determines and outputs a patient-specific survival chance). In summary, Dag teaches a method for training and using a pre-operative machine learning model to predict a patient’s survival probability based on pre-operative recipient and donor data. Dag further teaches pre-processing the training data to determine if data is missing (see section 2.1 on Pg 3) and prediction of survival within a single 9-year timeframe, while noting that there is extensive research addressing how to accurately predict post-transplantation survival at any given time period (see section 1 on Pg 1), but fails to explicitly disclose pre-processing the current donor dataset to determine whether the plurality of donor-specific factors is missing data for at least one factor present in the first set of donor factors, and if so automatically loading into memory a trained backup pre-operative organ transplant machine learning model; processing the current donor dataset and the pre-operative recipient dataset with the backup model; and the pre-operative result comprising a prediction of survival for the given patient at multiple intervals of time post-transplant. 
Dag further teaches receiving post-operative recipient data (see section 2.1, noting use of the UNOS dataset which includes preoperative, intra-operative, and post-operative factors for organ transplant patients), but appears to only use this type of data for model training purposes, and does not discuss receiving and evaluating post-operative data for the specific patient for whom a preoperative prediction was already performed as in section 4 and Fig. 6. Further, though Dag predicts graft failure within a nine year timeframe (which would thus capture failures within a shorter timeframe that could be considered “early” graft failures), it does not appear to determine if the patient actually does exhibit early graft failure. Accordingly, Dag fails to explicitly disclose receiving a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; determining if the patient exhibits early graft failure; only if the patient does not exhibit graft failure, processing the pre-operative recipient dataset and the post-operative recipient dataset via a trained post-operative organ transplant machine learning model; and generating a post-operative result from the post-operative organ transplant machine learning model. However, Krishnan teaches an analogous model-based clinical analysis method that includes identifying missing data in an input dataset, selecting an appropriate model that has been trained on data corresponding to the mix of data types present in the input dataset, and processing the data with the selected model (Krishnan abstract, [0018]-[0021], [0024], [0062]). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the predictive model of Dag such that additional predictive models trained on different subsets of input features are provided and may be selected for use based on determining missing variables in a new input request as in Krishnan in order to improve handling of missing data and permit use of the predictive system in cases when a new patient does not have all of the features of a most robust model (as suggested by Krishnan [0002] & [0018]-[0021]). Lin teaches using machine learning models to predict patient survival at multiple timepoints (Lin abstract, introduction). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the single time point prediction method of Dag to include the capability to predict survival at multiple intervals as in Lin in order to broaden the predictive power of the method to multiple timescales (as suggested by Lin abstract & introduction). Premaud teaches an analogous method of predicting organ transplant outcomes that includes receiving a post-operative recipient dataset corresponding to a plurality of post-operative factors, the plurality of post-operative factors relating to a transplantation operation of the patient; and determining if the patient exhibits early graft failure (Premaud [0028], [0150], [0177], noting a machine learning model is trained and used to predict risk of organ graft failure based on input predictors including both pre-operative factors and a plurality of post-operative factors measured at or during the first year post-transplant and/or updated over time past the first year; the inputs include a variable related to first acute rejection, considered equivalent to determining whether the patient exhibits early graft failure). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the transplant survival prediction method of Dag to further include receipt and evaluation of post-operative recipient data (including whether the patient exhibits early/acute graft failure) as in Premaud in order to account for the dynamic onset of adverse events over time (including post-operative events) that modify graft outcomes (as suggested by Premaud abstract & [0029]), thereby improving the predictive accuracy of the method. Claim 3 Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches wherein the plurality of pre-operative recipient factors comprises: a recipient’s age (Dag Table 4, Fig. 6, noting recipient age as an input). The combination further contemplates that different findings and combinations of important variables may be found to be predictive of organ transplant outcomes and that extracting predictive features from multiple sources helps increase the performance of the prediction model (Dag section 5), suggesting that additional variable types may be incorporated as needed/indicated. However, the present combination fails to explicitly disclose that the plurality of pre-operative recipient factors comprises a recipient’s transplant history and a type of transplant procedure as required by the instant claim. Lin, however, further teaches that pre-operative transplant variables useful for predicting organ transplant outcomes can include a recipient’s transplant history (e.g. duration between date of current transplantation and failure date of the previous transplantation) and type of transplant procedure (e.g. living or deceased donor type, number of matched HLA antigens, etc.) (Lin section 3.4). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the recipient pre-operative factors of the combination to include additional factors like transplant history and transplant type as in Lin in order to incorporate additional variables useful in predicting organ transplant outcomes (as suggested by Lin section 3.4), thereby improving the predictive accuracy of the method (as suggested by Dag section 5).

Claim 4

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches transmitting survival probability information to a physician, the survival probability information highlighting one or more factors that significantly influenced a subgroup categorization of a proposed organ transplant for the patient and at least one of: factors that can be altered prior to transplant surgery; factors that cannot be altered prior to transplant surgery; and factors that cannot be altered (Dag sections 3.3-4, Fig. 6, noting the clinical decision support tool determines and outputs a patient-specific survival chance for a clinician to review, along with display of the model input variables selected as being of significant influence on the outcome. The most important variables that are selected as in section 3.1.1 and displayed at the clinical decision support tool as in Fig. 6 can include factors that can be altered prior to transplant surgery (e.g. total days on waiting list, recipient primary payment source, etc.), factors that cannot be altered prior to transplant surgery (e.g. blood type, ethnicity, etc.), and factors that cannot be altered (e.g. blood type, ethnicity). The outcomes of survival chance and binary yes/no for survival as depicted in Fig. 6 are considered to be a subgroup categorization because patients are categorized into a subgroup of “yes” or “no” for survival).
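The yes/no survival subgrouping mapped to claim 4 above is the kind of output a C&RT-style decision tree produces. The toy function below is a sketch only: both the features and the thresholds are hypothetical and are not the model of Dag or of any cited reference.

```python
# Toy sketch of a C&RT-style decision tree assigning a patient to a
# "yes"/"no" survival subgroup. Features and split thresholds are
# hypothetical, chosen only to illustrate the tree structure.

def survival_subgroup(recipient_age, days_on_waitlist):
    """Return a 'yes'/'no' survival subgroup from two sequential splits."""
    if recipient_age < 60:           # first split: recipient age
        return "yes"
    # second split, reached only by older recipients: waiting-list time
    return "yes" if days_on_waitlist < 365 else "no"
```

Each leaf of such a tree is a subgroup, which is why the examiner treats the binary yes/no survival output of Fig. 6 as a subgroup categorization.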
Claim 6

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches wherein the backup pre-operative organ transplant machine learning model utilizes a survival tree algorithm (Dag section 2.2.1.2, noting decision trees such as C&RT are used as a basis for constructing the pre-operative machine learning model for survival prediction, considered equivalent to utilizing a survival tree algorithm; when considered in the context of the combination with Krishnan, another C&RT model would be trained with different inputs corresponding to available features in an input dataset for use as a ‘backup’ model).

Claim 7

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches wherein the post-operative organ transplant machine learning model utilizes a survival tree algorithm (Premaud [0153]-[0155], [0171], noting a conditional survival tree is used to construct the post-operative machine learning model).

Claim 16

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches electronically transmitting the result via a communication network to an organ donor matching system (Dag Fig. 6, noting display of the resulting prediction at a GUI, considered equivalent to transmitting the result from the processing hardware of the computing system over a communication network to the GUI for display).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Dag, Krishnan, Lin, and Premaud as applied to claim 1 above, and further in view of Fischer et al. (Reference U on the PTO-892 mailed 6/3/2025).

Claim 2

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches wherein the plurality of donor-specific factors relating to the currently-available donor comprises: a donor’s age (Dag Table 4, Fig. 6, noting donor age as an input).
The combination further contemplates that different findings and combinations of important variables may be found to be predictive of organ transplant outcomes and that extracting predictive features from multiple sources helps increase the performance of the prediction model (Dag section 5), suggesting that additional variable types may be incorporated as needed/indicated.

However, the present combination fails to explicitly disclose that the plurality of factors relating to a given donor comprises a donor’s cytomegalovirus status and a donor’s pulmonary infection status as required by the instant claim. However, Fischer teaches that donor transplant variables that may impact organ transplant outcomes can include a donor’s CMV status and pulmonary infection status (Fischer Pgs S9-S10, noting donor screening of bacterial infections such as respiratory tract infections, mycobacterial infections like tuberculosis, and cytomegalovirus).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the donor factors of the combination to include additional factors like CMV and pulmonary infection status as in Fischer in order to incorporate additional variables that impact organ transplant outcomes (as suggested by Fischer Pgs S1, first paragraph & S9-10), thereby improving the predictive accuracy of the method (as suggested by Dag section 5).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Dag, Krishnan, Lin, and Premaud as applied to claim 1 above, and further in view of Ruoff et al. (US 20100198611 A1).
Claim 5

Dag in view of Krishnan, Lin, and Premaud teaches the method of claim 1, and the combination further teaches receipt of post-operative factors such as length of hospitalization after transplant (Dag section 2.1, first paragraph) as well as that variables predictive of organ transplant outcomes may differ for different patients over time and/or over different analyses (Dag section 5, Premaud [0116]), suggesting that additional variable types may be incorporated into the model(s) as needed/indicated.

However, the present combination fails to explicitly disclose that the plurality of post-operative recipient factors comprises a length of the patient’s stay, the length of the patient’s stay comprising an amount of days from transplant to discharge; a patient’s ventilator duration post-transplant; and a patient’s reintubation status post-transplant as required by the instant claim. However, Ruoff teaches that patient variables useful for predicting post-operative clinical risks/outcomes can include length of stay as well as ventilator/intubation status/duration (Ruoff [0036], [0038]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the recipient post-operative factors of the combination to include additional factors like length of stay and ventilator/intubation status/duration as in Ruoff in order to incorporate additional variables useful in predicting clinical risk outcomes (as suggested by Ruoff [0038]), thereby improving the predictive accuracy of the method (as suggested by Dag section 5).

Subject Matter Free from Prior Art

The following is a statement of reasons for the indication of subject matter free from prior art: An updated prior art search was completed, but no references were identified that expressly teach or suggest, either alone or in combination, each and every feature of independent claim 10.
In particular, the prior art fails to teach filtering the first training dataset to remove data records for recipients with graft dysfunction to generate both a recipient pre-operative training dataset and a recipient post-operative training dataset, in combination with all of the other limitations of the claim. See paras. 32-33 of the non-final rejection mailed 6/3/2025. Accordingly, the prior art, either alone or in combination, does not disclose or render obvious all the features of independent claim 10, which is therefore found to recite subject matter free from prior art, as are the claims depending therefrom.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAREN A HRANEK whose telephone number is (571)272-1679. The examiner can normally be reached M-F 8:00-4:00 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached at 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAREN A HRANEK/
Primary Examiner, Art Unit 3684

Prosecution Timeline

Jan 30, 2024
Application Filed
May 30, 2025
Non-Final Rejection — §101, §103, §112
Dec 03, 2025
Response Filed
Mar 12, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
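The headline figures in this panel follow directly from the examiner's career counts. A quick check (the rounding step is an assumption about how the tool derives its displayed percentages):

```python
# Grant probability from career allow rate: 62 grants / 172 resolved cases.
granted, resolved = 62, 172
allow_rate = granted / resolved * 100
print(round(allow_rate))         # -> 36, the 36% grant probability shown

# "With Interview" adds the reported +46.7 percentage-point lift.
print(round(allow_rate + 46.7))  # -> 83, the "With Interview" figure
```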
