Prosecution Insights
Last updated: April 19, 2026
Application No. 18/130,149

SYSTEMS AND METHODS FOR USING TREATMENT EFFECT MODELS FOR CARE MANAGEMENT INTERVENTIONS

Non-Final OA · §101 §103 §112
Filed: Apr 03, 2023
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aetna Inc.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% (resolved cases with interview)
Typical Timeline: 3y 7m avg prosecution; 49 currently pending
Career History: 221 total applications across all art units
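As a quick sanity check on how the panel's headline figures relate, the sketch below reconstructs them from the raw counts. It assumes the interview lift and TC-average deltas are expressed in additive percentage points (the panel does not state this explicitly):

```python
# Reconstructing the examiner panel's headline numbers from its raw counts.
granted, resolved = 62, 172

career_allow_rate = 100 * granted / resolved   # ~36.05 -> displayed as 36%
tc_average = career_allow_rate + 16.0          # "-16.0% vs TC avg" implies TC avg ~52%
with_interview = career_allow_rate + 46.7      # "+46.7% interview lift" -> ~82.7 -> displayed as 83%

print(round(career_allow_rate), round(tc_average), round(with_interview))  # 36 52 83
```

The displayed 83% "With Interview" figure is consistent with the lift being additive percentage points over the career allow rate rather than a relative increase.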

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 172 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/15/2026 has been entered.

Status of the Claims

The status of the claims as of the response filed 1/27/2025 is as follows: Claims 2, 7-8, 13, and 18-19 are cancelled, and all previously given rejections for these claims are considered moot. Claims 1, 12, and 20 are currently amended. Claims 3-5, 14-16, and 21-22 are as previously presented. Claims 6, 9-11, and 17 are original. Claims 23-26 are new. Claims 1, 3-6, 9-12, 14-17, and 20-26 are currently pending in the application and have been considered below.

Response to Amendment

Rejection Under 35 USC 101: The claims have been amended, but the 35 USC 101 rejections are upheld.

Rejection Under 35 USC 103: The amendments made to the claims introduce limitations that are not fully addressed in the previous Office action, and thus the corresponding 35 USC 103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.

Response to Arguments

Rejection Under 35 USC 112(a)

On pages 10-11 of the response filed 1/15/2026, Applicant argues that the phrase “above an accuracy threshold” in claims 3 and 14 is sufficiently supported by the specification because para. [0058] of the published application (US 20240331823 A1) “expressly discloses selecting parameters that are ‘best performing’ and ‘most accurate,’” which one of ordinary skill in the art would understand as “involv[ing] evaluating accuracy against a benchmark or criterion, e.g., determining that performance exceeds some accuracy threshold, even if the accuracy threshold is not explicitly named or specified.” Applicant’s arguments are fully considered, but are not persuasive. Examiner notes that determining which parameters are “best performing” or “most accurate” does not inherently require comparison to a threshold, as Applicant appears to imply; for example, the top x number of parameters may be selected as being the most accurate relative to the total number of considered parameters. Because the limitation reciting hyperparameters being above an accuracy threshold was not present in the original disclosure as filed, this limitation constitutes new matter and the 35 U.S.C. 112(a) rejections are maintained.

Rejection Under 35 USC 101

On pages 11-13, Applicant argues that including details from now-cancelled claims 7-8 in the independent claims, directed to determining a plurality of individuals for enrolling into care management interventions based on strategic stratification metrics for individuals that are calculated based on impactability metrics and additional metrics, shows that the independent claims “are not directed towards a method of organizing human activity especially given that the MPEP explicitly states that the organizing human activity ‘is not to be expanded beyond these enumerated sub-groupings except in rare circumstances.’” Applicant’s arguments are fully considered, but are not persuasive. Examiner notes that the example types of organizing human activity described in the MPEP are not exhaustive, and are provided to give guidance about the types of human activities that are considered abstract.
In the instant case, Examiner maintains that the recited steps for making determinations about which patients should be enrolled in care management interventions describe a certain method of organizing human activity, such as managing personal behavior and/or interactions with others. Examiner maintains that a human actor managing their personal behavior and/or interactions with others would be capable of calculating various metrics about patients in a population to stratify the patients and identify those with metrics above a threshold who would most likely benefit from enrollment into care management programs. Examiner notes that calculating and/or combining metrics in the manner claimed can also be considered to fit into the “mathematical concepts” grouping of abstract ideas because they reflect mathematical operations on patient data. Accordingly, the claims do still recite an abstract idea.

On page 13, Applicant argues that the claims “recite a technical approach for performing care management identification for individuals who will benefit the most [from enrollment into care management interventions] using one or more care management ML – AI models,” which provides integration into a practical application. Applicant’s arguments are fully considered, but are not persuasive. Examiner maintains that identifying patients for enrollment into care management interventions describes an abstract idea in the form of a certain method of organizing human activity. Because these functions are part of the abstract idea itself, they cannot provide “significantly more” than the abstract idea and thus do not confer eligibility (see MPEP 2106.05(a): “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” See also 2106.05(a)(II): “it is important to keep in mind that an improvement in the abstract idea itself… is not an improvement in technology.”).

In the instant case, the additional elements of a computing platform comprising one or more processors executing instructions stored on a non-transitory computer-readable medium to perform the receiving, standardizing, determining, performing, generating, training, inputting, combining, and providing steps, as well as specifying that the care management HTE model comprises one or more care management machine learning – artificial intelligence (ML – AI) models and that the information is provided for display on a care management computing device, merely serve to automate steps that otherwise could occur as part of a certain method of organizing human activity, and thus amount to mere instructions to “apply” the abstract idea using generic computer components (see MPEP 2106.05(f)). There is no indication that the nominal technical aspects of the claims provide any technical improvements to computers, machine learning techniques, or another technical field; rather, the business practice of identifying patients who would benefit from enrollment in a care management program is merely being digitized/automated via high-level computing and ML/AI modeling components. Examiner notes that per MPEP 2106.05(f)(2), “‘claiming the improved speed or efficiency inherent with applying the abstract idea on a computer’ does not integrate a judicial exception into a practical application or provide an inventive concept.”

On pages 13-14, Applicant analogizes the instant claims to those found eligible in Example 40. Applicant specifically asserts that “similar to Example 40, the claim features of the independent claims include combining the output information with additional metrics to generate combined strategic stratification metrics associated with the plurality of second individuals, and determining the plurality of second individuals for enrolling into the care management interventions based on comparing the combined strategic stratification metrics with one or more strategic stratification threshold values,” which integrates any recited abstract idea “into a practical application of determine [sic] a plurality of individuals for enrolling into the care management interventions.” Applicant’s arguments are fully considered, but are not persuasive. The subject matter of Example 40 is directed to improvements in collecting network traffic data in a computerized environment, which is a technically-rooted problem whose solution does not merely digitize/automate otherwise-abstract steps. In contrast, the instant invention appears to merely utilize high-level computing and ML-AI modeling components in an effort to implement otherwise-abstract patient data processing, metric calculation, and treatment effectiveness determination operations in a computing environment, as explained above. Accordingly, Examiner submits that the instant claims are not analogous to those found eligible in Example 40, and do not provide integration into a practical application.

On page 14, Applicant argues that the claims should be found patent eligible under Step 2B because they are “novel and non-obvious,” and recite “a combination of features which go beyond what is well-understood, routine, and conventional,” specifically pointing to alleged deficiencies of the Gopal, Luo, and Winlo references to “disclose or suggest at least certain features from the independent claims.” Applicant’s arguments are fully considered, but are not persuasive. Applicant has not identified any specific additional elements or combination of additional elements that are believed to be unconventional, and appears to broadly assert that because Gopal and Winlo fail to teach or suggest some of the features of the invention, those features amount to an unconventional combination. Examiner notes that issues of patentability over the prior art are a separate consideration from the question of eligibility under 35 USC 101; MPEP 2106.05(I) states that:

Although the courts often evaluate considerations such as the conventionality of an additional element in the eligibility analysis, the search for an inventive concept should not be confused with a novelty or non-obviousness determination. See Mayo, 566 U.S. at 91, 101 USPQ2d at 1973 (rejecting “the Government’s invitation to substitute §§ 102, 103, and 112 inquiries for the better established inquiry under § 101”). As made clear by the courts, the “‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter.” Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1315, 120 USPQ2d 1353, 1358 (Fed. Cir. 2016) (quoting Diamond v. Diehr, 450 U.S. at 188–89, 209 USPQ at 9).

Accordingly, whether the claims are found to be novel and/or non-obvious over the prior art has no bearing on analysis of patent eligibility under 35 USC 101.

Further, the only additional elements beyond the abstract idea itself recited in the claims include a computing platform comprising one or more processors executing instructions stored on a non-transitory computer-readable medium to perform the receiving, standardizing, determining, performing, generating, training, inputting, combining, and providing steps, as well as specifying that the care management HTE model comprises one or more care management machine learning – artificial intelligence (ML – AI) models and that the information is provided for display on a care management computing device. These additional elements merely serve to automate steps that otherwise could occur via a human actor managing their personal behavior and/or interactions with others, and thus amount to instructions to apply the abstract idea using generic computer components (see MPEP 2106.05(f)). Additionally, the combination of a computing platform executing ML-AI models and displaying information at a computing device for the purpose of treatment/intervention planning is a well-understood, routine, and conventional combination, as evidenced by at least Figs. 1-5 of Winlo et al. (US 20190156955 A1); abstract, Fig. 7, & [0071] of Basu et al. (US 20210241907 A1); and abstract & Fig. 1 of Hasan et al. (US 20230352134 A1). For the reasons outlined above, the 35 USC 101 rejections are upheld.
Rejection Under 35 USC 103

On pages 14-16, Applicant argues that “Gopal merely describes receiving a readmission score from the readmissions predictive model, and using the readmission score to determine whether the patient should be selected for further action” but “makes no mention of combining the readmission score with any data, let alone combining the score with additional metrics to generate a combined strategic stratification metrics [sic], and then comparing the combined strategic stratification metrics with one or more strategic stratification thresholds.” Applicant’s arguments are fully considered, and are found persuasive. Examiner agrees that, though Gopal contemplates considering the readmission risk score in combination with other metrics (e.g. filter criteria as in Figs. 6A-B), the score is not combined with such additional metrics prior to comparison to a threshold to generate combined strategic stratification metrics that are then compared to one or more strategic stratification threshold values, as in the amended independent claims.

However, Examiner submits that Winlo does sufficiently remedy this deficiency. Winlo teaches an analogous computerized method for identifying target patient populations for care management interventions (Winlo abstract) in which a calculated risk score is combined with other calculated metrics for a patient to generate combined stratification metrics that are then compared with one or more thresholds to identify patients for enrollment into a care intervention (Winlo [0085], noting “the selection module 230 may identify candidate members as target members based on various combinations of the risk, benefit, and participation scores exceeding a certain threshold value” and “the selection module 230 may identify candidate members as target members based on the aggregate of the risk, participation and benefit scores exceeding a certain threshold value,” showing that a risk score (i.e. analogous to the readmission risk score of Gopal) may be combined with other metrics for comparison to one or more stratification thresholds). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the threshold-based patient risk score stratification methods of Gopal to include aggregation of additional patient metrics prior to comparison to a threshold score as in Winlo, in order to improve upon crude, single-data-type patient identification cutoffs and consider additional important patient-level metrics that impact the success of intervention programs, so that human and computer resources are more efficiently utilized in reaching out to targeted patients who are actually most likely to engage with and benefit from care interventions (as suggested by Winlo [0004]).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 3 and 14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 3 and 14 recite “wherein the set of HTE model hyperparameters are associated treatment effect model parameter values from the plurality of treatment effect model parameter values that are above an accuracy threshold.” Applicant’s original specification does not provide sufficient written support for hyperparameters being the treatment effect model parameter values that are above an accuracy threshold; at most, para. [0058] discloses that “the HTE outcome dataset may indicate best performing (e.g. most accurate) treatment effect model parameter values (e.g., HTE model hyperparameters that specify how the HTE model will operate when implemented).” In other words, the specification shows that hyperparameters may be the “best performing” or “most accurate” treatment effect model parameter values, but there is no mention of any specific accuracy threshold being the basis for determining which parameters are considered to be the “best performing” or “most accurate.” Examiner notes that determining which parameters are “best performing” or “most accurate” does not inherently require comparison to a threshold; for example, the top x number of parameters may be selected as being the best performing or most accurate relative to the total number of considered parameters. Because the limitation reciting hyperparameters being above an accuracy threshold was not present in the original disclosure as filed, this limitation constitutes new matter and is rejected under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-6, 9-12, 14-17, and 20-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 1, 3-6, and 9-11 are directed to a method (i.e. a process), claims 12, 14-17, and 25-26 are directed to a platform (i.e. a machine), and claims 20-24 are directed to a non-transitory computer-readable medium (i.e. a manufacture). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claims 1, 12, and 20 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people.
Specifically, claim 12 (as representative) recites:

An enterprise computing platform, comprising:
one or more processors; and
a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed by the one or more processors, facilitate:
receiving, from a plurality of data sources, population data for a plurality of first individuals;
standardizing the population data to determine training data for a care management heterogeneous treatment effect (HTE) model, wherein standardizing the population data comprises:
determining, based on the population data, a plurality of covariates, a plurality of impact factor datasets for the plurality of first individuals, and past population engagement in care management interventions for the plurality of first individuals;
performing initial training of the care management HTE model using the plurality of covariates, the plurality of impact factor datasets, and the past population engagement to determine an HTE outcome dataset for the care management HTE model, wherein the HTE outcome dataset comprises model object parameters and a set of HTE model hyperparameters that specify an operation of the care management HTE model during implementation; and
generating the training data based on the plurality of covariates, the plurality of impact factor datasets, the past population engagement, and the HTE outcome dataset;
training the care management HTE model using the training data, wherein the care management HTE model comprises one or more care management machine learning – artificial intelligence (ML – AI) models;
determining a plurality of second individuals for the care management interventions based on using the trained care management HTE model, wherein determining the plurality of second individuals for the care management interventions comprises:
inputting a plurality of new impact factors and a plurality of new covariate datasets into the one or more care management ML – AI models to determine output information for the plurality of second individuals;
combining the output information with additional metrics to generate combined strategic stratification metrics associated with the plurality of second individuals; and
determining the plurality of second individuals for enrolling into the care management interventions based on comparing the combined strategic stratification metrics with one or more strategic stratification threshold values; and
providing, for display on a care management computing device, information indicating the plurality of second individuals for the care management interventions.

But for the recitation of generic computer components like a processor executing instructions stored in a non-transitory computer-readable medium and a care management computing device, the italicized functions, when considered as a whole, describe treatment effectiveness determination and patient population identification operations that could otherwise be achieved by human actors (e.g. a clinician, researcher, or administrator) managing their personal behavior and interactions with others (e.g. colleagues or patients).

For example, a clinician could look up population data from a plurality of data sources (studies, case reports, written resources, colleagues’ experiences, etc.), standardize the population data by categorizing the data into different categories, perform initial fitting/training of a predictive model using the categorized data to determine an outcome dataset with operating parameters for the model that specify how the model transforms inputs to outputs (e.g. covariates in a regression equation, cutoff thresholds for a decision tree, etc.), and further fit/train the predictive model using the standardized population data and initial operating parameters. The clinician could then use the fitted/trained model to determine which new patients may benefit from certain care management interventions by inputting new patient data into the model and receiving predictions output from the model, considering the model output with other known patient metrics to calculate combined stratification metrics, and comparing the stratification metrics with one or more thresholds to identify the patients most likely to benefit from enrollment into certain care interventions. The clinician could finally visually indicate (e.g. in a report, graph, or other visual means) the identified patients so that they may be contacted for enrollment into the indicated care interventions.

Thus, claim 12 recites an abstract idea in the form of a certain method of organizing human activity. Claims 1 and 20 recite substantially similar subject matter as claim 12 and are found to recite an abstract idea under the same analysis.

The independent claims also recite steps that can be considered to recite an abstract idea in the form of mathematical concepts in addition to certain methods of organizing human activity.
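For illustration only, the combine-and-threshold stratification described above might look like the following sketch. All names, weights, and threshold values here are hypothetical and are not drawn from the application or the cited references:

```python
# Hypothetical sketch of score stratification: combine a model's output with
# additional per-patient metrics, then compare the combined metric to a threshold.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    predicted_benefit: float   # model output (e.g., an estimated treatment effect)
    engagement_score: float    # additional metric: likelihood of participating
    cost_risk: float           # additional metric: projected cost risk

def combined_metric(p: Patient, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted aggregate of the model output and additional metrics (weights are illustrative)."""
    w_benefit, w_engage, w_cost = weights
    return (w_benefit * p.predicted_benefit
            + w_engage * p.engagement_score
            + w_cost * p.cost_risk)

def select_for_enrollment(patients, threshold=0.6):
    """Return IDs of patients whose combined stratification metric exceeds the threshold."""
    return [p.patient_id for p in patients if combined_metric(p) > threshold]

patients = [
    Patient("A", predicted_benefit=0.9, engagement_score=0.8, cost_risk=0.7),
    Patient("B", predicted_benefit=0.2, engagement_score=0.9, cost_risk=0.1),
]
print(select_for_enrollment(patients))  # -> ['A']: only A's aggregate clears the 0.6 cutoff
```

Note that the aggregation happens before the threshold comparison, which is the structural point the Office Action treats as distinguishing the amended claims from a single-score cutoff.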
Such steps include:

standardizing the population data to determine training data for a care management heterogeneous treatment effect (HTE) model, wherein standardizing the population data comprises: determining, based on the population data, a plurality of covariates, a plurality of impact factor datasets for the plurality of first individuals, and past population engagement in care management interventions for the plurality of first individuals; performing initial training of the care management HTE model using the plurality of covariates, the plurality of impact factor datasets, and the past population engagement to determine an HTE outcome dataset for the care management HTE model, wherein the HTE outcome dataset comprises model object parameters and a set of HTE model hyperparameters that specify an operation of the care management HTE model during implementation; and generating the training data based on the plurality of covariates, the plurality of impact factor datasets, the past population engagement, and the HTE outcome dataset;
training the care management HTE model using the training data, wherein the care management HTE model comprises one or more care management machine learning – artificial intelligence (ML – AI) models;
determining a plurality of second individuals for the care management interventions based on using the trained care management HTE model, wherein determining the plurality of second individuals for the care management interventions comprises: inputting a plurality of new impact factors and a plurality of new covariate datasets into the one or more care management ML – AI models to determine output information for the plurality of second individuals; combining the output information with additional metrics to generate combined strategic stratification metrics associated with the plurality of second individuals; and determining the plurality of second individuals for enrolling into the care management interventions based on comparing the combined strategic stratification metrics with one or more strategic stratification threshold values.

These steps describe a process for determining covariates and other data from a dataset, training a mathematical predictive model by identifying model object parameters and hyperparameters, using the trained mathematical model to predict outputs, using the predicted outputs to calculate additional metrics, and comparing the additional metrics to thresholds to identify patients for enrollment into care interventions, which amounts to a mathematically-based data analysis process and thus fits in the “mathematical concepts” grouping of abstract ideas. Claims 1 and 20 recite substantially similar subject matter as claim 12 and are found to recite an abstract idea under the same analysis.

Dependent claims 3-6, 9-11, 14-17, and 21-26 inherit the limitations that recite an abstract idea from their dependence on claims 1, 12, or 20, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. In addition, claims 3-6, 9-11, 14-17, and 21-26 recite additional limitations that further describe the abstract ideas identified in the independent claims. Specifically, claims 3, 14, and 21 recite that the care management HTE model comprises a plurality of treatment effect model parameter values, and that the set of HTE model hyperparameters are associated treatment effect model parameter values from the plurality of treatment effect model parameter values that are above an accuracy threshold. These limitations merely further describe mathematical workings of the model in an abstract way such that they also describe an abstract idea. Claims 4, 15, and 22 further describe the types of data that fit into the impact factor datasets, each of which is a type of data that a human actor would be capable of accessing, categorizing, and evaluating via mathematical operations.
Claims 5, 16, and 23 further describe using impact factor datasets and covariates to train the model, which a human actor could accomplish by using such data types to fit/train a predictive model via mathematical processes as indicated for the independent claims above. Claims 6, 17, and 24 recite determining past population outcomes indicating post-engagement clinical and/or financial healthcare outcomes for the first individuals and utilizing such data in the model training step, which a human actor could achieve by making determinations about post-engagement outcomes based on the population data and using such outcome data to fit/train the predictive model as indicated for the independent claims above. Claims 9-11 and 25-26 specify various types of health plans that the patient populations may be enrolled in; a human actor would be capable of obtaining and evaluating data from patients enrolled in these types of plans. However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea. Step 2A – Prong 2 The judicial exception is not integrated into a practical application. In particular, independent claims 1, 12, and 20 do not include additional elements that integrate the abstract idea into a practical application. The additional elements of claims 1, 12, and 20 include a computing platform comprising one or more processors executing instructions stored on a non-transitory computer-readable medium to perform the receiving, standardizing, determining, performing, generating, training, inputting, combining, comparing, providing, etc. 
steps, as well as specifying that care management HTE model comprises one or more care management machine learning – artificial intelligence (ML – AI) models and that the information is provided for display on a care management computing device. These additional elements, when considered in the context of each claim as a whole, merely serve to automate steps that could occur via a human actor managing their personal behavior and/or interactions with others (as described above), and thus amount to instructions to “apply” the abstract idea using generic computer components (see MPEP 2106.05(f)). For example, use of the computing platform to perform the various steps merely digitizes/automates the otherwise-abstract steps of receiving population data, standardizing the population data by making determinations about the data and performing initial fitting/training of a predictive model, performing further fitting/training of the model, determining a plurality of second individuals by using the model to predict outcomes, combine the predicted outcomes with additional metrics, and compare the stratification metrics to a threshold, and finally providing information indicating the plurality of second individuals such that they take place in a computerized environment. Specifying that the HTE model is an AI-ML model merely utilizes the high-level concept of artificial intelligence or machine learning as a means to digitize/automate the otherwise-abstract steps of fitting/training and using a predictive model. Specifying that the information indicating the plurality of second individuals is provided for display at a care management computing device again merely invokes a high-level computing device as a means with which to digitize the output of information from a predictive model such that it occurs in a computerized environment. 
Accordingly, these additional elements are merely invoked as tools with which to digitize/automate the otherwise abstract functions of the invention, and claims 1, 12, and 20 as a whole are each directed to an abstract idea without integration into a practical application. The judicial exception recited in dependent claims 3-6, 9-11, 14-17, and 21-26 is also not integrated into a practical application under a similar analysis as above. Claims 3-6, 9-11, 14-17, and 21-26 are performed with the same additional elements introduced in the independent claims, without introducing any new additional elements of their own, and accordingly also amount to mere instructions to apply the abstract idea. Accordingly, the additional elements of claims 1, 3-6, 9-12, 14-17, and 20-26 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1, 3-6, 9-12, 14-17, and 20-26 are directed to an abstract idea.

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computing platform comprising processors and a non-transitory computer-readable medium, a specifically ML-AI HTE model, and a care management computing device for performing the receiving, standardizing, determining, performing, generating, training, inputting, combining, comparing, providing for display, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components.
As evidence of the generic nature of the above recited additional elements, Examiner notes the following portions of Applicant’s specification:

- [0039], noting various examples of known computing elements that may embody the computing platform, such as computing devices, computing platforms, systems, servers, engines, software functions, applications, etc.
- [0040], noting various examples of known computing devices that may embody the care management computing device, such as a desktop, laptop, tablet, mobile device, smart watch, IoT device, etc.
- [0042], noting a generic arrangement of an exemplary computing device within the system.
- [0057], noting the ML-AI models “may be any type of ML-AI model (e.g., unsupervised, supervised, and/or deep learning)” and providing several examples of known ML-AI model types such as XGBoost regression/classifier, causal forest, EconML, etc.

These disclosures do not indicate that the elements of the invention are particular machines and instead provide generic, high-level examples of known computer hardware and ML-AI model types, such that one of ordinary skill in the art would understand that any generic computing platform, ML-AI models, and computing device could be used to implement the invention. Further, the combination of these additional elements is not expanded upon in the specification as a unique arrangement and as such relies on the knowledge of one of ordinary skill in the art to understand the combination of components within a computer system as a well-known and generic combination for automating an abstract idea that could otherwise be performed as a certain method of organizing human activity, and thus does not provide an inventive concept. Additionally, the combination of a computing platform executing ML-AI models and displaying information at a computing device for the purpose of treatment/intervention planning is a well-understood, routine, and conventional combination, as evidenced by at least Figs.
1-5 of Winlo et al. (US 20190156955 A1); abstract, Fig. 7, & [0071] of Basu et al. (US 20210241907 A1); and abstract & Fig. 1 of Hasan et al. (US 20230352134 A1). Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the computer platform, ML-AI models, and computing device in combination is to digitize and/or automate a treatment effectiveness and patient population identification operation that could otherwise be achieved as a certain method of organizing human activity. Thus, when considered as a whole and in combination, claims 1, 3-6, 9-12, 14-17, and 20-26 are not patent eligible.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 9, 12, 14-17, and 20-25 are rejected under 35 U.S.C. 103 as being unpatentable over Gopal et al. (US 20160358282 A1) in view of Luo (Reference U on the accompanying PTO-892) and Winlo et al. (US 20190156955 A1).

Claims 1, 12, and 20

Gopal teaches a method, comprising: receiving, by a computing platform and from a plurality of data sources, population data for a plurality of first individuals (Gopal Fig. 1, [0007], [0015], noting a computerized system obtains historical data for a patient population from a variety of sources, e.g. claims data, clinical/health program participation data, consumer data, etc.); standardizing, by the computing platform, the population data to determine training data for a care management heterogeneous treatment effect (HTE) model (Gopal Figs.
1 & 3, [0015], [0019]-[0020], noting the obtained historical population data is cleansed, mined, and otherwise prepared for use in training a predictive model), wherein standardizing the population data comprises: determining, based on the population data, a plurality of covariates, a plurality of impact factor datasets for the plurality of first individuals, and past population engagement in care management interventions for the plurality of first individuals (Gopal Figs. 2 & 4, [0015], [0018]-[0021], noting various types of patient factors from the obtained historical population data are used to train the predictive model, including covariates (e.g. demographic data like age and gender, diagnosis, medications, etc.), impact factors (e.g. hospital admit count, comorbidity index or count, medication count, CMS risk score, etc.), and past population engagement in care management interventions (e.g. clinical/health program participation data)); performing initial training of the care management HTE model using the plurality of covariates, the plurality of impact factor datasets, and the past population engagement to determine an HTE outcome dataset for the care management HTE model, wherein the HTE outcome dataset comprises model object parameters (Gopal Figs 1 & 3, [0015], [0020]-[0021], noting a predictive model is trained using the prepared training data, i.e. including the covariates, impact factor datasets, and past population engagement data as explained above. 
Predictive models such as decision trees, regression equations, and neural networks (the examples of model types trained by the system in [0015]) are collections of learned mathematical/statistical relationships that transform specific inputs into a desired output, such that training of these types of models is considered to determine an outcome dataset comprising model object parameters that specify an operation of the model during implementation); and generating the training data based on the plurality of covariates, the plurality of impact factor datasets, the past population engagement, and the HTE outcome dataset (Gopal Fig. 3, [0020], noting prepared population data (i.e. including the covariates, impact factor datasets, and past population engagement data as explained above) is used to generate a validation dataset that is used to tune the trained model (which would include the statistical relationships learned from the initial training operation)); training, by the computing platform, the care management HTE model using the training data, wherein the care management HTE model comprises one or more care management machine learning - artificial intelligence (ML - Al) models (Gopal Figs 1 & 3, [0015], [0020]-[0021], noting a predictive model is trained and tuned using the prepared training data, including tuning (i.e. retraining) of the model with the validation dataset as in [0020]; the predictive model can be used to make care management intervention enrollment decisions as in [0026] and is thus considered equivalent to a care management HTE model comprising one or more care management ML-AI models in accordance with Applicant’s definition of such models in para. 
[0057] of the specification as “any type of ML – AI model (e.g., unsupervised, supervised, and/or deep learning) that can be used to determine (e.g., identify) individuals to be enrolled into care management interventions”); determining, by the computing platform, a plurality of second individuals for the care management interventions based on using the trained care management HTE model (Gopal Figs. 1 & 6A-B, [0026]-[0027], [0031], noting new patient data is applied to the trained model to determine which patients should be selected for or enrolled in certain interventions), wherein determining the plurality of second individuals for the care management interventions comprises: inputting a plurality of new impact factors and a plurality of new covariate datasets into the one or more care management ML – AI models to determine output information for the plurality of second individuals (Gopal Figs. 6A-B, [0023], [0026]-[0031], noting new patient profiles with various factors (i.e. impact factors and covariates) may be input to the trained model to obtain a readmission risk output); combining the output information with additional metrics (Gopal Figs. 6A-B, [0026]-[0029], noting patient readmission risks (i.e. the output information) are evaluated in combination with other filtering or selection criteria (i.e. additional metrics) to determine which patients should be enrolled in the care management interventions); and determining the plurality of second individuals for enrolling into the care management interventions based on comparing the output with one or more strategic stratification threshold values (Gopal Figs. 6A-B, [0026]-[0029], noting patient readmission risks (i.e. 
the output information) are compared to a threshold to determine which patients should be enrolled in the care management interventions); and providing, by the computing platform and (Gopal [0031], claim 15, noting a daily referral list of the identified patients may be generated for review by a computer user, i.e. provided on a computer device). In summary, Gopal teaches a computerized method of training, tuning, and using a predictive model to determine patient populations to undergo care management interventions. The predictive model of the system can capture and learn statistical relationships between inputs and a desired output such that the system is considered to learn model object parameters that specify an operation of the care management HTE model during implementation. However, Gopal is silent regarding determining HTE model hyperparameters. Additionally, in Gopal a readmission risk score output from the model may be compared with a threshold and then considered in combination with additional filtering criteria (i.e. other metrics) to identify patients for care interventions, but the risk score is not explicitly combined with additional metrics prior to comparison to generate combined strategic stratification metrics that are then compared to the strategic stratification threshold. Finally, in Gopal a list of the identified patients may be generated and provided for review by a computer user, indicating some manner of computerized user interface to output the identified patients. However, the reference does not specify that there is any visual display of information about the identified patients, and thus fails to explicitly disclose providing, by the computing platform and for display on a care management computing device, information indicating the plurality of second individuals for the care management interventions. 
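The distinction drawn in the summary above, between Gopal comparing the raw risk score to a threshold and then filtering, and the claim combining the score with additional metrics before the threshold comparison, can be made concrete with a toy sketch. All function names, values, and thresholds here are hypothetical illustrations, not taken from either reference.

```python
def threshold_then_filter(risk, passes_filters, risk_threshold):
    """Gopal-style selection as characterized above: compare the raw
    risk score to a threshold, then apply separate filtering criteria."""
    return risk > risk_threshold and passes_filters

def combine_then_threshold(risk, other_metrics, strat_threshold):
    """Claimed-style selection as characterized above: combine the score
    with additional metrics first, then compare the combined value."""
    return risk + sum(other_metrics) > strat_threshold

# The two orderings can disagree on the same patient: a moderate risk
# score fails the raw cutoff but passes once other metrics are added in.
a = threshold_then_filter(risk=0.4, passes_filters=True, risk_threshold=0.5)
b = combine_then_threshold(risk=0.4, other_metrics=[0.3, 0.2], strat_threshold=0.8)
```

The fact that the two orderings can select different patients for the same inputs is why the Office Action treats the combine-before-compare limitation as a difference over Gopal and turns to Winlo for it.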
However, Luo teaches that machine learning models include both ordinary and hyper parameters that govern operation of the model, and that hyperparameters may be automatically tuned or selected based on measurements of accuracy associated with each potential combination of hyperparameters for a given algorithm (Luo abstract, Sections 1.2-1.3, Section 2.1). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the known methods of training a machine learning model of Gopal to include determining both ordinary and hyper parameters of the model as in Luo in order to make the machine learning methods accessible to layman users such as clinicians while skipping the manual and labor-intensive process of selecting an effective algorithm and/or combination of hyperparameter values (as suggested by Luo section 1.3). Additionally, Winlo teaches an analogous computerized method for identifying target patient populations for care management interventions (Winlo abstract) in which a calculated risk score is combined with other calculated metrics for a patient to generate combined stratification metrics that are then compared with one or more thresholds to identify patients for enrollment into a care intervention (Winlo [0085], noting “the selection module 230 may identify candidate members as target members based on various combinations of the risk, benefit, and participation scores exceeding a certain threshold value” and “the selection module 230 may identify candidate members as target members based on the aggregate of the risk, participation and benefit scores exceeding a certain threshold value,” showing that a risk score (i.e. 
analogous to the readmission risk score of Gopal) may be combined with other metrics for comparison to one or more stratification thresholds) and where information about the identified target patient populations may be provided to a care management computing device for display (Winlo [0034], [0087], noting graphical presentation of a user interface at a client device including information about target members). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the threshold-based patient risk score stratification methods of the combination to include aggregation of additional patient metrics prior to comparison to a threshold score as in Winlo in order to improve upon crude, single-data-type patient identification cutoffs and consider additional important patient-level metrics that impact the success of intervention programs so that human and computer resources are more efficiently utilized in reaching out to targeted patients that are actually most likely to engage with and benefit from care interventions (as suggested by Winlo [0004]). It further would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generation of a patient list as in the combination such that it is graphically displayed for visual review as in Winlo in order to allow users such as clinicians to actually get useful, actionable information about which patients to target for intervention presented in a graphical manner (as suggested by Winlo [0034] & [0087]). 
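The Winlo passage quoted above describes selecting target members when the aggregate of risk, participation, and benefit scores exceeds a threshold. A minimal sketch of that selection logic follows; the score field names track the quoted Winlo [0085] language, but the member data, schema, and threshold value are invented for illustration.

```python
def identify_target_members(candidates, threshold):
    """Flag candidate members whose aggregate of risk, participation,
    and benefit scores exceeds the threshold, per the quoted Winlo
    [0085] selection logic (data schema here is hypothetical)."""
    return [c["member_id"] for c in candidates
            if c["risk"] + c["participation"] + c["benefit"] > threshold]

candidates = [
    {"member_id": "M1", "risk": 0.7, "participation": 0.6, "benefit": 0.8},
    {"member_id": "M2", "risk": 0.9, "participation": 0.1, "benefit": 0.2},
]
targets = identify_target_members(candidates, threshold=1.5)
# M1 aggregates to 2.1 and is selected; M2 aggregates to 1.2 and is not
```

Note that M2 has the highest individual risk score yet is not selected, which illustrates Winlo's stated rationale of improving on single-data-type cutoffs.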
Regarding claim 12, Gopal in view of Luo and Winlo teaches an enterprise computing platform, comprising: one or more processors; and a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed by the one or more processors, facilitate (Gopal [0023], claims 8 & 15, noting a computerized system performing the method, such as a server or computer executing programming instructions) the method of claim 1, as explained above.

Regarding claim 20, Gopal in view of Luo and Winlo teaches a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed, facilitate (Gopal [0023], claims 8 & 15, noting a computerized system performing the method, such as a server or computer executing programming instructions) the method of claim 1, as explained above.

Claims 3, 14, and 21

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches wherein the care management HTE model comprises a plurality of treatment effect model parameter values, and wherein the set of HTE model hyperparameters are associated treatment effect model parameter values from the plurality of treatment effect model parameter values that are above an accuracy threshold (Gopal Figs 1 & 3, [0015], [0020]-[0021], noting the predictive model is trained using the prepared training data; predictive models such as decision trees, regression equations, and neural networks (the examples of model types trained by the system in [0015]) are collections of learned mathematical/statistical relationships that transform specific inputs into a desired output, such that these types of models are considered to comprise a plurality of treatment effect model parameter values.
See also Luo abstract, Sections 1.2-1.3, Section 2.1, noting trained machine learning models include a plurality of ordinary and hyper parameters that are learned, and the combination of ordinary and hyper parameters correlated with the highest accuracy measures are selected for use (considered equivalent to Applicant’s disclosure of the “best” or “most accurate” parameters being determined in para. [0058] of the specification)).

Claims 14 and 21 recite substantially similar subject matter as claim 3, and are also rejected as above.

Claims 4, 15, and 22

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches wherein the plurality of impact factor datasets comprises a plurality of impact factor metrics, wherein the plurality of impact factor metrics comprise fall risks, emergency room (ER) risks, medical adherence indicators, chronic condition counts, mental illness indicators, usage of durable medical equipment (DME), new onset of diseases, and/or drug safety indicators (Gopal Figs. 2 & 4, [0018]-[0021], noting various types of patient factors from the obtained historical population data are used to train the predictive model, including impact factors like CMS risk score or hospital admit or readmit count (equivalent to ER risks), comorbidity index or count (equivalent to chronic condition counts), medication count (equivalent to drug safety indicators), etc.).

Claims 15 and 22 recite substantially similar subject matter as claim 4, and are also rejected as above.
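The Luo-based mapping for claims 3, 14, and 21 above turns on selecting hyperparameter combinations by measured accuracy. The following is a hedged sketch of that kind of accuracy-gated selection, where the candidate grid, the stand-in training and scoring functions, and the threshold are all invented for illustration and do not reproduce Luo's actual method.

```python
def select_hyperparameters(candidates, train_fn, score_fn, accuracy_threshold):
    """Train a model per candidate configuration, keep only configurations
    whose measured accuracy clears the threshold, ordered best first."""
    kept = []
    for hp in candidates:
        model = train_fn(hp)
        accuracy = score_fn(model)
        if accuracy >= accuracy_threshold:
            kept.append((accuracy, hp))
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [hp for _, hp in kept]

# Toy stand-ins: "training" returns the config unchanged, and "scoring"
# looks up a fixed validation accuracy per max_depth value.
accuracies = {1: 0.60, 2: 0.90, 3: 0.80}
best = select_hyperparameters(
    candidates=[{"max_depth": d} for d in (1, 2, 3)],
    train_fn=lambda hp: hp,
    score_fn=lambda model: accuracies[model["max_depth"]],
    accuracy_threshold=0.70,
)
# max_depth=1 falls below the accuracy threshold and is discarded;
# the survivors are ordered by measured accuracy.
```

In practice the train/score stand-ins would be replaced by actual model fitting and held-out validation, as in off-the-shelf grid-search tooling.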
Claims 5, 16, and 23

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches wherein training the care management HTE model using the training data comprises: using the plurality of impact factor datasets and the plurality of covariates to train the one or more care management ML - AI models, wherein the plurality of impact factor datasets and the plurality of covariates are features for the one or more care management ML - AI models (Gopal Figs. 1-4, [0015], [0018]-[0021], noting the various patient factors from the obtained historical population data (i.e. the covariates and impact factor datasets, as explained above) are used to train the predictive model and result in the discovery of statistical relationships between the input variables that are selected as predictor features in the validated model).

Claims 16 and 23 recite substantially similar subject matter as claim 5, and are also rejected as above.

Claims 6, 17, and 24

Gopal in view of Luo and Winlo teaches the method of claim 5, and the combination further teaches tracking and utilizing intervention participation data (Gopal) as well as clinical outcome data (Gopal Fig. 2, [0018], noting length of stay) in training the predictive model. Thus, Gopal in view of Luo and Winlo teaches wherein standardizing the population data further comprises: determining past population outcomes for the care management interventions for the plurality of first individuals, wherein the past population outcomes indicate (Gopal Fig. 2, [0018]-[0021], noting clinical outcome data such as length of stay is used to train the predictive model; see also [0015], [0030], noting the system tracks and utilizes intervention participation data in training the predictive model).
In summary, the present combination teaches tracking and utilizing intervention participation data as well as clinical outcome data in training the predictive model, but it does not appear to specify that the clinical outcomes are post-engagement clinical outcomes resulting from undergoing certain interventions. Accordingly, the present combination fails to explicitly disclose wherein the past population outcomes indicate post-engagement clinical and/or financial healthcare outcomes for the plurality of first individuals after undergoing the care management interventions. However, Winlo further teaches analyzing historical member data including post-engagement clinical or financial outcomes of a member population after undergoing care management interventions to train a predictive model (Winlo [0055]-[0057]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the unconnected past population outcomes and participation data of the combination to specifically include evaluation of past population outcomes indicating post-engagement clinical and/or financial healthcare outcomes resulting from certain interventions as in Winlo in order to train the model to quantify the benefit that each intervention may provide to each member so that the targeting of patient populations most likely to benefit from care management interventions is improved (as suggested by Winlo [0056]).

Claims 17 and 24 recite substantially similar subject matter as claim 6, and are also rejected as above.

Claims 9 and 25

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches wherein the plurality of first individuals and the plurality of second individuals are enrolled into MEDICARE (Gopal Fig. 3, [0019], noting Medicare claims as the member population). Claim 25 recites substantially similar subject matter as claim 9, and is also rejected as above.
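The Winlo rationale for claims 6, 17, and 24 above, training on post-engagement outcomes so the model can quantify the benefit each intervention may provide, implies computing some per-intervention effect from historical outcome data. A hypothetical sketch of one such computation follows; the record schema and the mean-improvement statistic are illustrative stand-ins, not Winlo's actual method.

```python
def mean_intervention_benefit(history):
    """Average post-engagement improvement per intervention, a toy proxy
    for the benefit-quantification Winlo [0055]-[0057] is cited for."""
    totals, counts = {}, {}
    for rec in history:
        name = rec["intervention"]
        improvement = rec["outcome_after"] - rec["outcome_before"]
        totals[name] = totals.get(name, 0.0) + improvement
        counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

history = [
    {"intervention": "coaching", "outcome_before": 0.2, "outcome_after": 0.6},
    {"intervention": "coaching", "outcome_before": 0.4, "outcome_after": 0.6},
    {"intervention": "telehealth", "outcome_before": 0.5, "outcome_after": 0.6},
]
benefits = mean_intervention_benefit(history)
# coaching averages a 0.3 improvement; telehealth averages 0.1
```

A per-intervention benefit score of this kind is the sort of quantity that could then feed the combine-and-threshold selection discussed for the independent claims.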
Claims 10-11 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Gopal, Luo, and Winlo as applied to claim 1 above, and further in view of Chandra et al. (US 20190172564 A1).

Claims 10 and 26

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches that the system may be utilized by a health benefits provider with a covered patient-member population, e.g. Medicare (Gopal [0015], [0019]). However, the present combination fails to explicitly disclose wherein the plurality of first individuals and the plurality of second individuals are enrolled into MEDICAID. However, Chandra teaches an analogous predictive model training pipeline that utilizes data from patients enrolled in Medicaid (Chandra [0101], [0122]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the covered patient-member populations of the combination to include patients enrolled in Medicaid as in Chandra in order to train and utilize a predictive model specific to that known type of member-population. Claim 26 recites substantially similar subject matter as claim 10, and is also rejected as above.

Claim 11

Gopal in view of Luo and Winlo teaches the method of claim 1, and the combination further teaches that the system may be utilized by a health benefits provider with a covered patient-member population, e.g. Medicare (Gopal [0015], [0019]). However, the present combination fails to explicitly disclose wherein the plurality of first individuals and the plurality of second individuals are enrolled into a commercial plan. However, Chandra teaches an analogous predictive model training pipeline that utilizes data from patients enrolled in commercial plans (Chandra [0101], [0122]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the covered patient-member populations of the combination to include patients enrolled in commercial plans as in Chandra in order to train and utilize a predictive model specific to that known type of member-population.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pironti et al. (US 20150095046 A1) describes a method for using a predictive model to output a clinical risk score for a patient and combining the predicted clinical risk score with additional metrics to stratify patients into different risk tiers corresponding to different modes of engagement/intervention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAREN A HRANEK whose telephone number is (571)272-1679. The examiner can normally be reached M-F 8:00-4:00 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached on 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KAREN A HRANEK/
Primary Examiner, Art Unit 3684

Prosecution Timeline

Apr 03, 2023
Application Filed
Mar 26, 2025
Non-Final Rejection — §101, §103, §112
May 29, 2025
Applicant Interview (Telephonic)
May 29, 2025
Examiner Interview Summary
Jun 27, 2025
Response Filed
Oct 20, 2025
Final Rejection — §101, §103, §112
Jan 15, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
