DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-2, 4, 6-12, 14, and 16-20 have been examined in view of Applicant’s remarks dated June 27, 2025, and have been rejected. Such claims are currently pending in the application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4, 6-12, 14, and 16-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1 – Statutory Categories of Invention:
Claims 1-2, 4, 6-12, 14, and 16-20 are directed to a method and a system, which are statutory categories of invention. (Step 1: YES).
Step 2A – Judicial Exception Analysis, Prong One:
Independent Claim 1 recites receiving a request for a risk score associated with a patient of one or more medical providers; executing a risk predictive model using scoring data associated with the patient to generate the risk score indicative of a probability of the patient to access in-person medical care at a medical provider within a temporal window that is subsequent to the receiving the request for the risk score; responsive to determining that the risk score satisfies a criteria, transmitting the risk score indicative of the probability of the patient to access in-person medical care at the medical provider within the temporal window to an impactability predictive model; executing the impactability predictive model using at least a subset of the scoring data to generate an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window responsive to receiving a notification from the medical provider; and sending a message to present at least one of the risk score or the impactability score.
Independent Claim 11 recites receive a request for a risk score associated with a patient of one or more medical providers; execute a risk predictive model using scoring data associated with the patient to generate the risk score indicative of a probability of the patient to access in-person medical care at a medical provider within a temporal window that is subsequent to the processor receiving the request for the risk score; responsive to determining that the risk score satisfies a criteria, transmitting the risk score indicative of the probability of the patient to access in-person medical care at the medical provider within the temporal window to an impactability predictive model; execute an impactability predictive model using at least a subset of the scoring data to generate an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window responsive to receiving a notification from the medical provider; and send a message to present at least one of the risk score or the impactability score.
The claims, as drafted, recite Certain Methods of Organizing Human Activity. Per MPEP § 2106.04(a)(2), if a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. The claims recite the receiving and modeling of patient data in order to output a risk score and/or a likelihood of accessing medical care, which are used to inform future healthcare decisions on the part of the physician or to influence future action by a patient to seek care (see: Specification, paragraphs 3-4). Such activity covers the management of personal behavior or relationships or interactions between people, as it instructs the behavior of patients and future interactions between patients and medical personnel. Accordingly, the claims recite Certain Methods of Organizing Human Activity and, therefore, an abstract idea. (Step 2A, Prong One: YES).
Independent Claims 1 and 11 further recite wherein the risk predictive model is trained via training data comprising at least one of medical data, medical image scores each indicative of a probability that a respective patient of a plurality of patients has a medical illness, social determinants of health scores each associated with a respective neighborhood, and clinician linkages each indicative of a degree of relationship between a plurality of physicians; wherein the impactability predictive model is trained via training data comprising a plurality of identifiers associated with a plurality of patients, each identifier of the plurality of identifiers indicative of whether a respective patient of the plurality of patients accessed in-person medical care at one or more medical providers responsive to receiving the notification from the one or more medical providers.
In light of the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, the claims also recite a mental process. The use of a computer to train and implement at least two machine learning models using specific forms of patient data (see at least Claims 1, 10-11, and 20) amounts to applying data to an algorithm, organizing the data, and reporting the results, which is no more than an instruction to implement the abstract idea using a general purpose computer (see MPEP § 2106.05(f), “Other Examples,” i., involving a commonplace business method or mathematical algorithm applied on a general purpose computer; Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357 (2014)), consistent with Example 47, Claim 2. Given the lack of detail in the disclosure as to the training techniques employed by the claimed invention, the training is construed under its broadest reasonable interpretation as understood by one of ordinary skill in the art, which amounts to mathematical algorithms and/or mental processes of labeling, classifying, and fitting data to a particular model representation. The claims are therefore rejected under 35 U.S.C. § 101.
Dependent Claims 2 and 12 recite generating a social determinant of health score by executing a second predictive model based on a first set of publicly available data, the social determinant of health score being indicative of a health status within a geographical region associated with the patient.
Dependent Claims 4 and 14 recite wherein when the medical image scores are included in the training data, the medical image scores are generated by a second predictive model based on a plurality of medical images and a plurality of medical diagnosis labels, each medical image of the plurality of medical images are associated with a respective medical diagnosis label of the plurality of medical diagnosis labels.
Dependent Claims 6 and 16 recite wherein the scoring data comprises at least one of medical data associated with the patient, medical image scores associated with the patient, social determinants of health scores associated with the patient, or clinician linkages associated with the patient.
Dependent Claims 7 and 17 recite wherein clinical linkage indicates a degree of relationship between the one or more medical providers.
Dependent Claims 8 and 18 recite wherein the presentation comprises a graph depicting the impactability score on a first axis and a number of interventions on a second axis.
Dependent Claims 9 and 19 recite wherein the presentation comprises a graphical representation of an accuracy value associated with the risk predictive model or the impactability predictive model.
Dependent Claims 10 and 20 recite wherein the risk predictive model is trained with a first set of training data and the impactability predictive model is trained with a second set of training data different from the first set of training data.
Each of the preceding features of the above dependent claims serves only to further limit or specify the abstract features of independent Claims 1 and 11 and, hence, is directed to fundamentally the same abstract idea(s) as the independent claims, utilizing the additional elements analyzed below in their expected manner.
Step 2A – Judicial Exception Analysis, Prong Two:
The judicial exception is not integrated into a practical application because the additional elements within the claims only amount to instructions to implement the judicial exception using a computer (MPEP § 2106.05(f)).
Beyond the abstract idea, the claims recite additional elements, including “by (the) one or more processors” (Claims 1-2 and 11-12); “to a client device instructing the client device” (Claims 1 and 11); and a “server comprising a processor and a non-transitory computer-readable medium containing instructions that when executed by the processor causes the processor to perform operations comprising” (Claim 11).
The above-identified additional claim elements fail to integrate the judicial exception into a practical application, as they are generic computer components recited at a high level of generality, such that they amount to mere instructions to apply the abstract idea on a general purpose computer (see MPEP § 2106.05(f), “Other Examples,” i., involving a commonplace business method or mathematical algorithm applied on a general purpose computer; Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357 (2014)). For example, paragraphs 59 and 128 of the Specification state that the processors implemented in the analytical management system and the server may include a number of widely known processing technologies, as well as general-purpose processors. Similarly, the client device(s) are understood to constitute generic devices, as the disclosure lacks any description as to the particularity of said devices. In order to avoid concerns regarding written description under 35 U.S.C. § 112(a), the client device is interpreted to encompass generic computer technologies. Finally, the non-transitory computer-readable medium, as articulated in paragraph 129 of the Specification, refers to a number of general purpose memory units, such that it amounts to a generic computer component.
Accordingly, the additional claim elements do not integrate the abstract idea into a practical application. (Step 2A, Prong Two: NO).
Step 2B – Additional Elements that Amount to Significantly More:
The present claims do not include additional elements that are sufficient to amount to more than the abstract idea because the additional elements or combination of elements amount to no more than a recitation of instructions to implement the abstract idea on a computer.
Each additional element identified under Step 2A, Prong Two is analyzed in light of the Specification’s explanation of its structure. The additional elements of the claimed invention lack sufficient structure in the Specification to be considered anything other than a well-understood, routine, and conventional use of generic computer components. Note that the Specification can support the conventionality of generic computer components where “the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a)” (Berkheimer Memorandum, § III(A)(1), p. 3).
Accordingly, the additional claim elements, alone or as an ordered combination, do not amount to significantly more than the abstract idea. (Step 2B: NO).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6, 8-11, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Filikov et al. (Filikov A, Pethe S, Kelley R, Fischer A, Ozminkowski R. Use of stratified cascade learning to predict hospitalization risk with only socioeconomic factors. J Biomed Inform. 2020;104:103393), hereinafter Filikov, in view of McBride (US 2020/0302358), hereinafter McBride.
As per Claim 1, Filikov teaches a method of executing a plurality of interconnected predictive models, the method comprising: receiving, by one or more processors, a request for a risk score associated with a patient of one or more medical providers (see: Filikov, 2.2 Data Sources, Figs. 1-2, is met by the inputting of patient data into the multi-model architecture for prediction of hospitalization); executing, by the one or more processors, a risk predictive model using scoring data associated with the patient to generate the risk score indicative of a probability of the patient to access in-person medical care at a medical provider within a temporal window that is subsequent to the one or more processors receiving the request for the risk score (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by sub-model 1 computing and assigning a hospitalization risk to each patient using patient medical data), wherein the risk predictive model is trained via training data comprising at least one of medical data, medical image scores each indicative of a probability that a respective patient of a plurality of patients has a medical illness, social determinants of health scores each associated with a respective neighborhood, and clinician linkages each indicative of a degree of relationship between a plurality of physicians (see: Filikov, 2.2 Data Sources, is met by the training of sub-models 1 and 2 using a subset of the PULSE Healthcare survey data); responsive to determining that the risk score satisfies a criteria, transmitting, by the one or more processors, the risk score indicative of the probability of the patient to access in-person medical care at the medical provider within the temporal window to an impactability predictive model (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by sub-models 1 and 2 transmitting the predictability determination and associated hospitalization risk score to sub-model 3 after it is deemed “predictable”); executing, by the one or more processors, the impactability predictive model using at least a subset of the scoring data to generate an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by the calculation of a hospitalization risk score by sub-model 3 using patient medical data).
While Filikov does teach the generation of an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window, Filikov fails to specifically teach that such probability takes into account patient behavior responsive to receiving a notification from the medical provider, which is taught by McBride (see: McBride, paragraph 11, Fig. 1, is met by a probability being calculated that indicates the likelihood of a no-show for an appointment associated with a patient, where one of the input data categories is engagement with reminders from a medical provider). Filikov further fails to specifically teach the following limitations, which are taught by McBride: wherein the impactability predictive model is trained via training data comprising a plurality of identifiers associated with a plurality of patients, each identifier of the plurality of identifiers indicative of whether a respective patient of the plurality of patients accessed in-person medical care at one or more medical providers responsive to receiving the notification from the one or more medical providers (see: McBride, paragraphs 11 and 18-19, Fig. 1, is met by the no-show predictive model being trained using historical no-shows of various patients); and sending, by the one or more processors, a message to a client device instructing the client device to present at least one of the risk score or the impactability score (see: McBride, Abstract, paragraphs 14-15, is met by the display of the calculated no-show probability on a virtual calendar via a user device).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data used to train the hospitalization risk sub-models of Filikov to include data indicating historical no-shows of various patients, to modify the hospitalization risk score of Filikov to take into account patient behavior that is in response to reminders from a provider, and to modify the capabilities of Filikov to include the transmission of an alert and/or reminder, regarding and including the no-show score, to a user via a mobile device, as taught by McBride, with the motivation of improving patient scheduling to minimize the negative impact of a no-show on the medical professional (see: McBride, Abstract).
As per Claim 6, Filikov and McBride teach the limitations of Claim 1. Filikov further teaches wherein the scoring data comprises at least one of medical data associated with the patient, medical image scores associated with the patient, social determinants of health scores associated with the patient, or clinician linkages associated with the patient (see: Filikov, Introduction and 2. Materials and Methods, is met by the models being based on electronic medical record (EMR) data and diagnoses).
As per Claim 8, Filikov and McBride teach the limitations of Claim 1. Filikov further teaches wherein the presentation comprises a graph [containing] a first axis [and] a second axis (see: Filikov, Fig. 4, is met by the graphical depiction of the accuracy of both sub-models 2 and 3).
McBride further teaches the impactability score (see: McBride, paragraph 11, Fig. 1, is met by a probability being calculated that indicates the likelihood of a no-show for an appointment associated with a patient, where one of the input data categories is engagement with reminders from a medical provider) and a number of interventions (see: McBride, paragraphs 11 and 16-17, Figs. 1-2, is met by a mitigating response being determined and provided every time the likelihood of a no-show is of concern).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical display of Filikov and McBride to merely present the data categories taught by McBride of a no-show probability and the frequency of mitigating responses graphically against one another, with the motivation of improving patient scheduling in order to reduce the impact of patient no-shows (see: McBride, Abstract).
As per Claim 9, Filikov and McBride teach the limitations of Claim 1. Filikov further teaches wherein the presentation comprises a graphical representation of an accuracy value associated with the risk predictive model or the impactability predictive model (see: Filikov, Fig. 4, is met by the graphical depiction of the accuracy of both sub-models 2 and 3).
As per Claim 10, Filikov and McBride teach the limitations of Claim 1. Filikov further teaches wherein the risk predictive model is trained with a first set of training data and the impactability predictive model is trained with a second set of training data different from the first set of training data (see: Filikov, 2.6 Models and analysis, is met by sub-models 2 and 3 being trained on different subsets of the training data from sub-model 1).
As per Claim 11, Filikov teaches a system for executing a plurality of interconnected predictive models, the system comprising: a server comprising a processor and a non-transitory computer-readable medium containing instructions that when executed by the processor causes the processor to perform operations (see: Filikov, Abstract through 3. Results, is met by a machine learning software algorithm, wherein one of ordinary skill in the art would recognize that software automation of a process requires a computer (see MPEP § 2114(IV)) with a processor, memory, and corresponding software instructions, as further evidenced by V. Sze, Y.-H. Chen, J. Emer, A. Suleiman and Z. Zhang, "Hardware for machine learning: Challenges and opportunities," 2017 IEEE Custom Integrated Circuits Conference (CICC), Austin, TX, USA, 2017, pp. 1-8, doi: 10.1109/CICC.2017.7993626, in § V. Opportunities in Architectures) comprising: receive a request for a risk score associated with a patient of one or more medical providers (see: Filikov, 2.2 Data Sources, Figs. 1-2, is met by the inputting of patient data into the multi-model architecture for prediction of hospitalization); execute a risk predictive model using scoring data associated with the patient to generate the risk score indicative of a probability of the patient to access in-person medical care at a medical provider within a temporal window that is subsequent to the processor receiving the request for the risk score (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by sub-model 2 computing and assigning a hospitalization risk to each patient using patient medical data), wherein the risk predictive model is trained via training data comprising at least one of medical data, medical image scores each indicative of a probability that a respective patient of a plurality of patients has a medical illness, social determinants of health scores each associated with a respective neighborhood, and clinician linkages each indicative of a degree of relationship between a plurality of physicians (see: Filikov, 2.2 Data Sources, is met by the training of sub-model 2 using a subset of the PULSE Healthcare survey data); responsive to determining that the risk score satisfies a criteria, transmitting the risk score indicative of the probability of the patient to access in-person medical care at the medical provider within the temporal window to an impactability predictive model (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by sub-models 1 and 2 transmitting the predictability determination and associated hospitalization risk score to sub-model 3 after it is deemed “predictable”); execute an impactability predictive model using at least a subset of the scoring data to generate an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window (see: Filikov, 2.6 Models and analysis, Fig. 2, is met by the calculation of a hospitalization risk score by sub-model 3 using patient medical data).
While Filikov does teach the generation of an impactability score indicative of a probability that the patient would not access the in-person medical care at the medical provider within the temporal window, Filikov fails to specifically teach that such probability takes into account patient behavior responsive to receiving a notification from the medical provider, which is taught by McBride (see: McBride, paragraph 11, Fig. 1, is met by a probability being calculated that indicates the likelihood of a no-show for an appointment associated with a patient, where one of the input data categories is engagement with reminders from a medical provider). Filikov further fails to specifically teach the following limitations, which are taught by McBride: wherein the impactability predictive model is trained via training data comprising a plurality of identifiers associated with a plurality of patients, each identifier of the plurality of identifiers indicative of whether a respective patient of the plurality of patients accessed in-person medical care at one or more medical providers responsive to receiving the notification from the one or more medical providers (see: McBride, paragraphs 11 and 18-19, Fig. 1, is met by the no-show predictive model being trained using historical no-shows of various patients); and send a message to a client device instructing the client device to present at least one of the risk score or the impactability score (see: McBride, Abstract, paragraphs 14-15, is met by the display of the calculated no-show probability on a virtual calendar via a user device).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the data used to train the hospitalization risk sub-models of Filikov to include data indicating historical no-shows of various patients, to modify the hospitalization risk score of Filikov to take into account patient behavior that is in response to reminders from a provider, and to modify the capabilities of Filikov to include the transmission of an alert and/or reminder, regarding and including the no-show score, to a user via a mobile device, as taught by McBride, with the motivation of improving patient scheduling to minimize the negative impact of a no-show on the medical professional (see: McBride, Abstract).
As per Claim 16, Filikov and McBride teach the limitations of Claim 11. Filikov further teaches wherein the scoring data comprises at least one of medical data associated with the patient, medical image scores associated with the patient, social determinants of health scores associated with the patient, or clinician linkages associated with the patient (see: Filikov, Introduction and 2. Materials and Methods, is met by the models being based on electronic medical record (EMR) data and diagnoses).
As per Claim 18, Filikov and McBride teach the limitations of Claim 11. Filikov further teaches wherein the presentation comprises a graph [containing] a first axis [and] a second axis (see: Filikov, Fig. 4, is met by the graphical depiction of the accuracy of sub-models 1, 2, and 3).
McBride further teaches the impactability score (see: McBride, paragraph 11, Fig. 1, is met by a probability being calculated that indicates the likelihood of a no-show for an appointment associated with a patient, where one of the input data categories is engagement with reminders from a medical provider) and a number of interventions (see: McBride, paragraphs 11 and 16-17, Figs. 1-2, is met by a mitigating response being determined and provided every time the likelihood of a no-show is of concern).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical display of Filikov and McBride to merely present the data categories taught by McBride of a no-show probability and the frequency of mitigating responses graphically against one another, with the motivation of improving patient scheduling in order to reduce the impact of patient no-shows (see: McBride, Abstract).
As per Claim 19, Filikov and McBride teach the limitations of Claim 11. Filikov further teaches wherein the presentation comprises a graphical representation of an accuracy value associated with the risk predictive model or the impactability predictive model (see: Filikov, Fig. 4, is met by the graphical depiction of the accuracy of sub-models 1, 2, and 3).
As per Claim 20, Filikov and McBride teach the limitations of Claim 11. Filikov further teaches wherein the risk predictive model is trained with a first set of training data and the impactability predictive model is trained with a second set of training data different from the first set of training data (see: Filikov, 2.6 Models and analysis, is met by sub-models 2 and 3 being trained on different subsets of the training data from sub-model 1).
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Filikov et al. (Filikov A, Pethe S, Kelley R, Fischer A, Ozminkowski R. Use of stratified cascade learning to predict hospitalization risk with only socioeconomic factors. J Biomed Inform. 2020;104:103393), hereinafter Filikov, in view of McBride (US 2020/0302358), hereinafter McBride, further in view of Butterfield (WO 2020/049404), hereinafter Butterfield.
As per Claim 2, Filikov and McBride teach the limitations of Claim 1. Filikov and McBride fail to specifically teach the following limitation, which is taught by Butterfield: generating, by the one or more processors, a social determinant of health score by executing a second predictive model based on a first set of publicly available data, the social determinant of health score being indicative of a health status within a geographical region associated with the patient (see: Butterfield, Abstract, paragraphs 23-24 and 28-29, is met by the generation of SDoH scores based on geographically specific data using a PCA machine learning infrastructure).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the functionalities of Filikov and McBride to include the generation of SDoH scores based on geographically specific data using a PCA machine learning infrastructure, as taught by Butterfield, with the motivation of bolstering the robustness of quantification techniques in their determination of the social impacts on health (see: Butterfield, paragraph 1).
As per Claim 12, Filikov and McBride teach the limitations of Claim 11. Filikov and McBride fail to specifically teach the following limitation, which is taught by Butterfield: generate a social determinant of health score by executing a second predictive model based on a first set of publicly available data, the social determinant of health score being indicative of a health status within a geographical region associated with the patient (see: Butterfield, Abstract, paragraphs 23-24 and 28-29, is met by the generation of SDoH scores based on geographically specific data using a PCA machine learning infrastructure).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the functionalities of Filikov and McBride to include the generation of SDoH scores based on geographically specific data using a PCA machine learning infrastructure, as taught by Butterfield, with the motivation of bolstering the robustness of quantification techniques in their determination of the social impacts on health (see: Butterfield, paragraph 1).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Filikov et al. (Filikov A, Pethe S, Kelley R, Fischer A, Ozminkowski R. Use of stratified cascade learning to predict hospitalization risk with only socioeconomic factors. J Biomed Inform. 2020;104:103393), hereinafter Filikov, in view of McBride (US 2020/0302358), hereinafter McBride, further in view of Wang (US 2019/0325215), hereinafter Wang.
As per Claim 4, Filikov and McBride teach the limitations of Claim 1. Filikov and McBride fail to specifically teach the following limitation(s), which is/are taught by Wang: wherein when the medical image scores are included in the training data, the medical image scores are generated by a second predictive model based on a plurality of medical images and a plurality of medical diagnosis labels (see: Wang, paragraphs 102 and 104-105, is met by the generation of disease level score vectors from medical image data), each medical image of the plurality of medical images are associated with a respective medical diagnosis label of the plurality of medical diagnosis labels (see: Wang, paragraph 98, is met by the training of the neural network using medical imaging data that contains an associated diagnosis result).
It would have been obvious to one of ordinary skill in the art, at the time the invention was filed, to modify the functionalities of Filikov and McBride to include the generation of disease level score vectors from patient medical images using a convolutional neural network, which is trained on medical images and associated diagnosis results, as taught by Wang, with the motivation of generating a disease diagnosis result from patient medical images early in disease progression (see: Wang, paragraph 3).
As per Claim 14, Filikov and McBride teach the limitations of Claim 11. Filikov and McBride fail to specifically teach the following limitation(s), which is/are taught by Wang: wherein when the medical image scores are included in the training data, the medical image scores are generated by a second predictive model based on a plurality of medical images and a plurality of medical diagnosis labels (see: Wang, paragraphs 102 and 104-105, is met by the generation of disease level score vectors from medical image data), each medical image of the plurality of medical images are associated with a respective medical diagnosis label of the plurality of medical diagnosis labels (see: Wang, paragraph 98, is met by the training of the neural network using medical imaging data that contains an associated diagnosis result).
It would have been obvious to one of ordinary skill in the art, at the time the invention was filed, to modify the functionalities of Filikov and McBride to include the generation of disease level score vectors from patient medical images using a convolutional neural network, which is trained on medical images and associated diagnosis results, as taught by Wang, with the motivation of generating a disease diagnosis result from patient medical images early in disease progression (see: Wang, paragraph 3).
Claim(s) 7 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Filikov et al. (Filikov A, Pethe S, Kelley R, Fischer A, Ozminkowski R. Use of stratified cascade learning to predict hospitalization risk with only socioeconomic factors. J Biomed Inform. 2020;104:103393), hereinafter Filikov, in view of McBride (US 2020/0302358), hereinafter McBride, further in view of Obee (US 2020/0272740), hereinafter Obee.
As per Claim 7, Filikov and McBride teach the limitations of Claim 6. Filikov and McBride fail to specifically teach the following limitation(s), which is/are taught by Obee: wherein clinical linkage indicates a degree of relationship between the one or more medical providers (see: Obee, paragraph 118, is met by the generation of a risk score based on a proximity degree of a relationship between providers in a network).
It would have been obvious to one of ordinary skill in the art, at the time the invention was filed, to modify the scoring data of Filikov and McBride to include a risk score based on a proximity degree of a relationship between providers in a network, as taught by Obee, with the motivation of considering multiple relationships based on one or more properties of each relationship among the multiple relationships (see: Obee, paragraph 118).
As per Claim 17, Filikov and McBride teach the limitations of Claim 16. Filikov and McBride fail to specifically teach the following limitation(s), which is/are taught by Obee: wherein clinical linkage indicates a degree of relationship between the one or more medical providers (see: Obee, paragraph 118, is met by the generation of a risk score based on a proximity degree of a relationship between providers in a network).
It would have been obvious to one of ordinary skill in the art, at the time the invention was filed, to modify the scoring data of Filikov and McBride to include a risk score based on a proximity degree of a relationship between providers in a network, as taught by Obee, with the motivation of considering multiple relationships based on one or more properties of each relationship among the multiple relationships (see: Obee, paragraph 118).
Response to Arguments
The arguments submitted by Applicant in the filing dated June 27, 2025, have been acknowledged and will be addressed below in the order in which they appear.
Response to Arguments Under 35 U.S.C. § 101:
In the Remarks, Applicant argues in substance that (1) the rejection of the claims under 35 U.S.C. 101 should be withdrawn because the claims are not directed to an abstract idea.
Examiner respectfully disagrees; such arguments are unpersuasive. Applicant asserts that the claims are not directed to an abstract idea but are instead directed toward “an electronic roadmap that indicates how two AI models can work together and not organizing human activity.” This is not so. The claims, when analyzed as a whole, are directed to analyzing patient data to generate a risk or impactability score, either of which is used to inform future healthcare decisions regarding the patient. Because the administration of health or medical services falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, the claims are directed to an abstract idea. Furthermore, the models are merely used to implement this abstract idea by automating the analytical and predictive processes. The application of these technologies, which are recited at a high level of generality, does not detract from the recitation of an abstract idea. Instead, the use of these models and their associated computer hardware amounts to mere instructions to apply the abstract idea using a general-purpose computer. Finally, while the claims are interpreted in light of the specification, the “specific, two-stage machine-learning architecture” that Applicant cites from the disclosure is not recited in the claims themselves.
In the Remarks, Applicant argues in substance that (2) the rejection of the claims under 35 U.S.C. 101 should be withdrawn because the claims improve conventional software solutions.
Examiner respectfully disagrees; such arguments are unpersuasive. Applicant argues that the present claims reflect an improvement to the analysis of healthcare-specific data because the claims model both risk and impactability. However, this improvement is directed to the abstract idea of analyzing healthcare data itself. It is not that conventional software could not model this data to produce an impactability score; rather, as Applicant alleges, though Examiner does not concede, conventional software has simply not been directed to this application of data analysis. As such, no technical problem is being overcome; instead, the abstract endeavor of analyzing patient data is being improved through a different frame of analysis, namely calculating a risk score and an impactability score. Applicant’s asserted aggregation of four disparate data classes and their conversion to a “vector” (i.e., a score) is not a technical solution but rather mere analysis of data. Furthermore, conventional software readily determines whether further analysis is needed by “meeting a threshold of risk score and impactability score” for the minority of patients. Because the improvement is directed to the abstract idea and not to the functionality of the software technology, said improvement fails to integrate the abstract idea into a practical application.
Response to Arguments Under 35 U.S.C. § 103:
In the Remarks, Applicant argues in substance that (3) the rejection of the claims under 35 U.S.C. 103 should be withdrawn because the cited references, alone or in combination, fail to teach all of the elements of the independent claims.
Examiner respectfully disagrees; such arguments are unpersuasive. Applicant has misconstrued Examiner’s interpretation of the prior art Filikov, which does not allege that sub-model 2 transmits a risk score to sub-model 3 by virtue of its communication of a “predictable” determination. To clarify, the initial risk score is generated by sub-model 1, and that risk score passes through sub-model 2 to sub-model 3 (the impactability model) only if it meets the necessary criterion for passing through sub-model 2 (predictable). As such, sub-model 2 is merely the filter through which the risk score of sub-model 1 is processed, and sub-model 3 outputs the eventual impactability score based on sub-model 1’s risk score determination. Therefore, as Examiner interprets the prior art, Filikov adequately teaches the limitations of independent Claims 1 and 11, and Examiner respectfully disputes Applicant’s arguments to the contrary.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jason Dunham whose telephone number is 571-272-8109. The examiner can normally be reached M-F, 7-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Deborah Reynolds, can be reached at 571-272-0734. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON B DUNHAM/Supervisory Patent Examiner, Art Unit 3686