DETAILED ACTION
Status of Claims
Notice of Pre-AIA or AIA Status - The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in reply to an amendment filed on 01/28/2026.
Claims 1, 10, 12 and 18 have been amended.
Claims 8, 11, 19 and 23 have been cancelled.
Claims 24-26 have been newly added.
Claims 1-7, 9-10, 12-18, 20-22 and 24-26 are currently pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 9-10, 12-18, 20-22 and 24-26 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
Claims 1-7, 18, 20-22 and 24-26 are directed to a method (i.e., a process) and claims 10 and 12-17 are directed to a system (i.e., a machine). Accordingly, claims 1-7, 9-10, 12-18, 20-22 and 24-26 are all within at least one of the four statutory categories.
Step 2A - Prong One:
Regarding Prong One of Step 2A, the claim limitations are to be analyzed to determine whether they “recite” a judicial exception or in other words whether a judicial exception is “set forth” or “described” in the claims. An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Representative independent claim 10 includes limitations that recite an abstract idea. Note that independent claim 10 is the system claim, while independent claims 1 and 18 are method claims.
Specifically, representative independent claim 10 recites:
A computer-implemented system for integrating modulated output from each of a plurality of models to quantify factors to generate a plurality of values, each within a continuous distribution, the system comprising:
an imputation engine configured to impute at least one value that is not included in input associated with a person, by generating automatically the at least one value;
a first data model built by deriving information collected from at least one data source and running on a computing device, the first data model quantifying each of a plurality of measurable endpoints associated with a person that contribute to a current state and a likelihood of a future occurrence of a health condition, wherein each quantified endpoint is calculated within a first continuous distribution, at least as a function of the respective measurable endpoints and as a function of an estimate of risk associated with the health condition;
a second data model built by deriving information collected from at least one data source and running on at least one computing device, the second data model generating respective values representing a plurality of aspects of the person’s health that impact the likelihood of the future occurrence, wherein each of the respective generated values is generated within a second continuous distribution at least as a function of at least some of the input and/or the at least one imputed value that informs the aspects, and as a function of data used to build the second data model;
a third data model built by deriving information collected from at least one data source and running on the at least one computing device, the third data model identifying individual ones of a plurality of factors associated with a subset of the endpoints and the aspects that are individually modifiable to affect the likelihood of the future occurrence, wherein the plurality of factors are identified at least in part as a function of input automatically received from at least one device associated with the person, and generating, for each of the identified plurality of factors, a respective value within a third continuous distribution and respectively representing one of the plurality of factors;
a modulating model running on the at least one computing device that modulates at least one of the quantified endpoints by the first data model and at least one of the values representing the plurality of aspects generated by the second data, to scale a value representing at least one aspect associated with the likelihood of the future occurrence, wherein the modulating is based on at least one of the plurality of factors;
at least one of artificial intelligence and machine learning comprised in the at least one computing device that integrates at least two of the values associated with each of the first, second, and third continuous distributions, wherein the integrated at least two of the values represents the likelihood of the future occurrence of the health condition; and
transmitting, without human intervention by the at least one computing device to at least one other computing device, the integrated at least two of the values.
The Examiner submits that the foregoing underlined limitations constitute: (a) certain methods of organizing human activity because quantifying measurable endpoints that contribute to determining the likelihood of a medical event occurring, calculating and estimating a risk associated with a health condition, modulating to scale a value, and integrating two values associated with continuous distributions representing the likelihood of a health condition all relate to managing human behavior/interactions between people. Furthermore, the foregoing underlined limitations alternately constitute (b) a "mental process" because imputing a value not included in input associated with a person by generating the value, and deriving information collected from sources associated with a person to identify the likelihood that a future condition will occur, are medical advice and clinical observations/evaluations/analysis that can be performed in the human mind or with pen and paper. The foregoing underlined limitations also appear in claims 1 and 18 (similarly to claim 10).
Accordingly, the claim describes at least one abstract idea.
Turning to the dependent claims, these claims further define the abstract idea. Claims 2-4, 6-9, 12-13, 15-16, 20-21 and 25-26 describe determining steps such as: claim 2 – imputing at least one value that is not included in a set of inputs used by the first data model; claims 3 & 12 – imputing at least one other value that is not included in the previously imputed value(s) or one of the quantified endpoints, wherein the at least one other imputed value depends on at least one of previously imputed value and wherein the imputed at least one other value is within a continuous distribution; claims 4 & 13 – recalibrating as a function of information received over time or information received from a plurality of data sources; claim 6 – configuring at least some of the values and the aspects, receiving at least some of the values and aspects, transmitting, without human intervention, the quantified values associated with the at least some of the endpoints, the quantified values associated with the at least some of the aspects, and the generated values associated with at least some of the factors respectively from the first data model, the second data model, and the third data model, and transmitting, without human intervention, the integrated at least two of the values, wherein the values and the integrated at least two of the values received are displayed; claims 7 & 16 – regularly and periodically prompting a user to enter values associated with the factors, when values associated with the factors are not received subsequent to previously received values; claim 9 – the values are calculated in the continuous distribution as a function of parametric non-linear mapping; claim 15 – receiving at least some of the values and the aspects, receiving, without human intervention, the quantified values associated with the at least some of the endpoints, the quantified values associated with the at least some of the aspects, and the generated values, receiving, without human intervention, the integrated at least two of the values, and displaying the values and the integrated at least two of the values received; claim 20 – integrating at least two of the values associated with each of the first, second, and third continuous distributions, wherein the integrated at least two of the values represents the likelihood of the future occurrence of the health condition; claim 21 – the integrating of the at least two of the values is performed using at least one of artificial intelligence and machine learning; claim 25 – the at least one imputed value is generated via an imputation hierarchy; and claim 26 – increasing imputation accuracy as a function of an order of a plurality of imputed values.
Claims 5, 14, 17, 22 and 24 describe types of data gathering such as: claims 5 & 14 - the first data model uses the respective endpoints as input features in a fitting procedure, claim 17 - at least one of the first data model, the second data model and the third data model comprise a selection of at least two other data models, claim 22 - at least one of the first data model, the second data model, and the third data model comprise at least one of artificial intelligence and machine learning and claim 24 - the at least one other imputed value is based on at least one of previously imputed value. As such, these are all similar to features in the independent claim in that they are manual steps that are applied on a computer or insignificant extra solution activity.
Step 2A - Prong Two:
Regarding Prong Two, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. As noted, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The limitations of claims 1, 10 and 18, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting a computer-implemented system including at least one computing device, display screens and a graphical user interface on the user computing device to perform the limitations, nothing in the claim elements precludes the steps from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation within a health care environment in the mind but for the recitation of generic computer components, then it falls within the "Certain Methods of Organizing Human Activity" and "Mental Process" groupings of abstract ideas. Accordingly, the claims recite an abstract idea. The Examiner notes that performing the abstract idea "without human intervention" is a consequence of confining the abstract idea to a general-purpose computer.
The judicial exception is not integrated into a practical application. In particular, the computer-implemented system including at least one computing device, display screens, a software application that provides a graphical user interface and a graphical user interface on the user computing device are recited at high levels of generality (i.e., as generic computer components performing generic computer functions of receiving data/inputs, determining and providing data) such that they amount to no more than mere instructions to apply the exception using the generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-7, 9-10, 12-18, 20-22 and 24-26 are directed to an abstract idea.
The claims further recite the additional elements of "at least one of artificial intelligence and machine learning," "building a first data model," "building a second data model," and "building a third data model," which merely link the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Regarding the additional limitation “automatically received from at least one device associated with the person,” the Examiner submits that this additional limitation merely adds insignificant pre-solution activity (data gathering; selecting data to be manipulated) to the at least one abstract idea (see MPEP § 2106.05(g)).
Claims 1 and 18 (similar to claim 10) do not have any additional elements.
Regarding dependent claims 2-7, 9, 12-17, 20-22 and 24-26, the at least one computing device, user computing device, software application and graphical user interface also are recited at high levels of generality (i.e., as generic computer components performing generic computer functions of receiving data/inputs, determining and providing data) such that they amount to no more than mere instructions to apply the exception using the generic computer components.
The limitations, viewed as an ordered combination, add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (see MPEP §2106.05). Their collective functions merely provide conventional computer implementation.
Claims 2-7, 9, 12-17, 20-22 and 24-26 ultimately depend from claims 1, 10 and 18 and include all the limitations of claims 1, 10 and 18. Therefore, claims 2-7, 9, 12-17, 20-22 and 24-26 recite the same abstract idea. Claims 2-3, 15 and 20-22 describe data gathering; claims 14 and 17 describe what the data is, such as input features in a fitting procedure and at least two other data models. Claims 7 and 16 describe displaying data. Claims 4, 9 and 12-13 describe determining data, such as recalibrating models, calculating in the continuous distribution as a function of parametric non-linear mapping, imputing a value that is not included in a set of inputs used by the first data model, imputing a value that is not included in the previously imputed values or one of the quantified endpoints, where the other imputed value depends on a previously imputed value, and weighing a plurality of risks by combining probabilities or by using an averaging procedure. Claim 5 describes what the system is merely associated with, and claim 6 describes sending data. These all further describe the abstract idea recited in claims 1, 10 and 18, without adding significantly more.
Step 2B:
Regarding Step 2B, in representative independent claim 10, regarding the additional limitations of the computer-implemented system including the at least one computing device, display screens and graphical user interface on the user computing device, the Examiner submits that these limitations amount to merely using a computer to perform the at least one abstract idea, which is insufficient to provide significantly more (see MPEP § 2106.05(f)).
Thus, representative independent claim 10 and analogous independent claims 1 and 18 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claims do not integrate the abstract idea into a practical application.
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "at least one of artificial intelligence and machine learning," "building a first data model," "building a second data model," and "building a third data model" were determined to generally link the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the "significantly more" analysis and has also been found insufficient to provide significantly more. MPEP 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide significantly more. Accordingly, even in combination, these additional elements do not provide significantly more.
The dependent claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.
Therefore, claims 1-7, 9-10, 12-18, 20-22 and 24-26 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 10, 12-14, 17-18, 21-22 and 24-26 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Haber (US 2015/0112710 A1).
Claim 1:
Haber discloses a computer-implemented method for integrating modulated output from each of a plurality of models to quantify factors to generate a plurality of values, each within a continuous distribution (See Figs. 2A-2B, development of multiple models mentioned in P0044, P0068-P0075. Also, see Fig. 8, P0180-P0185.), the method comprising:
accessing, by at least one computing device, input associated with a person (Besides useful patient records in P0049, collecting data from monitored patient in P0104, see P0147-P0148 accessing relevant information when building knowledge bases.);
imputing, by at least one computing device, at least one value that is not included in the input by generating automatically the at least one value (Taught in P0087 as missing data and value.);
building a first data model by deriving information collected from at least one data source (See P0107 as data sources feed the clinical predictive analytics inputs and P0111 as continuously updated new risk variable data and scoring. Also, see 254 in Fig. 2B training data set #1.);
quantifying, by the first data model running on at least one computing device, each of a plurality of measurable endpoints associated with a person that contribute to a current state and a likelihood of a future occurrence of a health condition, wherein each quantified endpoint is calculated within a first continuous distribution, at least as a function of the respective measurable endpoints and as a function of an estimate of risk associated with the health condition (Taught in Fig. 2B, P0048 as candidate model 252, baseline and dynamic outcome likelihood model 276 mentioned in P0048-P0049, [P0069-P0072] the dynamic outcome likelihood model form 280 is created by defining an outcome likelihood model component, a component of the outcome likelihood model form 262, corresponding to each dynamic risk variable and comparing the component magnitude when calculated with an actual dynamic risk variable value to the component magnitude when calculated with an estimated value of the dynamic risk variable produced by a dynamic risk variable model form.);
building a second data model by deriving information collected from at least one data source;
generating, by the second data model running on the at least one computing device, respective values representing a plurality of aspects of the person's health that impact the likelihood of the future occurrence, wherein each of the respective generated values is generated within a second continuous distribution at least as a function of at least some of the input and/or the at least one imputed value that informs the aspects, and as a function of data from the at least one data source used to build the second data model (Taught in Fig. 2A-2B, as outcome likelihood model 260, and outcome likelihood model 262, where identified set of candidate model risk variables serve as respective values representing a plurality of aspects of the person's health mentioned in [P0048-P0049] The outcome specific etiological knowledge 256 identifies risk factors, e.g., concepts that characterize factors that are of interest in predicting the likelihood of a particular adverse outcome for which a model is to be trained. In general, the outcome specific etiological knowledge 256 includes information that pertains to outcome specific causal factors that may lead to an associated adverse outcome of interest.);
building a third data model by deriving information collected from at least one data source;
identifying, by the third data model running on the at least one computing device, individual ones of a plurality of factors associated with a subset of the endpoints and the aspects that are individually modifiable to affect the likelihood of the future occurrence, wherein the plurality of factors are identified at least in part as a function of input automatically received from at least one device associated with the person, and generating, for each of the identified plurality of factors, a respective value within a third continuous distribution and respectively representing one of the plurality of factors (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as a plurality of factors associated with a subset of the endpoints and the model fitting process 282 serves as the aspects that are individually modifiable to affect the likelihood of the future occurrence, generating, for each of the identified plurality of factors, a respective value within a third continuous distribution.);
modulating, by a modulating model running on the at least one computing device, at least one of the quantified endpoints by the first data model and at least one of the values representing the plurality of aspects generated by the second data model, to scale a value representing at least one aspect associated with the likelihood of the future occurrence, wherein the modulating is based on at least one of the plurality of factors (See Fig. 2B, P0074-P0076, where the model fitting process determines the statistical parameters associated with each model and outputs an outcome likelihood model 286, baseline outcome likelihood model 288, dynamic outcome likelihood model 290, and dynamic risk variable models 292, which quantify the endpoints, generated by the second data model, to scale values representing an aspect associated with the likelihood of the future occurrence.);
using at least one of artificial intelligence and machine learning comprised in the at least one computing device (See P0181-P0185 where learning system and training process serve as machine learning.), to integrate at least two of the values associated with each of the first, second, and third continuous distributions, wherein the integrated at least two of the values represents the likelihood of the future occurrence of the health condition (See at least Abstract, P0058-P0059 demographic, clinical variables, two categories of data as baseline data and dynamic data, exemplary values representing the likelihood of the future occurrence of the health condition.); and
transmitting, without human intervention by the at least one computing device to at least one other computing device, the integrated at least two of the values (Taught as predetermined thresholds mentioned in P0078-P0080, shown in Fig. 2B Alerting and Attribution Algorithm 294.).
Claim 2:
Haber further teaches:
imputing, by the at least one computing device, at least one value that is not included in a set of inputs used by the first data model (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as a value that is not included in the first data model.).
Claim 3:
Haber further teaches:
imputing, by the at least one computing device, at least one other value that is not included in the previously imputed value(s) or one of the quantified endpoints, wherein the at least one other imputed value depends on at least one of previously imputed value (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as a value that is not included in the first data model.) and wherein the imputed at least one other value is within a continuous distribution (See [P0069-P0070] The dynamic risk variable model development process 272 may employ all the data in the first training data set 254, or only a subset of the data in the first training data set 254.).
Claim 4:
Haber further teaches:
recalibrating at least one of the first data model, the second data model, and the third data model as a function of information received over time or information received from a plurality of data sources (See Fig. 4, [P0127-P0128] This operational information is useful, for instance, in assessing the quality of the system in an operational context and for recalibrating the models and software for operation.).
Claims 5 and 14:
Haber further teaches:
wherein the first data model uses the respective endpoints as input features in a fitting procedure (See Fig. 2B Model Fitting Process mentioned in P0075.).
Claim 10:
Haber discloses a computer-implemented system for integrating modulated output from each of a plurality of models to quantify factors to generate a plurality of values, each within a continuous distribution (See Fig. 1, processing devices mentioned in P0025; Figs. 2A-2B, development of multiple models mentioned in P0044, P0068-P0075. Also, see Fig. 8, P0180-P0185.), the system comprising:
an imputation engine configured to impute at least one value that is not included in input associated with a person, by generating automatically the at least one value (Besides useful patient records in P0049, collecting data from monitored patient in P0104, see P0147-P0148 accessing relevant information when building knowledge bases. Taught in P0087 as a missing data and value.);
a first data model built by deriving information collected from at least one data source (See P0107 as data sources feed the clinical predictive analytics inputs and P0111 as continuously updated new risk variable data and scoring. Also, see 254 in Fig. 2B training data set #1.) and
running on a computing device, the first data model quantifying each of a plurality of measurable endpoints associated with a person that contribute to a current state and a likelihood of a future occurrence of a health condition, wherein each quantified endpoint is calculated within a first continuous distribution, at least as a function of the respective measurable endpoints and as a function of an estimate of risk associated with the health condition (Taught in Fig. 2B, P0048 as candidate model 252, baseline and dynamic outcome likelihood model 276 mentioned in P0048-P0049, [P0069-P0072] the dynamic outcome likelihood model form 280 is created by defining an outcome likelihood model component, a component of the outcome likelihood model form 262, corresponding to each dynamic risk variable and comparing the component magnitude when calculated with an actual dynamic risk variable value to the component magnitude when calculated with an estimated value of the dynamic risk variable produced by a dynamic risk variable model form.);
a second data model built by deriving information collected from at least one data source and running on at least one computing device, the second data model generating respective values representing a plurality of aspects of the person's health that impact the likelihood of the future occurrence, wherein each of the respective generated values is generated within a second continuous distribution at least as a function of at least some of the input and/or the at least one imputed value that informs the aspects, and as a function of data used to build the second data model (Taught in Fig. 2A-2B, as outcome likelihood model 260, and outcome likelihood model 262, where identified set of candidate model risk variables serve as respective values representing a plurality of aspects of the person's health mentioned in [P0048-P0049] The outcome specific etiological knowledge 256 identifies risk factors, e.g., concepts that characterize factors that are of interest in predicting the likelihood of a particular adverse outcome for which a model is to be trained. In general, the outcome specific etiological knowledge 256 includes information that pertains to outcome specific causal factors that may lead to an associated adverse outcome of interest.);
a third data model built by deriving information collected from at least one data source and running on the at least one computing device, the third data model identifying individual ones of a plurality of factors associated with a subset of the endpoints and the aspects that are individually modifiable to affect the likelihood of the future occurrence, wherein the plurality of factors are identified at least in part as a function of input automatically received from at least one device associated with the person, and generating, for each of the identified plurality of factors, a respective value within a third continuous distribution and respectively representing one of the plurality of factors (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as a plurality of factors associated with a subset of the endpoints and the model fitting process 282 serves as the aspects that are individually modifiable to affect the likelihood of the future occurrence, generating, for each of the identified plurality of factors, a respective value within a third continuous distribution.);
a modulating model running on the at least one computing device that modulates at least one of the quantified endpoints by the first data model and at least one of the values representing the plurality of aspects generated by the second data, to scale a value representing at least one aspect associated with the likelihood of the future occurrence, wherein the modulating is based on at least one of the plurality of factors (See Fig. 2B, P0074-P0076, where the model fitting process determines the statistical parameters associated with each model and outputs an outcome likelihood model 286, baseline outcome likelihood model 288, dynamic outcome likelihood model 290, and dynamic risk variable models 292, which quantify the endpoints, generated by the second data model, to scale values representing an aspect associated with the likelihood of the future occurrence.);
at least one of artificial intelligence and machine learning comprised in the at least one computing device (See P0181-P0185, where the learning system and training process serve as machine learning.), that integrates at least two of the values associated with each of the first, second, and third continuous distributions, wherein the integrated at least two of the values represents the likelihood of the future occurrence of the health condition (See at least Abstract, P0058-P0059, where demographic and clinical variables, baseline data, and dynamic data serve as exemplary values representing the likelihood of the future occurrence of the health condition.); and
transmitting, without human intervention by the at least one computing device to at least one other computing device, the integrated at least two of the values (Taught as predetermined thresholds mentioned in P0078-P0080, shown in Fig. 2B Alerting and Attribution Algorithm 294.).
Claim 12:
Haber further teaches:
wherein the at least one computing device is further configured to impute at least one other value that is not included in the previously imputed value(s) or one of the quantified endpoints, wherein the at least one other imputed value depends on at least one of previously imputed value (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as one value that is not included in the first data model.) and further wherein the imputed other endpoint is within a continuous distribution (See [P0069-P0070] The dynamic risk variable model development process 272 may employ all the data in the first training data set 254, or only a subset of the data in the first training data set 254.).
Claim 13:
Haber further teaches:
at least one computing device configured to recalibrate at least one of the first data model, the second data model, and the third data model as a function of information received over time or information received from a plurality of data sources (See Fig. 4, [P0127-P0128] This operational information is useful, for instance, in assessing the quality of the system in an operational context and for recalibrating the models and software for operation.).
Claim 17:
Haber further teaches:
wherein at least one of the first data model, the second data model and the third data model comprise a selection of at least two other data models (See building predictive models in P0004, P0006, Fig. 3B, P0075 where baseline outcome likelihood model 288, dynamic outcome likelihood model 290, and dynamic risk variable models 292 serve as at least two other data models.).
Claim 18:
Haber discloses a computer-implemented method for integrating modulated output from each of a plurality of models to quantify factors to generate a plurality of values, each within a continuous distribution, (See the processing devices of Fig. 1 mentioned in P0025 and the development of multiple models in Figs. 2A-2B mentioned in P0044, P0068-P0075. Also, see Fig. 8, P0180-P0185.) the method comprising:
accessing, by at least one computing device, input associated with a person (Besides the useful patient records in P0049 and collecting data from a monitored patient in P0104, see P0147-P0148, accessing relevant information when building knowledge bases.);
imputing, by at least one computing device, at least one value that is not included in the input by generating automatically the at least one value (Taught in P0087 as missing data and values.);
building a first data model by deriving information collected from at least one data source (See P0107 as data sources feed the clinical predictive analytics inputs and P0111 as continuously updated new risk variable data and scoring. Also, see 254 in Fig. 2B training data set #1.);
quantifying, by the first data model running on at least one computing device, each of a plurality of measurable endpoints associated with a person that contribute to a current state and a likelihood of a future occurrence of a health condition, wherein each quantified endpoint is calculated within a first continuous distribution, at least as a function of the respective measurable endpoints and as a function of an estimate of risk associated with the health condition (Taught in Fig. 2B, P0048 as candidate model 252, baseline and dynamic outcome likelihood model 276 mentioned in P0048-P0049, [P0069-P0072] the dynamic outcome likelihood model form 280 is created by defining an outcome likelihood model component, a component of the outcome likelihood model form 262, corresponding to each dynamic risk variable and comparing the component magnitude when calculated with an actual dynamic risk variable value to the component magnitude when calculated with an estimated value of the dynamic risk variable produced by a dynamic risk variable model form.);
building a second data model by deriving information collected from at least one data source;
generating, by the second data model running on the at least one computing device, respective values representing a plurality of aspects of the person's health that impact the likelihood of the future occurrence, wherein each of the respective generated values is generated within a second continuous distribution at least as a function of at least some of the input and/or the at least one imputed value that informs the aspects, and as a function of data from the at least one data source used to build the second data model (Taught in Figs. 2A-2B as outcome likelihood model 260 and outcome likelihood model 262, where the identified set of candidate model risk variables serves as the respective values representing a plurality of aspects of the person's health mentioned in [P0048-P0049]: The outcome specific etiological knowledge 256 identifies risk factors, e.g., concepts that characterize factors that are of interest in predicting the likelihood of a particular adverse outcome for which a model is to be trained. In general, the outcome specific etiological knowledge 256 includes information that pertains to outcome specific causal factors that may lead to an associated adverse outcome of interest.);
building a third data model by deriving information collected from at least one data source;
identifying, by the third data model running on the at least one computing device, individual ones of a plurality of factors associated with a subset of the endpoints and the aspects that are individually modifiable to affect the likelihood of the future occurrence, wherein the plurality of factors are identified at least in part as a function of input automatically received from at least one device associated with the person, and generating, for each of the identified plurality of factors, a respective value within a third continuous distribution and respectively representing one of the plurality of factors (See Fig. 2B, P0074-P0076, where the second training data set 284 serves as a plurality of factors associated with a subset of the endpoints, and the model fitting process 282 serves as the aspects that are individually modifiable to affect the likelihood of the future occurrence, generating, for each of the identified plurality of factors, a respective value within a third continuous distribution.); and
modulating, by a modulating model running on the at least one computing device, at least one of the quantified endpoints by the first data model and the at least one of the values representing the plurality of aspects generated by the second data model, to scale a value representing at least one aspect associated with the likelihood of the future occurrence, wherein the modulating is based on at least one of the plurality of factors (See Fig. 2B, P0074-P0076, where the statistical parameters associated with each model, together with the output outcome likelihood model 286, baseline outcome likelihood model 288, dynamic outcome likelihood model 290, and dynamic risk variable models 292, quantify the endpoints generated by the second data model to scale values representing aspects associated with the likelihood of the future occurrence. See at least Abstract, P0058-P0059, where demographic and clinical variables, baseline data, and dynamic data serve as exemplary values representing the likelihood of the future occurrence of the health condition.).
Regarding claim 21, Haber further teaches:
integrating at least two of the values associated with each of the first, second and third continuous distributions, and selecting a respective discrete category associated with the integrated values, wherein each of the integrated values and the selected category represent the likelihood of a future occurrence (Integrating values is taught by the selected non-modifiable and modifiable risk variables in P0004-P0006, P0033-P0036, when building multiple models and estimating the likelihood of the adverse outcome type.).
Regarding claim 22, Haber further teaches:
wherein at least one of the first data model, the second data model, and the third data model comprise at least one of artificial intelligence and machine learning (See P0181-P0185, where the learning system and training process serve as machine learning, and the selected non-modifiable and modifiable risk variables in P0004-P0006, P0033-P0036, when building multiple models and estimating the likelihood of the adverse outcome type.).
Regarding claim 24, Haber discloses wherein the at least one other imputed value is based on at least one of previously imputed value (See [P0111] The risk variables are evaluated against the previously modeled outcome likelihood model. Also, see P0177.).
Regarding claim 25, Haber discloses further comprising: wherein the at least one imputed value is generated via an imputation hierarchy (Taught in P0118 as a value greater than or equal to a threshold value and in P0174 as prioritized new clinical data types.).
Regarding claim 26, Haber discloses further comprising: increasing imputation accuracy as a function of an order of a plurality of imputed values (Taught in P0126-P0127 as continued, periodic improvements for model refinement.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Haber (US 2015/0112710 A1) in view of Stivoric (US 2014/0180598 A1).
Claim 6:
Although Haber discloses the method of claim 1 mentioned above, with a graphical user interface (See P0117), Haber does not explicitly teach the graphical user interface displaying a data communication session transmitting the quantified values associated with the endpoints, the quantified values associated with the aspects, and the generated values associated with the factors respectively from the first data model, the second data model, and the third data model. Stivoric teaches:
configuring a user computing device with a software application that provides a graphical user interface on the user computing device, wherein the graphical user interface receives at least some of the values and the aspects from a user operating the user computing device (See P0128, where operational models and software are applied using processing devices such as smart phones (P0025).);
receiving, by the at least one computing device from the user computing device over a data communication session, at least some of the values and aspects;
transmitting, by the at least one computing device to the user computing device, the quantified values associated with the at least some of the endpoints, the quantified values associated with the at least some of the aspects, and the generated values associated with at least some of the factors respectively from the first data model, the second data model, and the third data model (See Fig. 4, P0068-P0069, obtaining, transmitting, inputting, and monitoring lifestyle data over time with sensors, transducers, data integration, software services, inputs, and user interfaces, taught as the graphical user interface; video and audio delivered content (P0017); and activity interactions as sessions (P0060).);
wherein the user computing device is further configured by the software application to: display the received values received from the at least one computing device (See Fig. 29A, example display screen for reporting outcome mentioned in P0158.).
Therefore, it would have been obvious to one of ordinary skill in the art of diagnosing medical conditions before the filing date of the invention to modify the method, software and system of Haber such that the graphical user interface displays a data communication session transmitting the quantified values associated with the endpoints, the quantified values associated with the aspects, and the generated values associated with the factors respectively from the first data model, the second data model, and the third data model, as taught by Stivoric, to allow a systematic way of analyzing lifestyle data, as mentioned in Stivoric’s P0004-P0006.
Regarding claim 21, although Haber discloses the method of claim 20 mentioned above, Haber does not explicitly teach that integrating the at least two of the values and selecting the respective discrete category are performed using at least one of artificial intelligence and machine learning. Stivoric further teaches:
wherein the at least two of the values and selecting the respective discrete category is performed using at least one of artificial intelligence and machine learning (Artificial intelligence (P0010, P0014, P0137) and the artificial intelligence engine (P0172) utilized by the Platform may generate reports, indexes, and predictions; the Examiner construes these multiple findings as selecting the respective discrete category. Reinforcement learning (P0014, P0130, P0145) with a continuous distribution serves as the machine learning analysis.).
Therefore, it would have been obvious to one of ordinary skill in the art of diagnosing medical conditions before the filing date of the invention to modify the method, software and system of Haber such that integrating the at least two of the values and selecting the respective discrete category are performed using at least one of artificial intelligence and machine learning, as taught by Stivoric, to allow a systematic way of analyzing lifestyle data, as mentioned in Stivoric’s P0004-P0006.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Haber (US 2015/0112710 A1) in view of Hong (U.S. 2019/0259499 A1).
Regarding claim 15, although Haber discloses the system of claim 10, including a software application that, when executed on a user computing device, provides a graphical user interface that receives at least some of the values and the aspects from a user operating the user computing device, as mentioned above, and receives, without human intervention, the quantified values associated with the at least some of the endpoints, the quantified values associated with the at least some of the aspects, and the generated values associated with at least some of the factors respectively from the first data model, the second data model, and the third data model (See Haber’s Fig. 2B, P0074-P0076, where the statistical parameters associated with each model, together with the output outcome likelihood model 286, baseline outcome likelihood model 288, dynamic outcome likelihood model 290, and dynamic risk variable models 292, quantify the endpoints generated by the second data model to scale values representing aspects associated with the likelihood of the future occurrence.), Haber does not explicitly teach receiving and displaying at least two of the values without human intervention. Hong teaches a software application that, when executed, causes the computing device to:
receive, without human intervention, the integrated at least two of the values (See Fig. 1 (Item 120), P0036-P0037, where patient data (training input) is transmitted to a machine learning process in order to generate a training model capable of predicting scores.); and
display the values and the integrated at least two of the values received from the at least one computing device (See Fig. 4A, and the Total SOFA score and Component SOFA score as two values displayed in Fig. 4B, P0083-P0084.).
Therefore, it would have been obvious to one of ordinary skill in the art of medical machine learning before the filing date of the invention to modify the method, software and system of Haber to receive and display at least two of the values without human intervention, as taught by Hong, to promote confidence while implementing a preventative treatment plan for a patient to avoid multiple organ failure in the future.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Haber (US 2015/0112710 A1) in view of Cosentino (U.S. 8,419,650 B2).
Claim 7:
Although Haber discloses the method of claim 1 as mentioned above, Haber does not explicitly teach regularly and periodically prompting a user to enter values associated with the factors, and automatically providing interactive display screens when values associated with the factors are not received subsequent to previously received values. Cosentino further teaches:
configuring a user computing device with a software application that provides a graphical user interface on the user computing device, wherein the graphical user interface regularly and periodically prompts a user to enter values associated with the factors, and further wherein the graphical user interface automatically provides interactive display screens when values associated with the factors are not received subsequent to previously received values (Interactive display screens are construed in [column 3, line 39 to column 4, line 5]: The monitoring device may prompt the patient to provide responses to health-related questions or requests for physiological characteristics and may upload the responses.).
Therefore, it would have been obvious to one of ordinary skill in the art of medical display mapping before the filing date of the invention to modify the method, software and system of Haber to regularly and periodically prompt a user to enter values associated with the factors, and to automatically provide interactive display screens when values associated with the factors are not received subsequent to previously received values, as taught by Cosentino, to easily view an optimized display screen with updates of the patient’s vital signs.
Claim 16:
Although Haber discloses the system of claim 10 mentioned above, Haber does not explicitly teach regularly and periodically prompting a user to enter values associated with the factors, and automatically providing interactive display screens when values associated with the factors are not received subsequent to previously received values. Cosentino further teaches:
a software application that, when executed on a user computing device, causes the computing device to: provide a graphical user interface that regularly and periodically prompts a user to enter values associated with the factors, and further wherein the graphical user interface automatically provides interactive display screens when values associated with the factors are not received subsequent to previously received values (Genetic data, environmental data, transactional data, economic data, socioeconomic data, and demographic data in P0011, P0094 each serve as one value that is not included in a set of inputs. Interactive display screens are construed in [column 3, line 39 to column 4, line 5]: The monitoring device may prompt the patient to provide responses to health-related questions or requests for physiological characteristics and may upload the responses.).
Therefore, it would have been obvious to one of ordinary skill in the art of medical display mapping before the filing date of the invention to modify the method, software and system of Haber to regularly and periodically prompt a user to enter values associated with the factors, and to automatically provide interactive display screens when values associated with the factors are not received subsequent to previously received values, as taught by Cosentino, to easily view an optimized display screen with updates of the patient’s vital signs.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Haber (US 2015/0112710 A1) in view of Luo (US 2011/0021936 A1).
Claim 9:
Luo further teaches:
wherein the values are calculated in the continuous distribution as a function of parametric non-linear mapping (Taught as a nonlinear color map in P0042, P0044, shown in Fig. 5e running along with ECG data and stress data (P0035).).
Therefore, it would have been obvious to one of ordinary skill in the art of medical display mapping before the filing date of the invention to modify the method, software and system of Haber to calculate the values in the continuous distribution as a function of parametric non-linear mapping, as taught by Luo, to easily view patient vital signs that may be escalated by stress.
Response to Arguments
Applicant's arguments filed 01/28/2026 (see pgs. 10-11 of Remarks) have been fully considered but are not persuasive. The revised amendments do not overcome the maintained § 101 rejection (see the analysis above).
Applicant’s arguments have been fully considered, but are now moot in view of the new grounds of rejection. The Examiner has entered new rejections under 35 U.S.C. §§ 102 and 103 and applied new art as well as art already of record.
With respect to the 112(a) rejection, the Examiner notes that paragraphs 28 and 75 of Applicant’s specification provide support for the claimed subject matter. Upon further review and consideration, the 112(a) rejection of claims 1-7, 9-18 and 20-23 has been withdrawn.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see Haug (US 2008/0133275 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERESA S WILLIAMS whose telephone number is (571)270-5509. The examiner can normally be reached Mon-Fri, 8:30 am -6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at (571) 270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.S.W./Examiner, Art Unit 3687 02/20/2026
/ALAAELDIN M. ELSHAER/Primary Examiner, Art Unit 3687