Prosecution Insights
Last updated: April 19, 2026
Application No. 18/377,664

DATA PROCESSING SYSTEM AND METHOD FOR PREDICTING A SCORE REPRESENTATIVE OF A PROBABILITY OF A SEPSIS FOR A PATIENT

Final Rejection: §101, §102, §103, §112
Filed
Oct 06, 2023
Examiner
MONTICELLO, WILLIAM THOMAS
Art Unit
3681
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Previa Medical
OA Round
2 (Final)
Grant Probability: 53% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (72 granted / 137 resolved; +0.6% vs TC avg)
Interview Lift: +54.3% (strong), based on resolved cases with interview
Typical Timeline: 3y 7m avg prosecution; 39 applications currently pending
Career History: 176 total applications across all art units

Statute-Specific Performance

§101: 39.0% (-1.0% vs TC avg)
§103: 45.4% (+5.4% vs TC avg)
§102: 5.8% (-34.2% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 137 resolved cases.
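As a sanity check, the headline allow rate follows directly from the raw career counts above (the rounding to a whole percentage is an assumption about how the tool displays it):

```python
# Reproduce the headline allow rate from the examiner's raw career counts.
granted, resolved = 72, 137
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 52.6%, displayed rounded as 53%
```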

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Final Office Action is in response to the Amendment and Remarks filed 10/13/2025. Claim 1 is amended. Claim 4 is cancelled. Claims 1-3 and 5-11 are currently pending and considered herein.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 8, claim 8 recites converting data to an “HL7(TM) FHIR(TM) resource.” Paragraph 60 of the specification states that FHIR is a registered trademark and indicates that HL7 is also trademarked. MPEP 608.01(v) states: "The relationship between a mark or trade name and the product, service, or organization it identifies is sometimes indefinite, uncertain, and arbitrary. For example, the formula or characteristics of a product may change from time to time and yet it may continue to be sold under the same mark or trade name.
In patent specifications, the details of the product, service, or organization identified by a mark or trade name should be set forth in positive, exact, intelligible language, so that there will be no uncertainty as to what is meant. Arbitrary marks or trade names which are liable to mean different things at the pleasure of the owner do not constitute such language. Ex Parte Kattwinkle, 12 USPQ 11 (Bd. App. 1931)."

Per MPEP 2173.05(u), pertaining to trademarks recited in a claim, “if its presence in the claim causes confusion as to the scope of the claim, then the claim should be rejected under 35 USC 112(b)”. Furthermore, the acronyms should be spelled out or defined at their initial introduction in the claims. Appropriate correction is required. To properly respond, amend the claim to use the full name of each system/standard, thereby defining each acronym, and the rejection will be dropped.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
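For context on the claim 8 issue flagged under §112 above: an "HL7 FHIR resource" is one of the standardized JSON structures defined by the HL7 FHIR specification. A minimal sketch of wrapping a single biometric reading as a FHIR R4 Observation might look like the following (all field values are hypothetical; LOINC code 8867-4 denotes heart rate):

```python
import json

def to_fhir_observation(patient_id: str, loinc_code: str, display: str,
                        value: float, unit: str) -> dict:
    """Wrap one biometric reading as an HL7 FHIR R4 Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code, "display": display}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": value, "unit": unit,
                          "system": "http://unitsofmeasure.org"},
    }

# Hypothetical reading: heart rate of 92 beats per minute.
obs = to_fhir_observation("123", "8867-4", "Heart rate", 92, "/min")
print(json.dumps(obs, indent=2))
```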
Claim 1 recites, wherein the abstract elements are not emboldened:

A data processing system for predicting a score representative of a probability of a sepsis for a patient, comprising: a data interface configured to receive, from at least one database stored on at least one server, health data of at least one patient, the health data comprising regularly updated biometric monitoring data and health history data provided by at least one health history database, and a backend server comprising a trained machine learning model configured to predict and provide the score using as input the health data for each patient, and comprising a module for calculating to provide a plurality of sub-scores representative of a correlation between the health data and the predicted score, said module for calculating the sub-scores being configured to compute for each input of the machine learning model the positive or negative weight of said input on the predicted score, and to provide a list of most relevant sub-scores.

Independent claim 9 recites substantially similar limitations, including training a machine learning model. The claimed invention is directed to the abstract idea of collecting patient information including health data and monitored data, analyzing the information, and generating scores/predictions based on the analyses.
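The "module for calculating the sub-scores" recited above computes, for each model input, a signed (positive or negative) weight on the predicted score and ranks the inputs. For a linear (logistic) model this reduces to weight-times-value contributions; a minimal sketch under that assumption, with all feature names, values, and weights hypothetical:

```python
import math

# Hypothetical standardized inputs and learned weights for a logistic model.
inputs  = {"temperature": 1.2, "heart_rate": 0.8, "lactate": 2.1}
weights = {"temperature": 0.4, "heart_rate": 0.1, "lactate": 0.9}

# Signed contribution of each input to the predicted score.
contributions = {k: weights[k] * v for k, v in inputs.items()}

# Predicted score: logistic transform of the summed contributions.
score = 1 / (1 + math.exp(-sum(contributions.values())))

# "List of most relevant sub-scores": inputs ranked by absolute contribution.
ranked = sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True)
print(ranked)  # ['lactate', 'temperature', 'heart_rate']
```

Nonlinear models would need an attribution method (e.g. SHAP-style values) in place of the raw weight products, but the ranked-contribution output is the same shape.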
The limitations “to receive health data of at least one patient, the health data comprising regularly updated biometric monitoring data and health history data provided by at least one, and to predict and provide the score using as input the health data for each patient, and calculating to provide a plurality of sub-scores representative of a correlation between the health data and the predicted score, said calculating the sub-scores being configured to compute for each input of the positive or negative weight of said input on the predicted score, and to provide a list of most relevant sub-scores,” as drafted, recite a process that, under its broadest reasonable interpretation, is an abstract idea covering performance of the limitation as a method of organizing human activity. For example, but for the generic computer system (the recited training of a machine learning model, data interface, database, and servers), analyzing patient data, in the context of this claim, is an abstract idea that covers performance of the limitation as organizing human activity, including following rules or instructions.

The claim as a whole recites a method of organizing human activity because the limitations include a method that allows users to access myriad patient data, analyze the data, and determine whether certain conditions are met based on the analyses (whether sepsis onset has occurred or not). This is a method of managing interactions between people. The mere nominal recitation of a generic machine learning model training step, data interface, and database does not take the claims out of the methods-of-organizing-human-interactions grouping. The additional limitations amount to computer methods for further implementing the abstract idea of organizing human activity. Thus, the claims recite an abstract idea.

The claims also recite an abstract idea in the mental processes grouping.
But for the generic recitation of training a machine learning model, a data interface, and databases and servers, nothing in the claims is precluded from being performed in the mind. For example, a physician can collect the patient data, analyze it, and determine whether there is a risk of sepsis based on the analyses. Thus, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of the generic training of a machine learning model, a data interface, and databases and servers. The computer and/or medical devices and functions in these steps are recited at a high level of generality (i.e., as a generic processor/server/storage/display performing a generic computer function of receiving inputs, analyzing the inputs, and displaying selected information) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The limitations seem to monopolize the abstract idea of patient analysis and diagnosis and the general techniques between a clinician and her patient. Furthermore, there is no clear improvement to the underlying computer technology in the claim. The claim is thus directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the generic training of a machine learning model, a data interface, and databases and servers amount to no more than mere instructions to apply the exception using a computer component.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The additional elements, considered separately and as an ordered combination, do not add significantly more, as these limitations do nothing more than apply the exception in a generic computer environment.

The dependent claims do not remedy the deficiencies of the independent claims with respect to patent-eligible subject matter. The dependent claims further limit the abstract idea.

Claim 2 further specifies a module for receiving information and additional databases, which are recited at a high level of generality such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component and cannot provide an inventive concept. Even in combination, the module and databases do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.

Claim 3 defines the sub-score and further limits the abstract idea.

Claims 5 and 6 include a display device, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception using a generic computer component and cannot provide an inventive concept. Even in combination, the display device does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea itself.

Claim 7 describes an alert module, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception using a generic computer component and cannot provide an inventive concept. Even in combination, the alert module does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea itself.
Claim 8 details a data interface to transform health data into a standard format, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception using a generic computer component and cannot provide an inventive concept. Even in combination, the conversion of data to a standard format does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea itself.

Claim 10 describes input training data and further limits the abstract idea.

Claim 11 details updating the machine learning model by using updated training data, which is recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception using a generic computer component and cannot provide an inventive concept. Even in combination, the updating of the machine learning model does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea itself.

Therefore, the claims are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7 and 9-11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. 2024/0321447 A1 to Selvaraj et al., hereinafter “Selvaraj.”

Regarding claim 1, Selvaraj discloses A data processing system for predicting a score representative of a probability of a sepsis for a patient, comprising: a data interface configured to receive, from at least one database stored on at least one server, health data of at least one patient (See Selvaraj at least at Abstract; Paras. [0071]-[0072] (“Embodiments of the system are configured to store patient health data 110 for a plurality of patients. The patient health data may be stored in a data store 10, which may be a database or a multiple connected databases stored on local, networked, or cloud based storage devices […] The system and method are configured to generate a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data 120. These models may be machine learning or AI-based models, such as classifier models.”); Figs. 1, 2), the health data comprising regularly updated biometric monitoring data and health history data provided by at least one health history database, (See id. at least at Paras. [0077]-[0080] (“The clinician interface 40 may also be configured to access electronic medical records stored by hospitals, clinics, or other health providers which contain the patient's demographic characteristic profile (i.e. patient metadata) and clinical history, including outcomes of infections, treatments and hospitalizations.
Such outcomes in relation to infection and sepsis events can be used to train the sepsis prediction models. Patient metadata such as demographic information, general health characteristics and pre-existing conditions may also be entered via the patient user interface or clinician user interface […] The data store 10 used to store patient health data also provides the data to the infection and sepsis forecaster 50 which is used to monitor the patient using a personalized sepsis prediction model and detect infection and sepsis events. The data transfer between the data storage 10 and the clinician interface 40 and patient interface 30 are bidirectional, in which the data can be retrieved and also pushed with new data or update the existing data content.”); Claims 11-12), a backend server (See id. at least at Paras. [0075]-[0077]) comprising a trained machine learning model (See id. at least at Paras. [0079]-[0090]) configured to predict and provide the score using as input the health data for each patient (See id. at least at Abstract; Paras. [0072] (“The system and method are configured to generate a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data 120. These models may be machine learning or AI-based models, such as classifier models.”), [0077]-[0080] (“As outlined above, the system determines patient similarity measures, such as similarity scores 22 based on the monitored patient's health data such as the patient's history, symptoms and continuously updating vital sign and measurement records to each of the training cohorts, and then uses these scores to select a personalized sepsis prediction model 24 from the set of pre-trained sepsis predictive models stored in the model store 20. 
As noted above each of these models are pretrained for a set of similar patients (“like patients”), referred to as the training cohort for the respective model, with similarity measures calculated from their health characteristics including history, comorbid conditions, symptoms, physiological measurements and laboratory values. As discussed above, the process to compute the similarity measure could use any or combinations of data items (or encodings) of the patient health data, similarity functions/metrics (which may generate similarity scores), and/or similarity criterion/criteria. There is also no restriction in the methods by which the similarity of various parameters is assessed or how these similarity functions/scores/criterion are combined and or applied to obtain a patient similarity measure.”)), and comprising a module for calculating a plurality of sub-scores representative of a correlation between the health data and the predicted score (See id. at least at Abstract; Paras. [0004]-[0008] (sub-scores including SOFA, Glasgow Coma Scale, qSOFA, SIRS), [0035] (“selecting a model from the set of similar models may be performed by calculating, for each model in the set of similar scores, a total similarity score by, for one or more medical conditions, determining a similarity score between the medical condition of the monitored patient and corresponding medical condition for each patient in the training cohort of the respective model and multiplying the similarity score by a weight for the medical condition, and then summing each of the weighted similarity scores to obtain the total similarity score; and selecting the model with the highest total similarity score.”), [0076]-[0080] (“The patient is intermittently or continuously monitored by one or more home and community based biomedical sensors. 
This may include one or more wearable devices that measure the patient's physical activity, the patient's physiological vital signs, along with other parameters related to patient health […] The infection and sepsis forecaster 50 uses the selected sepsis prediction model to monitor the incoming patient data. The sepsis prediction model may be any or a combination of a rule-based infection and or sepsis event detector, binary or multi-class classifier or a multivariate regressor assessing the risk for infection and sepsis events based on the monitoring data. The sepsis prediction model is configured to analyze incoming patient health data and generate an output indicating the risk of one or more infection and sepsis events. This may be a binary outcome, likelihood score or a probability measure. Determination of a positive event or a class or a risk associated with infection and sepsis leads to the generation of alerts and notifications 58.”); Figs. 1-3); said module for calculating the sub-scores being configured to compute for each input of the machine learning model the positive or negative weight of said input on the predicted score, and to provide a list of most relevant sub-scores (See id. at least at Paras. [0072] (“The system and method are configured to generate a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data 120. These models may be machine learning or AI-based models, such as classifier models, and may be stored in a model store 20, such as a database or file store that electronically stores the relevant model parameters and configuration (for example by exporting a trained model) to allow later use of the stored model. 
Generating each of the plurality of sepsis prediction models may comprise identifying a training cohort of similar patients according to a patient similarity measure 122 and then training a sepsis prediction model using the training cohort of similar patients 124. Each time this is performed a different similarity measure is used based on a different combination of data items in the patient health data, one or more similarity functions and/or one or more similarity criterion to generate a different training cohort, and thus a different model. The data items could be different symptoms, measured vital signs and disease conditions. Each similarity measure is thus a distinct measure to enable generation of a distinct model, and through repeating this process we can generate a plurality of distinct (or unique) models. The patient similarity measure could be determined using one or more similarity scores, similarity metrics, similarity functions, or similarity criterion (or criteria), including various combinations of these, applied to various combinations of data items such as symptoms, measured vital signs and disease conditions […] In some embodiments, a similarity function may be used to generate a similarity score, and the score may be used directly as the similarity measure with similar patients selected based on having a score exceeding a threshold. In this embodiment the threshold is a similarity criterion, and thus different groups of similar patients could be identified using the same scoring function but using different thresholds (i.e. different similarity criterion). In another embodiment, similarity measures may be calculated for all patients, and the N (e.g. 500, 1000, 5000, 10000) patients with the highest similarity scores selected. In some embodiments a similarity score may be transformed or combined to obtain a numeric similarity measure. e.g. 
to convert the score to a similarity probability or to normalize the score to a predefined range such as [0,1]. In some embodiment a single similarity score may be calculated using a specific similarity function, whilst in other embodiment several similarity scores could be calculated each using different similarity function with the similarity scores added. The different scores could be combined using simple addition or some weighted combination (including linear and non-linear combinations) […] When generating a specific combination of data items, similarity functions, and/or similarity criterion/criteria used to generate a similarity measure/similar group of patients, a check could be performed to ensure the current combination is sufficiently different from another set (for example at least 3 different data items selected). Similarly after multiple similar patient groups have been identified using different similarity measures, this set could be filtered to exclude a patient group too similar to another patient group to ensure a diversity of similar patient groups, and thus a diversity of sepsis models. The models may be trained using all available data for patients (for example using deep learning training methods), or using specific data items, which may be determined based on how the similar patients were identified, for example the same set of data items used to calculate similarity. A general population sepsis prediction model is also generated by training a sepsis prediction model on a general population of patients drawn from the plurality of patients.”), [0074] (“The selected sepsis prediction model is then used to monitor the monitored patient to detect infection and sepsis events 150, for example by processing new/updates to patient health data. This may be used to generate electronic alerts if an infection and sepsis event is detected 152. 
The system may also repeat the step of selecting the most similar sepsis prediction model 140 in response to a change in the patient health data of the monitored patient over time 154. This allows the system to keep using the most similar (and arguably relevant) patient cohort as the patient's measurements and symptoms change, for example as the monitored patient begins to show signs of an infection or sepsis.”); Figs. 1, 4, 5).

Regarding claim 2, Selvaraj discloses all the limitations of claim 1 and further discloses a module receiving health history data from at least one health history database, each history database being an internal health history database providing health history data from a hospital or an external health history database providing health history data centralized from multiple sources of health history data (See id. at least at Paras. [0071]-[0072], [0077]-[0080] (“The clinician interface 40 may also be configured to access electronic medical records stored by hospitals, clinics, or other health providers which contain the patient's demographic characteristic profile (i.e. patient metadata) and clinical history, including outcomes of infections, treatments and hospitalizations. Such outcomes in relation to infection and sepsis events can be used to train the sepsis prediction models. Patient metadata such as demographic information, general health characteristics and pre-existing conditions may also be entered via the patient user interface or clinician user interface […] The data store 10 used to store patient health data also provides the data to the infection and sepsis forecaster 50 which is used to monitor the patient using a personalized sepsis prediction model and detect infection and sepsis events.
The data transfer between the data storage 10 and the clinician interface 40 and patient interface 30 are bidirectional, in which the data can be retrieved and also pushed with new data or update the existing data content.”); Claim 11 (“a data store configured to store patient health data for a plurality of patients, the patient health data for a patient comprising a plurality of data items comprising a plurality of clinical data items obtained from one or more clinical data sources.”), Claim 12).

Regarding claim 3, Selvaraj discloses all the limitations of claim 1 and further discloses wherein the sub-scores comprise any one or more of the following: - temperature, - heart rate, - oxygen saturation, - diastolic pressure, - systolic pressure, - respiratory rate, - health history, - age of the patient, - lactate level, - leukocyte level, - platelet level, - bilirubin level, - urine output during the last 24h, - creatinine level, - partial pressure of oxygen in arterial blood (PaO2), - fraction of inspired oxygen (FiO2), - Glasgow Coma Score, - perioperative complications, - surgery procedure, - effective operation duration, - planned operation duration, - type of surgery (See id. at least at Paras. [0004]-[0008] (sub-scores including SOFA, Glasgow Coma Scale, qSOFA, SIRS, underwent a surgical procedure), [0076] (heart rate, temperature); Figs. 1, 4, 5).

Regarding claim 5, Selvaraj discloses all the limitations of claim 1 and further discloses a display device configured to display at least the predicted score and at least one sub-score (See id. at least at Paras. [0080]-[0081], [0104]-[0105], [0126]-[0127] (“processing and automated decision making for a real-time prospective forecasting of infection and sepsis that is applicable for any patient monitoring settings such as critical care, general hospital ward, out-of-hospital or home settings.
Embodiments further describe how to effectively combine the patient interface and clinician interface, derived inputs and patient measurements, determine the current patient's similarity measure (or score), select a personalized or population based pretrained sepsis prediction model based on the patient similarity measure, forecast the infection or sepsis condition in advance, generate notifications to be displayed in clinician interface tools.”); Claim 5; Figs. 1-3).

Regarding claim 6, Selvaraj discloses all the limitations of claim 1 and further discloses a display device configured to display at least the predicted score and at least one sub-score, wherein the display device is configured to display at least the predicted score and a predetermined number of first sub-scores on the list of most relevant sub-scores (See id.).

Regarding claim 7, Selvaraj discloses all the limitations of claim 1 and further discloses an alert module configured to send an alert if the score of a patient is over a predetermined threshold (See id. at least at Paras. [0029], [0074] (“The selected sepsis prediction model is then used to monitor the monitored patient to detect infection and sepsis events 150, for example by processing new/updates to patient health data. This may be used to generate electronic alerts if an infection and sepsis event is detected 152 […] This may be in response to one or more confirmations of detected infection and sepsis events, once a threshold time has passed.”), [0125]-[0127], Claim 6; Fig. 1).

Regarding claim 9, Selvaraj discloses A method of training a machine learning model, comprising: receiving in a data interface from at least one database health data of at least one patient, and training a machine learning model to predict and provide a score using as input the health data for each patient (See Selvaraj at least at Abstract; Paras.
[0071]-[0072] (“Embodiments of the system are configured to store patient health data 110 for a plurality of patients. The patient health data may be stored in a data store 10, which may be a database or a multiple connected databases stored on local, networked, or cloud based storage devices […] The system and method are configured to generate a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data 120. These models may be machine learning or AI-based models, such as classifier models […] The system and method are configured to generate a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data 120. These models may be machine learning or AI-based models, such as classifier models.”), [0077]-[0080] (“As outlined above, the system determines patient similarity measures, such as similarity scores 22 based on the monitored patient's health data such as the patient's history, symptoms and continuously updating vital sign and measurement records to each of the training cohorts, and then uses these scores to select a personalized sepsis prediction model 24 from the set of pre-trained sepsis predictive models stored in the model store 20. As noted above each of these models are pretrained for a set of similar patients (“like patients”), referred to as the training cohort for the respective model, with similarity measures calculated from their health characteristics including history, comorbid conditions, symptoms, physiological measurements and laboratory values. As discussed above, the process to compute the similarity measure could use any or combinations of data items (or encodings) of the patient health data, similarity functions/metrics (which may generate similarity scores), and/or similarity criterion/criteria. 
There is also no restriction in the methods by which the similarity of various parameters is assessed or how these similarity functions/scores/criterion are combined and or applied to obtain a patient similarity measure.”)), and to provide a plurality of sub-scores representative of a correlation between the health data and the predicted score, the health data comprising regularly updated biometric monitoring data and health history data provided by at least one health history database (See id. at least at Abstract; Paras. [0004]-[0008] (sub-scores including SOFA, Glasgow Coma Scale, qSOFA, SIRS), [0035] (“selecting a model from the set of similar models may be performed by calculating, for each model in the set of similar scores, a total similarity score by, for one or more medical conditions, determining a similarity score between the medical condition of the monitored patient and corresponding medical condition for each patient in the training cohort of the respective model and multiplying the similarity score by a weight for the medical condition, and then summing each of the weighted similarity scores to obtain the total similarity score; and selecting the model with the highest total similarity score.”), [0077]-[0080] (“The patient is intermittently or continuously monitored by one or more home and community based biomedical sensors. This may include one or more wearable devices that measure the patient's physical activity, the patient's physiological vital signs, along with other parameters related to patient health […] The infection and sepsis forecaster 50 uses the selected sepsis prediction model to monitor the incoming patient data. The sepsis prediction model may be any or a combination of a rule-based infection and or sepsis event detector, binary or multi-class classifier or a multivariate regressor assessing the risk for infection and sepsis events based on the monitoring data. 
The sepsis prediction model is configured to analyze incoming patient health data and generate an output indicating the risk of one or more infection and sepsis events. This may be a binary outcome, likelihood score or a probability measure. Determination of a positive event or a class or a risk associated with infection and sepsis leads to the generation of alerts and notifications 58.”);

wherein the input training data comprises health data representative from at least one previous hospitalization history from a plurality of patients (See id. at least at Paras. [0071]-[0072], [0077]-[0080] (“The clinician interface 40 may also be configured to access electronic medical records stored by hospitals, clinics, or other health providers which contain the patient's demographic characteristic profile (i.e. patient metadata) and clinical history, including outcomes of infections, treatments and hospitalizations. Such outcomes in relation to infection and sepsis events can be used to train the sepsis prediction models. Patient metadata such as demographic information, general health characteristics and pre-existing conditions may also be entered via the patient user interface or clinician user interface.”),

and for each patient:

- at least one history of biometric monitoring data over each period of hospitalization (See id. at least at Paras. [0071]-[0072], [0077]-[0080] (“The patient is intermittently or continuously monitored by one or more home and community based biomedical sensors. This may include one or more wearable devices that measure the patient's physical activity, the patient's physiological vital signs, along with other parameters related to patient health […] The infection and sepsis forecaster 50 uses the selected sepsis prediction model to monitor the incoming patient data.
The sepsis prediction model may be any or a combination of a rule-based infection and or sepsis event detector, binary or multi-class classifier or a multivariate regressor assessing the risk for infection and sepsis events based on the monitoring data. The sepsis prediction model is configured to analyze incoming patient health data and generate an output indicating the risk of one or more infection and sepsis events. This may be a binary outcome, likelihood score or a probability measure. Determination of a positive event or a class or a risk associated with infection and sepsis leads to the generation of alerts and notifications 58.”), [0090] (“When the patient's symptoms are updated by the self-report application 34, or when the patient's electronic medical history is updated, the model/classifier that is used is updated or re-selected.”),

- data representative of occurrence or absence of a sepsis and the severity of any occurrence of a sepsis by the patient during the period of hospitalization (See id. at least at Paras. [0009], [0071]-[0072] (“In some embodiment a single similarity score may be calculated using a specific similarity function, whilst in other embodiment several similarity scores could be calculated each using different similarity function with the similarity scores added. The different scores could be combined using simple addition or some weighted combination (including linear and non-linear combinations).”), [0081] (“The clinician can review the generated positive alerts 59, the corresponding health trend data, and can verify the validity of the generated alerts and provide a feedback in annotating the infection and sepsis events to be true positives or false positives 48. In case of a new clinical event, the clinician interface allows the clinician to make entries of clinical events including severe adverse events and changes in medications.”), [0088]-[0089], [0095]-[0098]).
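The model-selection procedure Selvaraj's paragraph [0035] recites (a per-condition similarity score, multiplied by a per-condition weight, summed into a total, with the highest-scoring model selected) can be sketched in Python. This is an illustrative reconstruction only; the function names, data shapes, and the toy equality-based similarity function are assumptions, not taken from the reference:

```python
def condition_similarity(a, b):
    """Toy per-condition similarity: 1.0 if the values match, else 0.0.
    (Selvaraj places no restriction on how similarity is assessed.)"""
    return 1.0 if a == b else 0.0

def total_similarity(patient, model, weights):
    """Weighted sum over medical conditions of the similarity between the
    monitored patient and the model's training cohort (para. [0035])."""
    total = 0.0
    for condition, weight in weights.items():
        scores = [condition_similarity(patient.get(condition), p.get(condition))
                  for p in model["cohort"]]
        total += weight * (sum(scores) / len(scores))
    return total

def select_model(patient, models, weights):
    """Select the pretrained model with the highest total similarity score."""
    return max(models, key=lambda m: total_similarity(patient, m, weights))
```

The per-condition scores here are averaged across the cohort before weighting; the quoted passage leaves that aggregation step open, so other combinations (including non-linear ones, per paragraphs [0071]-[0072]) would fit the disclosure equally well.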
Regarding claim 10, Selvaraj discloses all the limitations of claim 9 and further discloses wherein the input training data comprises, for each patient, history data from the patient (See id. at least at Abstract; Paras. [0017] (“generating and storing a plurality of sepsis prediction models and a general population sepsis prediction model each trained using the stored patient health data.”), [0030]-[0035], [0072]-[0075]).

Regarding claim 11, Selvaraj discloses all the limitations of claim 9 and further discloses wherein the input training data is regularly updated, the method further comprising updating the machine learning model by training the machine learning model with the updated training data (See id. at least at Para. [0082] (“if the feedback entries for generated positive infection and sepsis events and or the new entries of qualifying clinical events exceeds a preset threshold, then decision logic is enabled for retraining 58 that results in adaptation or regeneration of the infection models sepsis prediction model for the given data repository containing patent information, continuous and or discrete patient measurements, and episodic symptoms and reference events 60. After retraining the models 156, the updated models 62 are stored in the model store 20, for example replacing the currently stored models.”), [0090] (“When the patient's symptoms are updated by the self-report application 34, or when the patient's electronic medical history is updated, the model/classifier that is used is updated or re-selected. This is to ensure the patient is being compared to the current most “like patients”.”), [0104] (“If it does not believe that a deterioration due to infection is likely 57, then it can simply continue monitoring. The model updated/reselection component described above will change the model (i.e. select a new model from model store 20) as appropriate if the patient's symptoms are to change.”); Figs. 1-3, 5).
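The retraining and re-selection behavior quoted against claim 11 (threshold-gated retraining in paragraph [0082], event-driven model re-selection in paragraph [0090]) amounts to simple decision logic. A minimal sketch, with the caveat that the threshold value and all identifiers are hypothetical placeholders rather than anything disclosed in Selvaraj:

```python
def should_retrain(feedback_entries, new_clinical_events, threshold=50):
    """Enable retraining once clinician feedback on generated alerts plus
    new qualifying clinical events exceed a preset threshold (para. [0082]).
    The default threshold of 50 is an illustrative placeholder."""
    return len(feedback_entries) + len(new_clinical_events) > threshold

def on_record_update(patient, models, select_model):
    """When the patient's symptoms or electronic medical history change,
    re-select the model so the patient is compared against the current
    most 'like patients' (para. [0090])."""
    return select_model(patient, models)
```

`select_model` is passed in as a callable here because the disclosure treats model selection as a separate, pluggable similarity-based step.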
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Selvaraj, in view of U.S. 2024/0079145 A1 to Conward et al., hereinafter “Conward.”

Regarding claim 8, Selvaraj discloses all the limitations of claim 1. Selvaraj may not specifically describe but Conward teaches wherein the data interface is configured to transform the health data to a HL7(TM) FHIR(TM) resource (See Conward at least at Paras.
[0022] (“[P]erforming data transformations and mappings that are compliant with the HL7® FHIR® standard.”), [0047], [0022]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Selvaraj to incorporate the teachings of Conward and provide transformation of data to standards. Conward is directed to systems and methods for deriving health indicators from content. (See Conward at Abstract). Incorporating the health indicators and data transformations as in Conward with the method and system for personalized prediction of infection and sepsis as in Selvaraj would thereby increase the applicability, utility, and efficacy of the claimed system and method for predicting a score representative of a probability of sepsis for a patient.

Response to Arguments

Applicant’s amendments and remarks filed October 13, 2025 have been fully considered, but they are not entirely persuasive. The following explains why:

Applicant’s arguments pertaining to prior art rejections are not persuasive. The rejection under 35 U.S.C. §103 is reasserted above. Selvaraj discloses health history as one of myriad inputs to a training model. The Examiner disagrees with the arguments pertaining to prior art references at pages 12-15 of the Applicant’s Remarks. The claims stand rejected.

Applicant’s arguments pertaining to subject matter eligibility are not persuasive. The basis for the previous rejection under 35 U.S.C. §101 is still operative and the claims have been addressed with regard to the updated 35 U.S.C. §101 rejection discussed above, and considered under the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) and Updated PEG. The arguments at pages 10-12 of Applicant’s Response are not persuasive. The Examiner disagrees there is not an abstract idea. The Examiner disagrees that there is a technological improvement presented in the claims.
The examiner disagrees there is a practical application that is integrated in the claims. It appears that computer technology is leveraged as mere instructions to apply the judicial exception abstract idea. The examiner disagrees that “AI” is being used as anything other than a tool to leverage the abstract idea. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. For at least these reasons and those stated above, the claims are not patent eligible.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM T. MONTICELLO whose telephone number is (313)446-4871. The examiner can normally be reached M-Th; 08:30-18:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MARC Q. JIMENEZ can be reached at (571) 272-4530.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM T. MONTICELLO/
Examiner, Art Unit 3681

/MARC Q JIMENEZ/
Supervisory Patent Examiner, Art Unit 3681
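Claim 8's transformation of health data into an HL7 FHIR resource, the limitation the Examiner maps to Conward above, would in practice mean emitting JSON that conforms to the FHIR standard. The sketch below maps a single vital-sign reading onto a minimal FHIR R4 Observation resource; it illustrates the public standard generally, not Conward's actual implementation, and the helper name and example values are assumptions:

```python
def to_fhir_observation(patient_id, loinc_code, display, value, unit, ucum, when):
    """Build a minimal FHIR R4 Observation for one vital-sign reading.
    LOINC identifies the measurement; UCUM identifies the unit."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code, "display": display}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": when,
        "valueQuantity": {"value": value, "unit": unit,
                          "system": "http://unitsofmeasure.org", "code": ucum},
    }

# Example: a heart-rate reading (LOINC 8867-4, UCUM "/min")
obs = to_fhir_observation("123", "8867-4", "Heart rate", 72,
                          "beats/minute", "/min", "2026-01-15T10:00:00Z")
```

A production mapping would also validate against the FHIR vital-signs profile and carry device and provenance references, which this sketch omits.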

Prosecution Timeline

Oct 06, 2023 — Application Filed
Mar 18, 2025 — Non-Final Rejection (§101, §102, §103)
Oct 13, 2025 — Response Filed
Jan 15, 2026 — Final Rejection (§101, §102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542202 — BLOCKCHAIN PRESCRIPTION MANAGEMENT SYSTEM — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12539426 — CONTROL OF A MEDICAL DEVICE — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12293839 — Candidate Screening for a Target Therapy — Granted May 06, 2025 (2y 5m to grant)
Patent 12186051 — SYSTEMS AND METHODS FOR DETERMINING AND COMMUNICATING LEVELS OF BILIRUBIN AND OTHER SUBCUTANEOUS SUBSTANCES — Granted Jan 07, 2025 (2y 5m to grant)
Patent 12100504 — MACHINE LEARNING SYSTEM FOR ASYMMETRICAL MATCHING OF CARE GIVERS AND CARE RECEIVERS — Granted Sep 24, 2024 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
Grant Probability With Interview: 99% (+54.3%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate

Based on 137 resolved cases by this examiner. Grant probability derived from career allow rate.
