Prosecution Insights
Last updated: April 19, 2026
Application No. 17/791,879

A TIME-SENSITIVE TRIGGER FOR A STREAMING DATA ENVIRONMENT

Non-Final OA (§101, §103)
Filed: Jul 08, 2022
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Prenosis Inc.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% (resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 49
Total Applications: 221 (across all art units)

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 172 resolved cases.
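The headline numbers above are simple ratios over the examiner's career data. As a sanity check, here is a short sketch; the formulas are our assumption about how the dashboard derives them, not documented tool behavior:

```python
# Sanity-check of the dashboard arithmetic above. Assumptions (ours, not
# documented tool behavior): allow rate = granted / resolved, and the
# "-16.0% vs TC avg" delta is the examiner's rate minus the TC average.
granted, resolved = 62, 172
career_allow_rate = granted / resolved        # "36% Career Allow Rate"
implied_tc_avg = career_allow_rate + 0.160    # back out the TC average

print(f"Career allow rate: {career_allow_rate:.1%}")   # 36.0%
print(f"Implied TC average: {implied_tc_avg:.1%}")     # 52.0%
```

Note that the with-interview figure (83%) is reported separately; the +46.7% lift is presumably measured against the without-interview subset rather than the blended career rate.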

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/18/2025 has been entered.

Status of the Claims

The status of the claims as of the response filed 9/18/2025 is as follows: Claims 1-7, 10-17, and 20-21 are currently amended. Claims 8-9 and 18-19 are original. Claims 1-21 are currently pending in the application and have been considered below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/9/2025 is in accordance with the provisions of 37 CFR 1.97 and has been considered by the Examiner.

Response to Amendment

Rejection Under 35 USC 101

The claims have been amended, but the 35 USC 101 rejections for claims 1-21 are upheld.

Rejection Under 35 USC 103

The amendments made to the claims introduce limitations that are not fully addressed in the previous Office action, and thus the corresponding 35 USC 103 rejections are withdrawn. However, the Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.
Response to Arguments

Rejection Under 35 USC 101

On pages 10-11 of the response filed 9/18/2025, Applicant argues that the amendments made to claim 1 directed to communication between and performance of various steps by an application installed on a client device and a server show that the claimed steps “are not abstract, but rather physical tangible actions that are occurring between the application installed on the client device and the server that is communicatively coupled to the client device” such that they “cannot be performed in the human mind.” Applicant further asserts that the amended claims provide “specific improvements in technology by including additional elements that integrate the abstract idea into a practical application,” and makes similar arguments for independent claims 11 and 16 on pages 11-13.

Applicant’s arguments are fully considered, but are not persuasive. Examiner maintains that the underlying functions of receiving a dataset, entering a measured value into a first field of the dataset, receiving various calculated/predicted values/metrics based on the dataset, and determining whether a statistically derived metric exceeds a threshold recite an abstract idea that could otherwise be achieved by human actors, for example a certain method of organizing human activity including managing personal behavior, relationships, or interactions between people. For example, a clinician could receive a dataset associated with a patient (e.g. by looking at their chart, communicating with a nurse or other colleague taking care of the patient, etc.) and enter a blood test measurement into a field of the dataset. The clinician could then communicate with a more experienced colleague (e.g. a supervisor, a specialist, a more senior doctor, etc.) to send them the dataset and receive back various predicted values, risk scores, associated metrics, and statistically derived metrics that the colleague used their medical expertise to derive.
Finally, the clinician could determine whether the statistically derived metric exceeds a predetermined threshold in order to determine how significant or important obtaining a certain type of clinical data would be for the diagnostic process. Accordingly, these underlying functions still describe an abstract idea, and the performance of such steps via the computing elements of an application installed on a client device and a server exchanging data are evaluated as additional elements under Step 2A – Prong 2 and Step 2B.

Under these considerations, the application running on a client device and server (including the trigger logic engine comprising one or more machine learning algorithms trained for sepsis diagnosis and treatment) amount to instructions to “apply” the exception because they are recited at a high level of generality and serve as tools with which to digitize/automate the otherwise-abstract roles and functions of the clinician and more experienced colleague exchanging and analyzing clinical information. For the reasons outlined above, the 35 USC 101 rejections are upheld for claims 1-21.

Rejection Under 35 USC 103

On pages 13-16 of the response, Applicant alleges various deficiencies of the Morris and Jackson references with respect to the newly-introduced claim limitations, particularly the subject matter directed to receiving data in a client-server arrangement. Applicant’s arguments are fully considered, but are not persuasive. Paras. [0037]-[0039] of Morris show that the system may be implemented with a client/server architecture, indicating that data (e.g. imputed values, risk scores and associated metrics, statistically derived metrics, etc.) may be exchanged among an application installed on a client device and a remote server. Accordingly, Examiner submits that the combination of Morris and Jackson does sufficiently teach the amended claims, as explained in more detail in the updated 35 USC 103 rejections below.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 1-10 and 21 are directed to a method (i.e. a process), claims 11-15 are directed to a system (i.e. a machine), and claims 16-20 are directed to a non-transitory, computer readable medium (i.e. a manufacture). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claims 1, 11, and 16 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity such as managing personal behavior, relationships, or interactions between people.
Specifically, claim 1 recites:

A method for making dynamic risk predictions, comprising:
measuring a value of plasma proteins or nucleic acids from a patient suspected of sepsis;
receiving, via an application installed on a client device, a dataset associated with the patient, the dataset comprising a first data field and a second data field;
entering, via the application installed on the client device, the measured value in the first data field;
accessing, via the application installed on the client device, a trigger logic engine installed on a server communicatively coupled to the client device, wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment;
receiving, from the server via the application installed on the client device, a first predicted value associated with the second data field;
receiving, from the server via the application installed on the client device, a first risk score and a first set of associated metrics based on the measured value and the first predicted value;
receiving, from the server via the application installed on the client device, a second predicted value associated with the second data field;
receiving, from the server via the application installed on the client device, a second risk score and a second set of associated metrics based on the measured value and the second predicted value;
receiving, from the server via the application installed on the client device, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics; and
determining whether the statistically derived metric exceeds a predetermined threshold.
Similarly, claim 11 recites:

A system, comprising:
a client device comprising a memory configured to store instructions; and
one or more processors communicatively coupled to the memory and configured to execute instructions and cause the system to:
receive, via an application installed on the client device, a dataset comprising a first data field and a second data field, wherein the first data field is populated with a measured value;
access, via the application installed on the client device, a trigger logic engine installed on a server communicatively coupled to the client device, wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment, wherein the one or more machine learning algorithms are trained using a training dataset comprising a plurality of patients and a plurality of clinical data values of the plurality of patients;
receive, from the server via the application installed on the client device, a first predicted value associated with the second data field;
receive, from the server via the application installed on the client device, a first risk score and a first set of associated metrics based on the measured value and the first predicted value;
receive, from the server via the application installed on the client device, a second predicted value associated with the second data field;
receive, from the server via the application installed on the client device, a second risk score and a second set of associated metrics based on the measured value and the second predicted value;
receive, from the server via the application installed on the client device, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics; and
determine whether the statistically derived metric exceeds a predetermined threshold, wherein a predetermined action is recommended if the statistically derived metric exceeds the predetermined threshold.

Similarly, claim 16 recites:

a non-transitory, computer readable medium storing instructions and an application which, when executed by a computer, cause the computer to perform a method, the method comprising:
receiving, via the application installed on the computer, a dataset comprising a first data field and a second data field, wherein the first data field is populated with a measured value;
accessing, via the application installed on the computer, a trigger logic engine installed on a server communicatively coupled to the computer, wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment, wherein the one or more machine learning algorithms are trained using a training dataset comprising a plurality of patients and a plurality of clinical data values of the plurality of patients;
receiving, from the server via the application installed on the computer, a first predicted value associated with the second data field;
receiving, from the server via the application installed on the computer, a first risk score and a first set of associated metrics based on the measured value and the first predicted value;
receiving, from the server via the application installed on the computer, a second predicted value to the second data field;
receiving, from the server via the application installed on the computer, a second risk score and a second set of associated metrics based on the measured value and the second predicted value;
receiving, from the server via the application installed on the computer, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics; and
determining whether the statistically derived metric exceeds a predetermined threshold, wherein a predetermined action is recommended if the statistically derived metric exceeds the predetermined threshold.
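For orientation only, the client-server exchange recited in these claims can be read as a simple request/response flow. The sketch below is a toy model of that reading: every name, the stand-in engine, and the use of a standard deviation as the "statistically derived metric" are illustrative assumptions, not Applicant's implementation or a trained machine-learning system.

```python
# Toy model of the data flow recited in claim 1, for reading the claim
# language only. All names are hypothetical; the "server" is an in-process
# stand-in object rather than a trained trigger logic engine.
from statistics import stdev

class TriggerLogicEngine:
    """Stand-in for the server-side engine; returns canned predictions."""
    def predict_missing(self, dataset, field):
        # Two candidate imputations for the missing second data field.
        return [0.8, 1.4]

    def risk_score(self, measured, predicted):
        # Toy risk model: weighted sum of the known and imputed values.
        score = 0.6 * measured + 0.4 * predicted
        metrics = {"inputs": (measured, predicted)}  # "associated metrics"
        return score, metrics

def run_claim_1_flow(measured_value, threshold):
    dataset = {"first_field": None, "second_field": None}  # receive dataset
    dataset["first_field"] = measured_value                # enter measured value
    engine = TriggerLogicEngine()                          # access trigger logic engine

    first_pred, second_pred = engine.predict_missing(dataset, "second_field")
    first_score, first_metrics = engine.risk_score(measured_value, first_pred)
    second_score, second_metrics = engine.risk_score(measured_value, second_pred)

    # "Statistically derived metric" based on both scores; here, their spread.
    derived = stdev([first_score, second_score])
    return derived > threshold                             # threshold check

print(run_claim_1_flow(measured_value=2.0, threshold=0.1))
```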
Each of the steps highlighted in the independent claims may be achieved as a certain method of organizing human activity, e.g. by a clinician managing their personal behavior and/or interactions with colleagues. For example, a clinician could receive a dataset associated with a patient (e.g. by looking at their chart, communicating with a nurse or other colleague taking care of the patient, etc.) and enter a blood test measurement into a field of the dataset. The clinician could then communicate with a more experienced colleague (e.g. a supervisor, a specialist, a more senior doctor, etc. who has been trained via experience with a plurality of past patients and their respective clinical values) to send them the dataset and receive back various predicted values, risk scores, associated metrics, and statistically derived metrics that the colleague used their medical expertise to derive. Finally, the clinician could determine whether the statistically derived metric exceeds a predetermined threshold in order to determine a next recommended action, e.g. collecting a most significant or impactful type of clinical data about the patient for diagnostic purposes. Thus, each independent claim recites an abstract idea in the form of a certain method of organizing human activity.

Dependent claims 2-10, 12-15, and 17-21 inherit the limitations that recite an abstract idea from their dependence on claims 1, 11, or 16, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. In addition, claims 2-10, 12-15, and 17-21 recite further limitations that merely further describe the abstract idea identified in the independent claims. Specifically, claims 2-10, 12-15, and 17-21 recite further steps for how the received calculations, determinations, comparisons, etc. are made, each of which is accomplished in a manner that an experienced clinician would be capable of achieving by managing their personal behavior and communicating the results back to a requesting clinician.

However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea.

Step 2A – Prong 2

The judicial exception is not integrated into a practical application. In particular, independent claims 1, 11, and 16 do not include additional elements that integrate the abstract idea into a practical application. Claims 1, 11, and 16 each include the additional elements of an application installed on a client device or computer, and a trigger logic engine installed on a server communicatively coupled to the client device or computer, wherein the trigger logic engine comprises one or more trained machine learning algorithms. Claim 1 also includes the additional element of measuring a value of plasma proteins or nucleic acids from a patient suspected of sepsis. Claim 11 further recites the additional elements of the client device comprising a memory configured to store instructions as well as one or more processors communicatively coupled to the memory and configured to execute instructions and cause the system to perform the method steps, while claim 16 similarly recites the additional elements of a non-transitory, computer readable medium storing instructions and an application which, when executed by a computer, cause the computer to perform the method steps. These additional elements, when considered in the context of each claim as a whole, do not provide integration into a practical application.
The use of computer hardware such as a client device or computer with a processor and memory storing an application and communicating with a server amounts to instructions to implement the abstract idea on a computer, because these elements merely utilize high-level computing components to digitize and/or automate functions that may otherwise be achieved by and among human actors, as described above. Similarly, specifying that the algorithm is a machine learning algorithm merely recites the high-level concept of “machine learning” as a means to digitize and/or automate the otherwise-abstract roles or functions of the more experienced clinician, and thus also amounts to instructions to apply the exception (see MPEP 2106.05(f)). The step of measuring a value of plasma proteins or nucleic acids from a patient as in claim 1 amounts to a means of necessary data gathering because it merely provides data required for the main analysis and calculation steps, such that it is considered insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, each independent claim as a whole is directed to an abstract idea without integration into a practical application.

The judicial exception recited in dependent claims 2-10, 12-15, and 17-21 is not integrated into a practical application under the same analysis as above because they do not introduce any new additional elements. Claims 2-10, 12-15, and 21 do not recite any additional elements beyond the abstract idea itself, while claims 17-20 are performed with the same high-level computing components identified in claim 16 such that they also amount to the words “apply it” with a computer and do not provide integration into a practical application. Accordingly, the additional elements of claims 1-21 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-21 are directed to an abstract idea.
Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a client device or computer with a processor executing instructions and an application stored in a memory and communicating with a trigger logic engine installed on a server to perform the receiving, accessing, determining, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components.

As evidence of the generic nature of the above recited additional elements, Examiner notes at least Fig. 21 and paras. [00134]-[00143] of Applicant’s specification, which describe a computer system 2100 implemented with one or more processors 2102 which “may be a general-purpose microprocessor, a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable entity that can perform calculations or other manipulations of information.” Further, the memory and storage medium are described with examples of many known/suitable types of data storage components in paras. [00136] & [00143]. Additionally, paras. [00141]-[00142] describe how client/server architectures are “typically” implemented, showing that such architectures are known in the art. From all of these disclosures, one of ordinary skill in the art would understand that any generic computing system including processing and storage components and operating in a client/server orientation may be utilized to implement the invention.

The use of a trained machine learning algorithm also amounts to mere instructions to apply the exception, as explained above. Examiner notes paras. [0032], [0041], & [0043] of the specification, which contemplate many types of high-level machine learning, artificial intelligence, and/or neural network implementations of the system, such that one of ordinary skill in the art would understand that known machine learning algorithms may be utilized to achieve the otherwise-abstract data analysis functions of the invention. Further, Examiner notes that it is well-understood, routine, and conventional to utilize various types of machine learning algorithms for analysis steps like data imputation, as evidenced by Pg 452 of Johnson et al. (Reference U on the PTO-892 mailed 3/20/2025); Pgs 10-14 of Leke et al. (Reference W on the PTO-892 mailed 3/20/2025); and Lu et al. (Reference V on the PTO-892 mailed 3/20/2025).

A step for measuring a value of plasma proteins or nucleic acids from a patient as in claim 1 amounts to insignificant extra-solution activity in the form of necessary data gathering (as explained above). Further, this activity is recognized as well-understood, routine, and conventional, as evidenced by para. [0087] of Applicant’s specification: “modeling tools and trigger logic engines as disclosed herein utilize features routinely measured for patients suspected of sepsis. Some of these features may be present in the electronic medical record (EMR) for the patient… and also utilize parameters specifically measured for hospitalized patients suspected of sepsis that may not be present in the electronic medical record (e.g., novel plasma proteins, nucleic acids, and the like)”; as well as MPEP 2106.05(d)(II), which notes that “determining the level of a biomarker in blood by any means” is well-understood, routine, and conventional.
Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the client/server and machine learning implementation and measurement of patient values in combination is to digitize/automate and provide input measurements for a patient data analysis operation that could otherwise be achieved as a certain method of organizing human activity. Thus, when considered as a whole and in combination, claims 1-21 are not patent eligible.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Morris et al. (US 20110105852 A1) in view of Jackson et al. (US 20080114576 A1).

Claim 1

Morris teaches a method for making dynamic risk predictions (Morris abstract), comprising:

receiving, via an application installed on a client device, a dataset associated with the patient, the dataset comprising a first data field and a second data field (Morris [0088], [0108], noting the system receives or otherwise accesses patient data, which per [0085] can include many types of populated patient data fields like age, gender, biomarker readings, medication information, etc.; see also [0095], noting some of the fields may have missing or null data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g.
the patient data described above) may be exchanged among an application installed on a client device and a remote server);

accessing, via the application installed on the client device, a trigger logic engine installed on a server communicatively coupled to the client device (Morris [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating an application installed on a client device may access data and logic functions installed at the server);

receiving, from the server via the application installed on the client device, a first predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e. at least a first predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the imputed values described above) may be exchanged among an application installed on a client device and a remote server);

receiving, from the server via the application installed on the client device, a first risk score and a first set of associated metrics based on the measured value and the first predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the first predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a first set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a client device and a remote server);

receiving, from the server via the application installed on the client device, a second predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e. at least a second predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the imputed values described above) may be exchanged among an application installed on a client device and a remote server);

receiving, from the server via the application installed on the client device, a second risk score and a second set of associated metrics based on the measured value and the second predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the second predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a second set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a client device and a remote server);

receiving, from the server via the application installed on the client device, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics (Morris [0099]-[0100], noting an average benefit (i.e. statistically derived metric) is determined for each intervention based on the confidence level of a benefit determined based on the variation in the risks of the different clones, i.e. based on evaluating all of the first and second risk scores and the first and second set of associated metrics representing the raw risk values, average, standard deviation, variability, etc. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the statistically derived metric described above) may be exchanged among an application installed on a client device and a remote server); and

determining whether the statistically derived metric exceeds a predetermined threshold (Morris [0100], noting the determined benefit (i.e. statistically derived metric) is compared to a benefit threshold).

In summary, Morris teaches a method for imputing patient data values for the purpose of generating risk scores for health conditions. Though Morris notes that the system may be used to predict risk for “any disease or condition” (see [0049]), it does not specifically include sepsis as one of the conditions, nor does it specify that sepsis-specific biomarkers like plasma proteins or nucleic acids are measured and entered into a first field of a dataset. Further, though Morris contemplates many types of predictive risk models, it does not mention a machine learning algorithm trained for sepsis diagnosis and treatment. Accordingly, Morris fails to explicitly disclose measuring a value of plasma proteins or nucleic acids from a patient suspected of sepsis; entering, via the application installed on the client device, the measured value in the first data field; and wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment.
However, Jackson teaches a system for predicting a risk of sepsis that measures a value of plasma proteins or nucleic acids from a patient suspected of sepsis (Jackson [0034]-[0038], [0051], noting blood samples are analyzed for biomarkers such as c-reactive protein and/or RNA for the purpose of predicting sepsis; monitored patients may include those in intensive care units, immunocompromised patients, etc. per [0042], i.e. patients suspected of sepsis) and enters the measurements into a dataset on a computer as input for a machine learning model trained for sepsis diagnosis and treatment (Jackson [0038], [0043], [0067]-[0068]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the client/server-based imputation and analysis process for any kind of condition risk prediction as in Morris to include data measuring, entering, and machine learning analysis functions specific to the field of sepsis monitoring and prediction as in Jackson in order to measure and analyze biomarkers specifically relevant to predicting sepsis, which is a disease that it would be advantageous to predict the onset of so that early treatment may be initiated and patient health outcomes improved (as suggested by Jackson [0002] & [0032]), as well as to utilize artificial intelligence and machine learning techniques known to be capable of identifying patterns in complex data systems such as clinical biomarkers (as suggested by Jackson [0023]-[0024]).
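As the rejection characterizes Morris, missing fields are filled by cloning the record once per candidate value, scoring each clone, and summarizing across the clones (expected risk, standard deviation, and a benefit compared to a threshold). The sketch below is a minimal illustration of that kind of clone-record scheme; the linear risk model and all values are our assumptions, not Morris's actual algorithm.

```python
# Rough sketch of clone-record imputation as the rejection characterizes
# Morris: clone the patient record once per candidate value for a missing
# field, score each clone, then derive summary statistics across clones.
# The toy linear risk model and all values are illustrative assumptions.
from statistics import mean, stdev

def risk_model(record):
    # Stand-in risk model; Morris contemplates many predictive models.
    return 0.5 * record["biomarker"] + 0.3 * record["age_factor"]

def impute_and_score(record, missing_field, candidates):
    scores = []
    for value in candidates:
        clone = dict(record)
        clone[missing_field] = value        # impute a different value per clone
        scores.append(risk_model(clone))    # risk score for each clone record
    return {
        "scores": scores,
        "expected_risk": mean(scores),      # "average or expected risk value"
        "uncertainty": stdev(scores),       # "standard deviation" across clones
    }

record = {"biomarker": 1.8, "age_factor": None}
summary = impute_and_score(record, "age_factor", candidates=[0.5, 1.0, 1.5])
benefit_threshold = 0.2
print(summary["expected_risk"], summary["uncertainty"] > benefit_threshold)
```

In a scheme like this, a wide spread across the clones signals that actually measuring the missing value would materially change the risk estimate, which is the intuition behind comparing a statistically derived metric to a threshold.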
Claim 11 Morris teaches a system, comprising: a client device comprising a memory configured to store instructions; and one or more processors communicatively coupled to the memory and configured to execute instructions and cause the system (Morris [0141]-[0144], noting computer processing hardware for executing stored instructions; see also [0037]-[0039], noting the system may be implemented with a client/server architecture, such that the computer processing hardware is considered to include a client device) to: receive, via an application installed on the client device, a dataset comprising a first data field and a second data field, wherein the first data field is populated with a measured value (Morris [0088], [0108], noting the system receives or otherwise accesses patient data, which per [0085] can include many types of populated patient data fields like age, gender, biomarker readings, medication information, etc.; see also [0095], noting some of the fields may have missing or null data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the patient data described above) may be exchanged among an application installed on a client device and a remote server); access, via the application installed on the client device, a trigger logic engine installed on a server communicatively coupled to the client device, (Morris [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating an application installed on a client device may access data and logic functions installed at the server); receive, from the server via the application installed on the client device, a first predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e. 
at least a first predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the imputed values described above) may be exchanged among an application installed on a client device and a remote server); receive, from the server via the application installed on the client device, a first risk score and a first set of associated metrics based on the measured value and the first predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the first predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a first set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a client device and a remote server); receive, from the server via the application installed on the client device, a second predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e. at least a second predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. 
the imputed values described above) may be exchanged among an application installed on a client device and a remote server); receive, from the server via the application installed on the client device, a second risk score and a second set of associated metrics based on the measured value and the second predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the second predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a second set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a client device and a remote server); receive, from the server via the application installed on the client device, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics (Morris [0099]-[0100], noting an average benefit (i.e. statistically derived metric) is determined for each intervention based on the confidence level of a benefit determined based on the variation in the risks of the different clones, i.e. based on evaluating all of the first and second risk scores and the first and second set of associated metrics representing the raw risk values, average, standard deviation, variability, etc. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. 
the statistically derived metric described above) may be exchanged among an application installed on a client device and a remote server); and determine whether the statistically derived metric exceeds a predetermined threshold, wherein a predetermined action is recommended if the statistically derived metric exceeds the predetermined threshold (Morris [0100], noting if the determined benefit (i.e. statistically derived metric) is compared to and exceeds a benefit threshold, a recommendation for the associated medical intervention is provided). In summary, Morris teaches a system for imputing patient data values for the purpose of generating risk scores for health conditions. Though Morris notes that the system may be used to predict risk for “any disease or condition” (see [0049]) via many types of risk models, it does not specifically include sepsis as one of the conditions, nor does it mention a machine learning algorithm trained for sepsis diagnosis and treatment as one of the risk models. Accordingly, Morris fails to explicitly disclose imputing the first and second predicted values wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment, wherein the one or more machine learning algorithms are trained using a training dataset comprising a plurality of patients and a plurality of clinical data values of the plurality of patients. However, Jackson teaches a system for predicting a risk of sepsis by measuring and inputting various clinical patient values into a machine learning model that has been trained for sepsis diagnosis and treatment via a training dataset correlating a plurality of patients and their clinical values (Jackson [0024], [0043], [0067]-[0068]). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the client/server-based imputation and analysis process for any kind of condition risk prediction as in Morris to be applied to the sepsis-specific machine learning-based risk prediction method of Jackson in order to measure and analyze biomarkers specifically relevant to predicting sepsis, which is a disease that it would be advantageous to predict the onset of so that early treatment may be initiated and patient health outcomes improved (as suggested by Jackson [0002] & [0032]), as well as to utilize artificial intelligence and machine learning techniques known to be capable of identifying patterns in complex data systems such as clinical biomarkers (as suggested by Jackson [0023]-[0024]). Claim 16 Morris teaches a non-transitory, computer readable medium storing instructions and an application which, when executed by a computer, cause the computer to perform a method (Morris [0141]-[0144], noting computer hardware for executing instructions stored in computer readable storage media; see also [0149], noting use of an application program), the method comprising: receiving, via the application installed on the computer, a dataset comprising a first data field and a second data field, wherein the first data field is populated with a measured value (Morris [0088], [0108], noting the system receives or otherwise accesses patient data, which per [0085] can include many types of populated patient data fields like age, gender, biomarker readings, medication information, etc.; see also [0095], noting some of the fields may have missing or null data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. 
the patient data described above) may be exchanged among an application installed on a computer and a remote server); accessing, via the application installed on the computer, a trigger logic engine installed on a server communicatively coupled to the computer; receiving, from the server via the application installed on the computer, a first predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e. at least a first predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the imputed values described above) may be exchanged among an application installed on a computer and a remote server); receiving, from the server via the application installed on the computer, a first risk score and a first set of associated metrics based on the measured value and the first predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the first predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a first set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a computer and a remote server); receiving, from the server via the application installed on the computer, a second predicted value associated with the second data field (Morris [0094]-[0095], [0113], noting missing or null data values are imputed in the records, such as by creating a plurality of clone records with different values (i.e.
at least a second predicted value) imputed into a data field with missing data. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the imputed values described above) may be exchanged among an application installed on a computer and a remote server); receiving, from the server via the application installed on the computer, a second risk score and a second set of associated metrics based on the measured value and the second predicted value (Morris [0097], [0114], noting a risk score is computed for each clone record (i.e. including the populated measured value and the second predicted value) and associated metrics such as average or expected risk value, uncertainty, standard deviation, etc. (i.e. a second set of associated metrics) are determined. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. the risk score and associated metrics described above) may be exchanged among an application installed on a computer and a remote server); receiving, from the server via the application installed on the computer, a statistically derived metric based on the first risk score, the first set of associated metrics, the second risk score, and the second set of associated metrics (Morris [0099]-[0100], noting an average benefit (i.e. statistically derived metric) is determined for each intervention based on the confidence level of a benefit determined based on the variation in the risks of the different clones, i.e. based on evaluating all of the first and second risk scores and the first and second set of associated metrics representing the raw risk values, average, standard deviation, variability, etc. See also [0037]-[0039], noting the system may be implemented with a client/server architecture, indicating that data (e.g. 
the statistically derived metric described above) may be exchanged among an application installed on a computer and a remote server); and determining whether the statistically derived metric exceeds a predetermined threshold, wherein a predetermined action is recommended if the statistically derived metric exceeds the predetermined threshold (Morris [0100], noting if the determined benefit (i.e. statistically derived metric) is compared to and exceeds a benefit threshold, a recommendation for the associated medical intervention is provided). In summary, Morris teaches a system for imputing patient data values for the purpose of generating risk scores for health conditions. Though Morris notes that the system may be used to predict risk for “any disease or condition” (see [0049]) via many types of risk models, it does not specifically include sepsis as one of the conditions, nor does it mention a machine learning algorithm trained for sepsis diagnosis and treatment as one of the risk models. Accordingly, Morris fails to explicitly disclose wherein the trigger logic engine comprises one or more machine learning algorithms trained for sepsis diagnosis and treatment, wherein the one or more machine learning algorithms are trained using a training dataset comprising a plurality of patients and a plurality of clinical data values of the plurality of patients. However, Jackson teaches a system for predicting a risk of sepsis by measuring and inputting various clinical patient values into a machine learning model that has been trained for sepsis diagnosis and treatment via a training dataset correlating a plurality of patients and their clinical values (Jackson [0024], [0043], [0067]-[0068]). 
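The training limitation the examiner reads onto Jackson (a model trained on a dataset pairing a plurality of patients with their clinical values) can be illustrated with a toy nearest-centroid classifier. The biomarker values and the model type here are illustrative assumptions, since the cited passages are not quoted as specifying either:

```python
def train(rows):
    """Nearest-centroid toy classifier standing in for the trained
    machine learning algorithm the claims recite (assumption: the
    actual model type is unspecified in the quoted passages)."""
    sums, counts = {}, {}
    for features, label in rows:
        counts[label] = counts.get(label, 0) + 1
        if label not in sums:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + v for s, v in zip(sums[label], features)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, features):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], features))

# Features: [C-reactive protein, lactate] -- biomarkers of the kind
# Jackson measures; the numbers are made up for illustration.
training_data = [([2.0, 1.1], 0), ([1.5, 0.9], 0),
                 ([9.5, 3.8], 1), ([8.0, 4.2], 1)]
model = train(training_data)
label = predict(model, [8.8, 3.5])   # elevated biomarkers -> sepsis class
```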
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the client/server-based imputation and analysis process for any kind of condition risk prediction as in Morris to be applied to the sepsis-specific machine learning-based risk prediction method of Jackson in order to measure and analyze biomarkers specifically relevant to predicting sepsis, which is a disease that it would be advantageous to predict the onset of so that early treatment may be initiated and patient health outcomes improved (as suggested by Jackson [0002] & [0032]), as well as to utilize artificial intelligence and machine learning techniques known to be capable of identifying patterns in complex data systems such as clinical biomarkers (as suggested by Jackson [0023]-[0024]). Claims 2 and 12 Morris in view of Jackson teaches the method of claim 1, and the combination further teaches wherein the first set of associated metrics is generated by determining a variability induced in the first risk score by a sampling variability in a within standard deviation value (Morris [0097], [0099], [0115], noting uncertainty, standard deviations, variability, and other metrics for the risk scores are determined, considered to include a variability induced in the first risk score by a sampling variability in a within standard deviation value because the missing data is imputed according to a distribution accounting for normal variation in the sample population per [0061]). Claim 12 recites substantially similar subject matter as claim 2, and is also rejected as above. Claim 3 Morris in view of Jackson teaches the method of claim 1, and the combination further teaches wherein the statistically derived metric is obtained by calculating a standard deviation of the first risk score and the second risk score, referred to as a between standard deviation (Morris [0097], [0099]-[0100], [0115], noting the average benefit (i.e. 
statistically derived metric) is based on calculating variability, standard deviation, or other uncertainty metrics for the clone risk scores, considered to include calculating a between standard deviation of the first and second risk scores). Claim 4 Morris in view of Jackson teaches the method of claim 1, and the combination further teaches wherein the statistically derived metric is obtained by calculating a total standard deviation that includes a between standard deviation and a within standard deviation value derived from the first risk score, the second risk score, or a mathematical combination of both (Morris [0097], [0099]-[0100], [0115], noting the average benefit (i.e. statistically derived metric) is based on calculating variability, standard deviation, or other uncertainty metrics for the clone risk scores, considered to include calculating a total standard deviation including a between and within standard deviation value derived from the first and second risk scores). Claims 5 and 14 Morris in view of Jackson teaches the method of claim 1, and the combination further teaches wherein obtaining the statistically derived metric comprises selecting a first risk score or second risk score, or a mathematical combination of both, a standard deviation, a between standard deviation, or a within standard
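Claims 2-5 distinguish a "within" standard deviation (variability inside each clone's risk score), a "between" standard deviation (spread across the clone scores), and a "total" standard deviation combining both. One conventional way to pool these is Rubin's rules for multiple imputation; that formula is an assumption made here for illustration, as the Office Action does not name a specific pooling method:

```python
import math

def pooled_sd(within_stds, scores):
    """Pool per-clone ("within") and across-clone ("between")
    variability into a total standard deviation via Rubin's rules
    (an assumption; the Office Action names no pooling formula)."""
    m = len(scores)
    mean = sum(scores) / m
    within_var = sum(s ** 2 for s in within_stds) / m
    between_var = sum((x - mean) ** 2 for x in scores) / (m - 1)
    total_var = within_var + (1 + 1 / m) * between_var
    return math.sqrt(between_var), math.sqrt(total_var)

# Three clone risk scores, each with the same per-clone uncertainty.
between_sd, total_sd = pooled_sd([0.05, 0.05, 0.05], [0.7, 0.8, 0.9])
```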

Prosecution Timeline

Jul 08, 2022
Application Filed
Sep 18, 2024
Non-Final Rejection — §101, §103
Jan 14, 2025
Applicant Interview (Telephonic)
Jan 14, 2025
Examiner Interview Summary
Feb 05, 2025
Response Filed
Mar 17, 2025
Final Rejection — §101, §103
Sep 18, 2025
Request for Continued Examination
Oct 03, 2025
Response after Non-Final Action
Oct 28, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
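The headline projections are simple arithmetic over the examiner's career data. A sketch, assuming the dashboard adds the reported interview lift, expressed in percentage points, directly to the career allow rate:

```python
granted, resolved = 62, 172

# Career allow rate straight from the examiner's resolved cases.
career_allow_rate = granted / resolved            # about 0.36 -> "36%"

# Reported interview lift in percentage points (assumption: the
# dashboard adds it directly to the base rate).
interview_lift_pts = 46.7

with_interview_pct = career_allow_rate * 100 + interview_lift_pts
```

This reproduces the displayed 36% grant probability and the roughly 83% figure shown for cases with an interview.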
