Prosecution Insights
Last updated: April 19, 2026
Application No. 17/791,880

A TOOL FOR SELECTING RELEVANT FEATURES IN PRECISION DIAGNOSTICS

Non-Final OA: §101, §103, §112
Filed: Jul 08, 2022
Examiner: HRANEK, KAREN AMANDA
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Prenosis Inc.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 36% (62 granted / 172 resolved; -16.0% vs TC avg)
Interview Lift: +46.7% (resolved cases with interview)
Avg Prosecution: 3y 7m (49 currently pending)
Total Applications: 221 (across all art units)
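As a sanity check, the headline figures above can be recomputed from the raw counts shown on the cards. This is a small illustrative script, not part of any real API: the function name is mine, and the interview-lift line assumes the +46.7% figure is measured against the ~36% career baseline, which the 83% with-interview card appears consistent with.

```python
# Recompute the dashboard's headline examiner statistics from the raw
# figures shown above. All numbers come from the report cards; the
# helper name is illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(62, 172)  # 62 granted of 172 resolved
print(f"career allow rate: {career:.1f}%")  # ~36.0%

# The "+46.7% interview lift" is roughly consistent with the 83%
# with-interview figure measured against the ~36% baseline.
lift = 83.0 - career
print(f"implied interview lift: {lift:.1f} points")
```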

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 172 resolved cases.
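The per-statute deltas above all back-solve to the same Tech Center baseline, which can be checked with a quick illustrative script (figures taken from the cards; the script itself is only a sketch):

```python
# Reconstruct the Tech Center average implied by each statute card above
# (examiner rate plus the stated delta vs the TC average).

cards = {
    "§101": (30.3, -9.7),
    "§103": (35.3, -4.7),
    "§102": (10.6, -29.4),
    "§112": (20.3, -19.7),
}

for statute, (rate, delta) in cards.items():
    tc_avg = rate - delta  # delta = examiner rate minus TC average
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
# Every card back-solves to the same ~40% TC baseline estimate.
```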

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/3/2025 has been entered.

Status of the Claims

The status of the claims as of the response filed 10/3/2025 is as follows: Claims 1, 3-8, and 11-20 are currently amended. Claim 2 is as previously presented. Claims 9-10 and 21-23 are original. Claims 1-23 are currently pending in the application and have been considered below.

Response to Amendment

Rejection Under 35 USC 101: The claims have been amended, but the 35 USC 101 rejections for claims 1-23 are upheld.

Rejection Under 35 USC 103: The amendments made to the claims introduce limitations that are not fully addressed in the previous Office action, and thus the corresponding 35 USC 103 rejections are withdrawn. However, Examiner will consider the amended claims in light of an updated prior art search and address their patentability with respect to prior art below.
Response to Arguments

Rejection Under 35 USC 101

On pages 10-12 of the response filed 10/3/2025, Applicant argues that the amendments made to claim 1, directed to communication between and performance of various steps by an application installed on a client device and a server, show that the “limitations clearly recite concrete and specific steps that occur ‘via an application installed on a client device’” as well as that the claimed steps “are not abstract, but rather are part of a practical application of patient treatment using specific computing devices and operations.” Applicant further alleges that “integrating specific ways information is processed between a client device and a server…. is a practical application of using an artificial intelligence system that is centered on solving a real-world practical health care need” (emphasis original at Pg. 12). Applicant makes similar arguments for independent claims 11 and 16 on Pgs. 12-15.

Applicant’s arguments are fully considered, but are not persuasive. Examiner maintains that the underlying functions of accessing a trained diagnostic model, receiving various imputed/calculated values/rankings, suggesting unmeasured features to be measured, and collecting an observation of a patient recite an abstract idea that could otherwise be achieved by and among human actors, for example a certain method of organizing human activity including managing personal behavior, relationships, or interactions between people.
For example, a clinician could access a trained diagnostic model or communicate with a more experienced colleague during an urgent care situation to exchange information and make decisions about diagnosis and treatment for a patient by imputing values for clinical measurement data fields while holding other features constant, evaluating outcomes with mathematical models, determining statistical parameters, assigning rankings to features to suggest features that should be measured, and collecting a suggested measurement of a patient by observing the patient (e.g. visually observing a breathing rate). Accordingly, these underlying functions still describe an abstract idea, and the performance of such steps via the computing elements of an application installed on a client device and a server exchanging data are evaluated as additional elements under Step 2A – Prong 2 and Step 2B. Under these considerations, the application running on a client device and server (including the diagnostic engine comprising one or more machine learning models trained for sepsis diagnosis and treatment) amount to instructions to “apply” the exception because they are recited at a high level of generality and serve as tools with which to digitize/automate the otherwise-abstract roles and functions of the clinician and more experienced colleague exchanging and analyzing clinical information. Examiner notes that solving a “real-world” problem is not the same as integrating an abstract idea into a practical application under Step 2A – Prong 2 or providing “significantly more” than an abstract idea under Step 2B as outlined in MPEP 2106. 
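For readers less familiar with this class of system, the impute/evaluate/rank procedure the Examiner paraphrases here can be sketched in a few lines. Everything below is illustrative: the toy model, the feature names, the choice of two imputed values, and the use of outcome spread as the "statistical parameter" are assumptions for the sketch, not taken from the application or the cited art.

```python
from typing import Callable, Dict, List, Optional, Tuple

def rank_unmeasured(features: Dict[str, Optional[float]],
                    model: Callable[[Dict[str, float]], float],
                    low: float, high: float) -> List[Tuple[str, float]]:
    """For each unmeasured (None) feature: impute a first and a second
    value while holding the other unmeasured features constant, evaluate
    the model on each, derive a statistical parameter from the two
    outcomes (here, their absolute spread), and rank features by it."""
    unmeasured = [name for name, value in features.items() if value is None]
    scores = []
    for target in unmeasured:
        outcomes = []
        for imputed in (low, high):  # first and second imputed values
            filled = {k: (v if v is not None else low)
                      for k, v in features.items()}
            filled[target] = imputed        # vary only the target feature
            outcomes.append(model(filled))  # outcome from the trained model
        scores.append((target, abs(outcomes[1] - outcomes[0])))
    # Highest spread first: the features most worth measuring next.
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy stand-in for the trained diagnostic model.
toy_model = lambda f: 2.0 * f["lactate"] + 0.1 * f["resp_rate"]

ranked = rank_unmeasured(
    {"heart_rate": 88.0, "lactate": None, "resp_rate": None},
    toy_model, low=0.0, high=1.0)
print(ranked[0][0])  # the toy model is most sensitive to lactate
```

In this toy run, varying the imputed lactate value moves the outcome far more than varying the respiratory rate, so lactate would be the first feature "suggested" for measurement.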
In the instant case, it appears that abstract diagnostic data sharing and analysis functions are merely being implemented in a known client/server computing architecture such that these otherwise-abstract diagnostic operations are achieved digitally and via high-level “machine learning” automation, which does not provide an improvement to a computer or other technical field, nor any other “practical application” as outlined in MPEP 2106.04(d), and instead amounts to mere instructions to “apply” the abstract idea as outlined in MPEP 2106.05(f). For the reasons outlined above, the 35 USC 101 rejections are upheld for claims 1-23.

Rejection Under 35 USC 103

On pages 15-18 of the response, Applicant argues that none of the presently cited references teach each and every one of the newly-amended limitations, and appears to specifically point to subject matter directed to receiving data in a client-server arrangement. Applicant’s arguments are fully considered, but are not persuasive. Examiner submits that Rapaka does teach/suggest use of a system in a client-server arrangement (see [0070], noting the machine-learnt model may be hosted on a cloud server for use by a user at a workstation or other client device) as claimed, as explained in more detail in the updated 35 USC 103 rejections below.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 11, and 16 each recite limitations for “receiving, from the server via the application installed on the client device” first and second values imputed to an unmeasured feature in a dataset, as well as a statistical parameter that is determined using the first outcome and the second outcome. Applicant’s original specification does not provide sufficient written support for an application at a client device receiving imputed values or a statistical parameter from the server. At most, Fig. 2 and paras. [0044] & [0058] show that the client devices can receive “suggested features” and their assigned ranks and/or “a predicted outcome or diagnostic” from the server, while paras. [0044] & [0059]-[0072] describe how the imputation, statistics, and modeling tools of the server operate to impute various values to the dataset and determine statistical parameters for the dataset.
However, these imputation and statistical parameter calculation steps appear to be internal to the server, and there is no indication that the imputed values and statistical parameters themselves are ever transmitted to the client device as now recited in the amended independent claims. Because these functions were not present in the original disclosure as filed, the corresponding limitations constitute new matter and are rejected under 35 U.S.C. 112(a). Claims 2-10, 12-15, and 17-23 are also rejected on this basis because they inherit the unsupported limitation due to their dependence on claims 1, 11, or 16.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 2 recites “prior to imputing the first value, selecting and retrieving a filtered dataset from a master dataset….” There is no positively recited “imputing” step in parent claim 1, which now only recites receiving imputed values from the server.
Such values may have been imputed at any time in any manner under the broadest reasonable interpretation of the claim, such that it is now unclear when the functions of claim 2 are intended to occur because the imputing function could have occurred outside the scope of the claimed method steps of claim 1. For purposes of examination, Examiner will interpret the limitation of claim 2 as occurring before the imputed values are received.

Claim 12 recites “wherein to assign the unmeasured feature a ranking corresponding to the statistical parameter, the one or more processors execute the instructions to identify, in a filtered dataset, a relative importance of the unmeasured feature….” However, the one or more processors executing instructions as in parent claim 11 are not recited as assigning a ranking to the unmeasured feature; instead, they merely receive an assigned ranking that has been determined by the server. It is therefore unclear how the one or more processors of claim 11 would execute instructions to perform the assigning function recited in claim 12, because it has already been performed by a different computing element (the server). For purposes of examination, Examiner interprets this limitation as describing how the server assigns the ranking before it is received at the client device.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

In the instant case, claims 1-10 and 21 are directed to a method (i.e. a process), claims 11-15 and 22 are directed to a system (i.e. a machine), and claims 16-20 and 23 are directed to a non-transitory, computer readable medium (i.e.
a manufacture). Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A – Prong 1

Independent claims 1, 11, and 16 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g. managing personal behavior, relationships, or interactions between people. Specifically, claim 1 recites:

A method for decision making for diagnosis that leads to treatment of a patient in an urgent care situation given at least one feature in a dataset associated with the patient is measured, comprising: during the urgent care situation, accessing, via an application installed on a client device, a diagnostic engine installed on a server communicatively coupled to the client device, wherein the diagnostic engine comprises one or more machine learning models trained and configured for sepsis diagnosis and treatment; receiving, from the server via the application installed on the client device, a first value imputed to an unmeasured feature in the dataset while holding another first remaining unmeasured feature in the dataset constant, wherein the dataset comprises at least one of time-series measurements for treatment information, outcome information, or actions taken by healthcare personnel in response to metrology information that includes therapeutic measures, medication administration events, or dosages; receiving, from the server via the application installed on the client device, a first outcome evaluated with the one or more machine learning models using the first imputed value; receiving, from the server via the application installed on the client device, a second value imputed to the unmeasured feature in the dataset while holding another second remaining unmeasured feature in the dataset constant; receiving, from the server via the application installed on the client device, a second outcome evaluated with the
one or more machine learning models using the second imputed value; receiving, from the server via the application installed on the client device, a statistical parameter that is determined using the first outcome and the second outcome; receiving, from the server via the application installed on the client device, an assigned ranking for the unmeasured feature corresponding to the statistical parameter; suggesting, based at least on the assigned ranking, one or more unmeasured features in the dataset to be measured; and collecting an observation of the patient based on a measurement of the one or more suggested unmeasured features. Similarly, claim 11 recites: A system for decision making for diagnosis that leads to treatment of a patient in an urgent care situation given at least one feature in a dataset associated with the patient is measured, comprising: a client device comprising a memory configured to store instructions; and one or more processors communicatively coupled with the memory, and configured to execute the instructions to cause the system to: during the urgent care situation, access, via an application installed on the client device, a diagnostic engine installed on a server communicatively coupled to the client device, wherein the diagnostic engine comprises one or more machine learning models trained and configured for diagnosis and treatment; receive, from the server via the application installed on the client device, a first value imputed to an unmeasured feature in the dataset while holding another first remaining unmeasured feature in the dataset constant; wherein the dataset comprises at least one of time-series measurements for treatment information, outcome information, or actions taken by healthcare personnel in response to metrology information that includes therapeutic measures, medication administration events, or dosages; receive, from the server via the application installed on the client device, a first outcome evaluated with the 
one or more machine learning models using the first imputed value; receive, from the server via the application installed on the client device, a second value imputed to the unmeasured feature in the dataset while holding another second remaining unmeasured feature in the dataset constant; receive, from the server via the application installed on the client device, a second outcome evaluated with the one or more machine learning models using the second imputed value; receive, from the server via the application installed on the client device, a statistical parameter that is determined using the first outcome and the second outcome; receive, from the server via the application installed on the client device, an assigned ranking for the unmeasured feature corresponding to the statistical parameter; suggest, based at least on the assigned ranking, one or more unmeasured features in the dataset to be measured; and collect, via an input communicatively coupled to the client device, an observation of the patient based on a measurement of the one or more suggested unmeasured features. 
Similarly, claim 16 recites: A non-transitory, computer readable medium storing instructions and an application which, when executed by a computer, cause the computer to perform a method for decision making for diagnosis that leads to treatment of a patient in an urgent care situation given at least one feature in a dataset associated with the patient is measured, the method comprising: during the urgent care situation, accessing, via the application installed on the computer, a diagnostic engine installed on a server communicatively coupled to the computer, wherein the diagnostic engine comprises one or more machine learning models trained and configured for sepsis diagnosis and treatment; receiving, from the server via the application installed on the computer, a first value imputed to an unmeasured feature in the dataset while holding another first remaining unmeasured feature in the dataset constant; wherein the dataset comprises at least one of time-series measurements for treatment information, outcome information, or actions taken by healthcare personnel in response to metrology information that includes therapeutic measures, medication administration events, or dosages; receiving, from the server via the application installed on the computer, a first outcome evaluated with the one or more machine learning models using the first imputed value; receiving, from the server via the application installed on the computer, a second value imputed to the unmeasured feature in the dataset while holding another second remaining unmeasured feature in the dataset constant; receiving, from the server via the application installed on the computer, a second outcome evaluated with the one or more machine learning models using the second imputed value; receiving, from the server via the application installed on the computer, a statistical parameter that is determined using the first outcome and the second outcome; receiving, from the server via the application installed on 
the computer, an assigned ranking for the unmeasured feature corresponding to the statistical parameter; suggesting, based at least on the assigned ranking, one or more unmeasured features in the dataset to be measured; and collecting an observation of the patient based on a measurement of the one or more suggested unmeasured features. But for the recitation of generic computer components like a client device, processor, non-transitory computer-readable medium, server with a diagnostic engine comprising machine learning models, and an input communicatively coupled to the client device, each of the italicized steps highlighted in the independent claims, when considered as a whole, describe a medical diagnosis and care evaluation operation that a human actor such as a clinician in an urgent care situation could achieve by managing their personal behavior and/or interactions with colleagues. For example, a clinician could access a trained diagnostic model or communicate with a more experienced colleague during an urgent care situation to exchange information and make decisions about diagnosis and treatment for a patient by imputing values for clinical measurement data fields while holding other features constant, evaluating outcomes with mathematical models, determining statistical parameters, assigning rankings to features to suggest features that should be measured, and collecting a suggested measurement of a patient by observing the patient (e.g. visually observing a breathing rate). Thus, each independent claim recites an abstract idea in the form of a certain method of organizing human activity. Dependent claims 2-10, 12-15, and 17-23 inherit the limitations that recite an abstract idea from their dependence on claims 1, 11, or 16, and thus these claims also recite an abstract idea under the Step 2A – Prong 1 analysis. 
In addition, claims 2-10, 12-15, and 17-23 recite further limitations that merely further describe the abstract idea identified in the independent claims. Specifically, claims 2-8, 12-15, and 17-23 recite further steps for or descriptions of selecting/filtering data and determining statistical parameters that a human actor would be capable of achieving by managing their personal behavior and/or interactions with colleagues to manipulate data and make decisions as described above. Claim 9 recites selecting a sampling frequency of the unmeasured feature based on the ranking corresponding to the statistical parameter, which a human actor could achieve by thinking about how often a given feature should be measured based on how important it is. Claim 10 recites selecting a sensor device to collect a measurement from the unmeasured feature based on a precision and an accuracy of the sensor device and on the ranking of the unmeasured feature, which a human actor could achieve by thinking about features of the sensor devices and making an appropriate selection based on various factors.

However, recitation of an abstract idea is not the end of the analysis. Each of the claims must be analyzed for additional elements that indicate the abstract idea is integrated into a practical application to determine whether the claim is considered to be “directed to” an abstract idea.

Step 2A – Prong 2

The judicial exception is not integrated into a practical application. In particular, independent claims 1, 11, and 16 do not include additional elements that integrate the abstract idea into a practical application. Claims 1, 11, and 16 each include the additional elements of an application installed on a client device or computer, and a diagnostic engine installed on a server communicatively coupled to the client device or computer, wherein the diagnostic engine comprises one or more trained machine learning models.
Claim 11 further recites the additional elements of the client device comprising a memory configured to store instructions, one or more processors communicatively coupled with the memory and configured to execute the instructions to cause the system to perform the method steps, and an input communicatively coupled to the client device. Claim 16 further recites the additional elements of a non-transitory, computer readable medium storing instructions and an application which, when executed by a computer, cause the computer to perform the method steps. These additional elements, when considered in the context of each claim as a whole, amount to instructions to “apply” the abstract idea in a computing environment because they merely utilize high-level computing components like a client device computer or processor executing stored instructions/applications and communicating with a server to digitize and/or automate data sharing and analysis functions that may otherwise be achieved by and among human actors, as described above (see MPEP 2106.05(f)). Similarly, specifying that the models are machine learning models merely recites the high-level concept of “machine learning” as a means to digitize and/or automate the otherwise-abstract function of using mathematical or statistical models to evaluate clinical outcomes, and also amounts to instructions to “apply” the exception. The use of an input communicatively coupled to the client device to collect an observation as in claim 11 also amounts to instructions to “apply” the exception, because it merely digitizes the input/collection of patient data that could otherwise be collected via human observation. Accordingly, each independent claim as a whole is directed to an abstract idea without integration into a practical application.
The judicial exception recited in dependent claims 2-10, 12-15, and 17-23 is not integrated into a practical application under the same analysis as above because they do not introduce any new additional elements. Claims 2-10, 12-15, and 17-23 are performed with the same high-level computing components identified in claims 1, 11, and 16 such that they also amount to the words “apply it” with a computer and do not provide integration into a practical application. Accordingly, the additional elements of claims 1-23 do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1-23 are directed to an abstract idea.

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a client device or computer with a processor executing instructions and an application stored in memory and communicating with a diagnostic engine installed on a server to perform the accessing, receiving, suggesting, collecting, etc. steps of the invention amount to mere instructions to apply the exception using generic computer components. As evidence of the generic nature of the above recited additional elements, Examiner notes at least Fig. 15 and paras.
[00124]-[00133] of Applicant’s specification, which describe a computer system 1500 implemented with one or more processors 1502 which “may be a general-purpose microprocessor, a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable entity that can perform calculations or other manipulations of information.” Further, the memory or storage medium are described with examples of many known/suitable types of data storage components in paras. [00126] & [00133]. Additionally, paras. [0131]-[00132] describe how client/server architectures are “typically” implemented, showing that such architectures are known in the art. Further, para. [0046] notes that “Input device 214 may include a stylus, a mouse, a keyboard, a touch screen, a microphone, or any combination thereof,” showing that many types of known input means may be utilized to enter data to the system. From these disclosures, one of ordinary skill in the art would understand that any generic computing system including processing, storage, and input components and operating in a client/server orientation may be utilized to implement the invention. The use of one or more trained machine learning models to perform the outcome evaluation steps amounts to mere instructions to “apply” the exception as explained above. Examiner notes paras. [0043], [0076], & [00114] of the specification, which contemplate many types of high-level machine learning, artificial intelligence, and/or neural network implementations of the system such that one of ordinary skill in the art would understand that known machine learning models may be utilized to achieve the outcome evaluation functions of the invention. 
Further, Examiner notes that it is well-understood, routine, and conventional to utilize various types of machine learning models for clinical outcome evaluation, as evidenced by at least [0005] & [0014] of Rapaka et al. (US 20180315182 A1) and [0002] & [0089]-[0090] of Boussios et al. (US 20220148695 A1). Analyzing these additional elements as an ordered combination adds nothing that is not already present when considering the elements individually; the overall effect of the client/server, machine learning, and input elements in combination is to digitize and/or automate a diagnostic decision making operation that could otherwise be achieved as a certain method of organizing human activity. Thus, when considered as a whole and in combination, claims 1-23 are not patent eligible.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 11-15 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Rapaka et al. (US 20180315182 A1).

Claim 11

Rapaka teaches a system for decision making for diagnosis that leads to treatment of a patient in an urgent care situation given at least one feature in a dataset associated with the patient is measured, comprising: a client device comprising a memory configured to store instructions; and one or more processors communicatively coupled with the memory, and configured to execute the instructions to cause the system to (Rapaka Fig. 6, [0083], [0089]-[0090], noting a computing system such as a workstation (i.e.
a client device) with a processor and memory that store and execute instructions for implementing the invention):

during the urgent care situation, access, via an application installed on a client device, a diagnostic engine installed on a server communicatively coupled to the client device, wherein the diagnostic engine comprises one or more machine learning models trained and configured for diagnosis and treatment (Rapaka [0020], noting the system utilizes a machine-learnt classifier trained for diagnosis and prognosis for patients in emergency situations; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer (e.g. the workstation, computer, or other data processing system as in [0021] & [0083]) includes software means (e.g. an application) for accessing the remotely hosted model);

receive, (Rapaka [0067], [0078]-[0079], noting missing values for a type of data may be substituted (i.e. imputed) via stochastic sampling; because the process of evaluating the varied substitute data values results in a determination of which data type is most important or significant in affecting the output of a classifier model as compared to other unmeasured data types (as in [0078]-[0079]), the values are considered to be imputed one at a time while holding other unmeasured features constant so that the effect of altering a single given variable type can be understood);

wherein the dataset comprises at least one of time-series measurements for treatment information, outcome information, or actions taken by healthcare personnel in response to metrology information that includes therapeutic measures, medication administration events, or dosages (Rapaka [0001], [0031], noting many types of data may be collected and evaluated for a patient (i.e. as part of the dataset), including clinical reports, medical images, blood biomarker information, patient demographics, patient history, non-invasive measurements, sensor data, etc., considered equivalent to at least one of the types of data listed in this limitation);

receive, from the server via the application installed on the client device, a first outcome evaluated with the one or more machine learning models using the first imputed value (Rapaka [0068], [0078]-[0079], noting the stochastically sampled (i.e. imputed) values for missing data types are used as input to a machine learning classifier that outputs a prediction for the patient (e.g. a condition diagnosis, a risk, an outcome for treatment) for each separate instance of the stochastic distribution; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server));

receive, (Rapaka [0067], [0078]-[0079], noting missing values for a type of data may be substituted (i.e. imputed) via stochastic sampling; because the process of evaluating the varied substitute data values results in a determination of which data type is most important or significant in affecting the output of a classifier model as compared to other unmeasured data types (as in [0078]-[0079]), the values are considered to be imputed one at a time while holding other unmeasured features constant so that the effect of altering a single given variable type can be understood);

receive, from the server via the application installed on the client device, a second outcome evaluated with the one or more machine learning models using the second imputed value (Rapaka [0068], [0078]-[0079], noting the stochastically sampled (i.e.
imputed) values for missing data types are used as input to a machine learning classifier that outputs a predicted patient outcome for each separate instance of the stochastic distribution; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server));

receive, (Rapaka [0078]-[0079], noting the variation in the distribution of outcomes (i.e. a statistical parameter determined with at least the first and second outcomes) indicates the importance of the missing information (i.e. the unmeasured feature));

receive, from the server via the application installed on the client device, an assigned ranking for the unmeasured feature corresponding to the statistical parameter (Rapaka [0017], [0077]-[0079], noting the importance of each missing (i.e. unmeasured) feature is ranked and output based on the variation in the distribution of outcomes (i.e. the statistical parameter); see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server)),

suggest, based at least on the assigned ranking, one or more unmeasured features in the dataset to be measured (Rapaka [0017], [0078], noting the system outputs suggested types of missing data to prioritize for collection based on the ranked importance of each missing feature in influencing the patient predictions); and

collect, via an input communicatively coupled to the client device, an observation of the patient based on a measurement of the one or more suggested unmeasured features (Rapaka [0078], noting listing the prioritized features allows tests to be ordered or information to be gathered for that patient; see also [0069], noting that patient results, user-performed actions, and additional information are stored as the model is deployed in actual use and used to retrain the models once sufficient additional data is collected, indicating that user actions to facilitate collection of the recommended missing data types as in [0078] are actually performed in an ongoing manner, for example via user input at a user interface as in [0033]-[0034]).

In summary, Rapaka teaches a system of imputing missing values to prioritize and suggest collection of the data types most important/impactful for a machine learning diagnostic prediction for a patient in an emergency situation. Rapaka discloses that such a system may include workstations interacting with a server hosting the machine-learnt model for performing the imputation and prioritization steps, whose outputs may include a structured clinical report with key findings, filling in fields in a patient medical record, or other outputs (see [0070]). Rapaka further teaches that the system may utilize a variety of flexible processing architectures, including use of a plurality of processing devices for parallel or sequential processing of data over a network (see [0089]-[0090]).
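The imputation-and-ranking procedure the examiner maps onto Rapaka can be sketched in a few lines: sample substitute values for one unmeasured feature at a time (the others held constant), score a model on each sample, and rank features by the variance of the resulting predictions. This is a minimal stdlib illustration, not Rapaka's implementation; the classifier (`predict_risk`), feature names, and prior distributions are all hypothetical stand-ins.

```python
import random
import statistics

# Hypothetical stand-in for a trained clinical classifier: any deterministic
# function mapping a complete feature vector to a risk score works here.
def predict_risk(features):
    return (0.02 * features["lactate"]
            + 0.01 * features["heart_rate"]
            + 0.005 * features["bilirubin"])

def rank_unmeasured_features(measured, unmeasured_priors, model,
                             n_samples=200, seed=0):
    """Impute one unmeasured feature at a time (the others held constant at
    their prior means), score the model on each stochastic sample, and rank
    features by the variance of the predictions, i.e. the 'statistical
    parameter' role the rejection ascribes to Rapaka [0078]-[0079]."""
    rng = random.Random(seed)
    spread = {}
    for name, (mean, stdev) in unmeasured_priors.items():
        outcomes = []
        for _ in range(n_samples):
            row = dict(measured)
            for other, (m, _s) in unmeasured_priors.items():
                row[other] = m                  # hold other unmeasured features fixed
            row[name] = rng.gauss(mean, stdev)  # vary only the feature under test
            outcomes.append(model(row))
        spread[name] = statistics.pvariance(outcomes)
    # Highest spread first: measuring that feature changes the prediction most.
    return sorted(spread, key=spread.get, reverse=True)

ranking = rank_unmeasured_features(
    measured={"heart_rate": 110.0},
    unmeasured_priors={"lactate": (2.0, 1.5), "bilirubin": (1.0, 0.2)},
    model=predict_risk,
)
print(ranking)  # lactate's wide prior dominates the prediction spread, so it ranks first
```

Listing the top-ranked feature is then the "suggest one or more unmeasured features to be measured" step; the collection step would feed the new measurement back into `measured`.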
Accordingly, although Rapaka fails to explicitly disclose that the client device receives the first and second imputed values and the statistical parameter from the server, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of Rapaka such that all data types evaluated and determined by the system are shared among all computing elements (e.g. the workstation and cloud server of [0070]) in order to implement the more flexible data processing architectures disclosed in [0089]-[0090].

Claim 12

Rapaka teaches the system of claim 11, and further teaches wherein to assign the unmeasured feature a ranking corresponding to the statistical parameter, the one or more processors execute the instructions to identify, in a filtered dataset, a relative importance of the unmeasured feature with one or more known outcomes using model-based feature importance methodologies (Rapaka [0017], [0077]-[0079], noting the importance of each missing (i.e. unmeasured) feature is ranked relative to the other missing features based on the variation in the distribution of outcomes output by the classifier for each missing feature type; the variation in the distribution of outcomes output by the classifier is considered to be a filtered dataset because it contains only the outputs of the classifier and not all data known or evaluated by the system).

Claim 13

Rapaka teaches the system of claim 11, and further teaches wherein determining the statistical parameter with the first outcome and the second outcome comprises accessing a master dataset comprising multiple datasets associated with known outcomes (Rapaka [0040], noting the system can access extracted features and known ground truth outcomes for multiple other patients (i.e. a master dataset); Examiner notes that the claim provides no recitation of how the accessed master dataset is specifically used to accomplish determining a statistical parameter with the first outcome and the second outcome, so the mere accessing of this type of data at all (e.g. for the purpose of training the classifier model that leads to the first and second outcomes as in Rapaka) is considered sufficient to anticipate the claim).

Claim 14

Rapaka teaches the system of claim 11, and further teaches wherein determining the statistical parameter with the first outcome and the second outcome comprises determining a variance value associated with a model for an outcome, the model based on the unmeasured feature and at least one other distinct feature in a dataset, and evaluating a variation of prediction for an outcome with the model using multiple imputed values for the unmeasured feature in the dataset (Rapaka [0078]-[0079], noting the variation in the distribution of outcomes for a model (i.e. a variance value associated with a model for an outcome) based on imputing a distribution of different values and evaluating known information is used to determine the importance of each missing feature).
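Claim 14's "variance value" reduces, in the simplest case, to the spread of model outcomes across the imputations of a single feature. A two-outcome illustration with made-up numbers (the outcome values are not from Rapaka or the claims):

```python
import statistics

# Illustrative (made-up) classifier outcomes for two imputed values of the
# same unmeasured feature, every other input held constant.
first_outcome = 0.41   # model output using the first imputed value
second_outcome = 0.67  # model output using the second imputed value

# The "variance value associated with a model for an outcome": the spread
# of predictions across imputations. A large spread suggests the unmeasured
# feature strongly influences the model's prediction.
variance = statistics.pvariance([first_outcome, second_outcome])
print(round(variance, 4))  # 0.0169
```

With more than two imputed values, the same `statistics.pvariance` call over the full outcome list plays the role of the claimed statistical parameter.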
Claim 15

Rapaka teaches the method of claim 1, and further teaches wherein determining the statistical parameter with the first outcome and the second outcome comprises determining a rule for assessing a decision value based on a dataset, wherein the dataset comprises collected values for multiple measured features in the dataset and the unmeasured feature in the dataset, and wherein the rule is consistent with: (1) multiple known outcomes from a master dataset that comprises multiple datasets and (2) one or more measured features (Rapaka [0040], [0069], noting the system trains and iteratively updates a machine learning classifier (considered to encompass determining a rule for assessing a decision value based on a dataset because classifiers can comprise rules) based on extracted features (i.e. collected values for multiple measured features in an instance), synthesized or substituted information (i.e. unmeasured features), and known ground truth outcomes for previous and current patients (i.e. a master dataset comprising multiple datasets and measured features)).

Claim 22

Rapaka teaches the method of claim 1, and further teaches wherein the other first remaining unmeasured feature is same as the other second remaining unmeasured feature (Rapaka [0067], [0078]-[0079], noting missing values for a type of data may be substituted (i.e. imputed) via stochastic sampling; because the process of evaluating the varied substitute data values results in a determination of which data type is most important or significant in affecting the output of a classifier model as compared to other unmeasured data types (as in [0078]-[0079]), the values are considered to be imputed one at a time while holding other unmeasured features (i.e. the same other unmeasured features) constant so that the effect of altering a single given variable type can be understood).

Claims 1-6, 8, 16-19, 21, and 23 are rejected under 35 U.S.C.
103 as being unpatentable over Rapaka in view of Jackson et al. (US 20080114576 A1).

Claims 1 and 16

Rapaka teaches a method for decision making for diagnosis that leads to treatment of a patient in an urgent care situation given at least one feature in a dataset associated with the patient is measured (Rapaka abstract, [0016], noting a method for diagnostic decision-making in emergency situations based on measured patient data), comprising:

during the urgent care situation, accessing, via an application installed on a client device, a diagnostic engine installed on a server communicatively coupled to the client device, wherein the diagnostic engine comprises one or more machine learning models trained and configured for (Rapaka [0020], noting the system utilizes a machine-learnt classifier trained for diagnosis and prognosis for patients in emergency situations; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer (e.g. the workstation, computer, or other data processing system as in [0021] & [0083]) includes software means (e.g. an application) for accessing the remotely hosted model);

receiving, (Rapaka [0067], [0078]-[0079], noting missing values for a type of data may be substituted (i.e. imputed) via stochastic sampling; because the process of evaluating the varied substitute data values results in a determination of which data type is most important or significant in affecting the output of a classifier model as compared to other unmeasured data types (as in [0078]-[0079]), the values are considered to be imputed one at a time while holding other unmeasured features constant so that the effect of altering a single given variable type can be understood);

wherein the dataset comprises at least one of time-series measurements for treatment information, outcome information, or actions taken by healthcare personnel in response to metrology information that includes therapeutic measures, medication administration events, or dosages (Rapaka [0001], [0031], noting many types of data may be collected and evaluated for a patient (i.e. as part of the dataset), including clinical reports, medical images, blood biomarker information, patient demographics, patient history, non-invasive measurements, sensor data, etc., considered equivalent to at least one of the types of data listed in this limitation);

receiving, from the server via the application installed on the client device, a first outcome evaluated with the one or more machine learning models using the first imputed value (Rapaka [0068], [0078]-[0079], noting the stochastically sampled (i.e. imputed) values for missing data types are used as input to a machine learning classifier that outputs a prediction for the patient (e.g. a condition diagnosis, a risk, an outcome for treatment) for each separate instance of the stochastic distribution; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server));

receiving, (Rapaka [0067], [0078]-[0079], noting missing values for a type of data may be substituted (i.e. imputed) via stochastic sampling; because the process of evaluating the varied substitute data values results in a determination of which data type is most important or significant in affecting the output of a classifier model as compared to other unmeasured data types (as in [0078]-[0079]), the values are considered to be imputed one at a time while holding other unmeasured features constant so that the effect of altering a single given variable type can be understood);

receiving, from the server via the application installed on the client device, a second outcome evaluated with the one or more machine learning models using the second imputed value (Rapaka [0068], [0078]-[0079], noting the stochastically sampled (i.e. imputed) values for missing data types are used as input to a machine learning classifier that outputs a predicted patient outcome for each separate instance of the stochastic distribution; see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server));

receiving, (Rapaka [0078]-[0079], noting the variation in the distribution of outcomes (i.e. a statistical parameter determined with at least the first and second outcomes) indicates the importance of the missing information (i.e. the unmeasured feature));

receiving, from the server via the application installed on the client device, an assigned ranking for the unmeasured feature corresponding to the statistical parameter (Rapaka [0017], [0077]-[0079], noting the importance of each missing (i.e. unmeasured) feature is ranked based on the variation in the distribution of outcomes (i.e. the statistical parameter); see also [0070], noting the machine-learnt model may be hosted on a cloud server, indicating that a client computer receives the results of the machine learning model from the server (e.g. via appropriate software for communicating with the server)),

suggesting, based at least on the assigned ranking, one or more unmeasured features in the dataset to be measured (Rapaka [0017], [0078], noting the system outputs suggested types of missing data to prioritize for collection based on the ranked importance of each missing feature in influencing the patient predictions); and

collecting an observation of the patient based on a measurement of the one or more suggested unmeasured features (Rapaka [0078], noting listing the prioritized features allows tests to be ordered or information to be gathered for that patient; see also [0069], noting that patient results, user-performed actions, and additional information are stored as the model is deployed in actual use and used to retrain the models once sufficient additional data is collected, indicating that user actions to facilitate collection of the recommended missing data types as in [0078] are actually performed in an ongoing manner).

In summary, Rapaka teaches a method of imputing missing values to prioritize and suggest collection of the data types most important/impactful for a machine learning diagnostic prediction for a patient in an emergency situation. Rapaka discloses tha…

Prosecution Timeline

Jul 08, 2022
Application Filed
Oct 18, 2024
Non-Final Rejection — §101, §103, §112
Mar 04, 2025
Applicant Interview (Telephonic)
Mar 04, 2025
Examiner Interview Summary
Apr 04, 2025
Response Filed
May 02, 2025
Final Rejection — §101, §103, §112
Oct 03, 2025
Request for Continued Examination
Oct 10, 2025
Response after Non-Final Action
Oct 29, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580072
CLOUD ANALYTICS PACKAGES
2y 5m to grant Granted Mar 17, 2026
Patent 12555667
SYSTEMS AND METHODS FOR USING AI/ML AND FOR CARDIAC AND PULMONARY TREATMENT VIA AN ELECTROMECHANICAL MACHINE RELATED TO UROLOGIC DISORDERS AND ANTECEDENTS AND SEQUELAE OF CERTAIN UROLOGIC SURGERIES
2y 5m to grant Granted Feb 17, 2026
Patent 12548656
SYSTEM AND METHOD FOR AN ENHANCED PATIENT USER INTERFACE DISPLAYING REAL-TIME MEASUREMENT INFORMATION DURING A TELEMEDICINE SESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12475978
ADAPTABLE OPERATION RANGE FOR A SURGICAL DEVICE
2y 5m to grant Granted Nov 18, 2025
Patent 12462911
CLINICAL CONCEPT IDENTIFICATION, EXTRACTION, AND PREDICTION SYSTEM AND RELATED METHODS
2y 5m to grant Granted Nov 04, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
83%
With Interview (+46.7%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 172 resolved cases by this examiner. Grant probability derived from career allow rate.
