DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Drawings
The applicant’s submitted drawings appear to be acceptable for examination purposes. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the drawings.
Information Disclosure Statement
As required by M.P.E.P. 609(c), the applicant's submission of the Information Disclosure Statement, dated 25 January 2024, is acknowledged by the examiner and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(c)(2), a copy of the PTOL-1449 initialed and dated by the examiner is attached to the instant Office action.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The terms “high-risk” and “high-cost” in claim 15 are relative terms which render the claim indefinite. The terms “high-risk” and “high-cost” are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 9-13, and 15-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kallonen (WO 2022/074300).
As per claim 1, Kallonen teaches a system comprising: memory hardware configured to store instructions; and processor hardware configured to execute the instructions [the system includes one or more data processing devices, which comprise one or more processors (processor hardware) and at least one connected memory storing instructions to be executed by the processor(s) (pg. 18, line 18 to pg. 19, line 15; fig. 1; etc.)], wherein the instructions include: loading a machine learning model, loading a training data set, loading baseline hyperparameters, configuring the machine learning model with the baseline hyperparameters [the system includes a data processing device which comprises one or more processors (processor hardware) and at least one connected memory storing instructions to be executed by the processor(s), which includes computerized models (CMs) and their parameters, as well as training data for the models (pg. 18, line 18 to pg. 19, line 15; fig. 1; etc.), and can also include stored CM performance metrics and setting (baseline) hyperparameters of the CMs (pg. 29, lines 23-32; pg. 39, lines 3-24; figs. 1 and 18; etc.); where the CM may include an ensemble of transformer networks or CNNs, etc. (pg. 3, lines 8-20; pg. 42, lines 23-29; etc.); so that the initial CM/hyperparameters are the machine learning model with baseline hyperparameters], providing the training data set as inputs to the machine learning model configured with the baseline hyperparameters to determine baseline performance metrics [the CMs can be trained using training sample sequences input to the model(s) (pg. 18, line 18 to pg. 19, line 15; pg. 21, line 16 to pg. 22, line 16; fig. 3; etc.), including setting (baseline) hyperparameters of the CMs (pg. 29, lines 23-32; pg. 39, lines 3-24; fig. 18; etc.)], determining whether the baseline performance metrics are above a threshold [training of a CM may include determining whether an expected performance (metric) of the CM has been achieved by comparing it to a threshold (pg. 40, lines 1-3; pg. 41, line 19 to pg. 42, line 6; fig. 18; etc.)], in response to determining that the baseline performance metrics are above the threshold, saving the baseline hyperparameters as optimal hyperparameters, configuring the machine learning model with the optimal hyperparameters [training of a CM may include determining whether an expected performance (metric) of the CM has been achieved by comparing it to a threshold, and finishing training when the performance exceeds the threshold, at which point the CM model and optimized hyperparameters are saved as ready for use (pg. 40, lines 1-3; pg. 41, line 19 to pg. 42, line 6; fig. 18; etc.); where the finished model with optimized hyperparameters is the machine learning model with optimal hyperparameters], loading input variables, providing the input variables as inputs to the machine learning model configured with the optimal hyperparameters to generate output variables [when the performance exceeds the threshold, the CM model and optimized hyperparameters are saved as ready for use (pg. 40, lines 1-3; pg. 41, line 19 to pg. 42, line 6; fig. 18; etc.) where, after the model is finished training, measured biosignals are sent to the trained CM as inputs (variables), and a prediction of patient conditions is provided as output (variables) (pg. 8, lines 3-11; see also: pg. 9, lines 10-16; pg. 21, lines 16-28; pg. 28, lines 1-21; figs. 6 and 18; etc.)], saving the output variables to a database [the outputs may be stored in a database with patient identifiers and time-domain sample sequences from patients (pg. 11, lines 17-28; see also: pg. 12, lines 5-14; pgs. 15-17; pg. 26, line 19 to pg. 27, line 7; figs. 1 and 5; etc.)], and generating a graphical user interface, wherein the graphical user interface is configured to access the output variables from the database and display the output variables to a user [the system may include a user interface device providing a user interface display (GUI) (pg. 17, line 26 to pg. 18, line 3) which can be used to display the conditions output by the CM (pg. 19, lines 16-27; etc.)].
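Examiner's Note: solely for orientation, the configure-evaluate-threshold flow recited in claim 1 and mapped to Kallonen above can be sketched as follows. Every name, the toy "model," and the evaluation function are hypothetical illustrations, drawn from neither the claim language nor Kallonen:

```python
# Illustrative sketch of the claimed baseline-hyperparameter flow.
# The "model" is a stand-in threshold classifier; all names are hypothetical.

def evaluate(model_params, training_data):
    """Stand-in performance metric: fraction of training examples the
    toy model labels correctly."""
    cut = model_params["decision_threshold"]
    correct = sum(1 for x, y in training_data if (x >= cut) == y)
    return correct / len(training_data)

def tune(training_data, baseline_params, metric_threshold):
    """If the baseline metric clears the threshold, the baseline
    hyperparameters are saved as the optimal hyperparameters."""
    metric = evaluate(baseline_params, training_data)
    if metric > metric_threshold:
        return dict(baseline_params)  # saved as optimal hyperparameters
    return None  # otherwise an adjust-and-retry path would apply

data = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
optimal = tune(data, {"decision_threshold": 0.5}, 0.9)
```

Here the baseline hyperparameter already clears the metric threshold, so it is retained unchanged.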
As per claim 2, Kallonen teaches wherein: the input variables include an identifier of an entity in a population [along with the time sequence data, patient identifiers and timestamps may be used as the time sequence samples received by the data processing system to use as inputs to the CMs (pg. 12, lines 1-13; see also: pg. 16, lines 5-15; pg. 27, lines 5-8; pg. 39, lines 25-32; etc.); where the patients are entities of a population]; the output variables include a score for the entity indicated by the identifier [the output of the CMs may include one or more scores or a composite score of multiple (ensemble) CNNs associated with the patient identifier (pg. 21, lines 19-25; pg. 43, lines 4-20; fig. 21; etc.)]; and the score indicates a likelihood of a feature of merit exceeding a threshold [the output of the CMs may include one or more scores or a composite score of multiple (ensemble) CNNs, which indicate classes corresponding to predicted conditions as well as a risk score (pg. 21, lines 19-25; pg. 43, lines 4-28; figs. 21-22; etc.) where the scores generated by the CMs provide a probability of a life-threatening condition or need for treatment, or a distribution of such probability values (pg. 20, line 19 to pg. 21, line 28; pg. 34, lines 5-12; etc.) which may be compared to a determined decision threshold for each condition (pg. 36, line 13 to pg. 37, line 2; pg. 42, lines 30-33; etc.); where the life-threatening condition or need for treatment is the feature of merit exceeding the decision threshold].
As per claim 3, Kallonen teaches wherein the instructions include: generating a plurality of scores for a plurality of entities in the population [patient identifiers and associated time sequence samples and model outputs may be stored for a number of patients (pg. 18, lines 4-12; see also: pg. 11, lines 17-28; pg. 12, lines 5-14; pgs. 15-17; pg. 26, line 19 to pg. 27, line 7; figs. 1 and 5; etc.)]; and clustering the plurality of scores into a plurality of clusters [additional models can include unsupervised learning, including clustering, where clustering is performed on the outputs of the prior (classification) models (pg. 44, line 13 to pg. 45, line 11; etc.)].
As per claim 9, Kallonen teaches wherein the instructions include, in response to determining that the baseline metrics are not above the threshold, adjusting the baseline hyperparameters [if the performance metric is not satisfactory (not above the threshold – see above) the method proceeds to permute the hyperparameters according to a defined permutation function and proceeds to the next iteration of training (pg. 40, lines 27-33; fig. 18; etc.)].
As per claim 10, Kallonen teaches wherein the instructions include configuring the machine learning model with the adjusted hyperparameters [if the performance metric is not satisfactory (not above the threshold – see above) the method proceeds to permute the hyperparameters according to a defined permutation function and proceeds to the next iteration of training (pg. 40, lines 27-33; fig. 18; etc.); where the next training iteration proceeds with the adjusted model hyperparameters (see, e.g., fig. 18)].
As per claim 11, Kallonen teaches wherein the instructions include providing the training data set as inputs to the machine learning model configured with the adjusted hyperparameters to determine updated performance metrics [if the performance metric is not satisfactory (not above the threshold – see above) the method proceeds to permute the hyperparameters according to a defined permutation function and proceeds to the next iteration of training (pg. 40, lines 27-33; fig. 18; etc.), which includes using training sample sequences input to the model(s) (pg. 18, line 18 to pg. 19, line 15; pg. 21, line 16 to pg. 22, line 16; fig. 3; etc.)].
As per claim 12, Kallonen teaches wherein the instructions include determining whether the updated performance metrics are more optimal than the baseline performance metrics [each iteration includes determining whether the updated performance metric is satisfactory and, if the performance metric is not satisfactory (not above the threshold – see above) the method proceeds to permute the hyperparameters according to a defined permutation function and proceeds to the next iteration of training (pg. 40, lines 27-33; fig. 18; etc.); which is determining whether the updated performance metrics are an improvement (see, e.g., fig. 18)].
As per claim 13, Kallonen teaches wherein the instructions include, in response to determining that the updated performance metrics are more optimal than the baseline performance metrics, saving the adjusted hyperparameters as the baseline hyperparameters [each iteration includes determining whether the updated performance metric is satisfactory and, if the performance metric is not satisfactory (not above the threshold – see above) the method proceeds to permute the hyperparameters according to a defined permutation function and proceeds to the next iteration of training (pg. 40, lines 27-33; fig. 18; etc.); where the permuted hyperparameters are saved with the model(s) (pg. 29, lines 23-32; pg. 39, lines 3-24; figs. 1 and 18; etc.)].
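Examiner's Note: claims 9-13 together recite an adjust-and-retest loop over the hyperparameters. Solely for orientation, that loop can be sketched as follows, with a fixed-step nudge standing in for Kallonen's permutation function; all names, the step rule, and the toy metric are hypothetical and drawn from neither the claims nor the reference:

```python
def permute(params, step=0.05):
    """Hypothetical permutation function: nudge each hyperparameter."""
    return {k: v + step for k, v in params.items()}

def tune_iteratively(evaluate, baseline, threshold, max_iters=50):
    """Keep adjusting until the performance metric clears the threshold;
    adjusted hyperparameters replace the baseline only when the updated
    metric improves on it."""
    best_params, best_metric = dict(baseline), evaluate(baseline)
    for _ in range(max_iters):
        if best_metric > threshold:       # expected performance achieved
            return best_params            # retained as the optimal hyperparameters
        candidate = permute(best_params)  # adjust the baseline hyperparameters
        metric = evaluate(candidate)      # re-evaluate with the adjustment
        if metric > best_metric:          # updated metrics are "more optimal"
            best_params, best_metric = candidate, metric  # save as new baseline
    return best_params

# Toy metric that peaks at lr == 0.3; accepted steps climb toward it.
result = tune_iteratively(lambda p: 1 - abs(p["lr"] - 0.3),
                          {"lr": 0.0}, threshold=0.95)
```

The loop terminates once the metric exceeds the 0.95 threshold, which here happens within 0.05 of the optimum.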
As per claim 15, Kallonen teaches wherein the output variables include at least one of (i) a per-patient risk score indicating a risk of a patient having a high-risk episode or a high-cost treatment, (ii) a patient identifier, (iii) a physician identifier, (iv) a physician state, and (v) a patient state [the output of the CMs may include one or more scores or a composite score of multiple (ensemble) CNNs, which indicate classes corresponding to predicted conditions as well as a risk score (pg. 21, lines 19-25; pg. 43, lines 4-28; figs. 21-22; etc.) where the scores generated by the CMs provide a probability of a life-threatening condition or distribution of probability values (pg. 20, line 19 to pg. 21, line 28; pg. 34, lines 5-12; etc.); which includes at least per-patient risk scores indicating high-risk episodes/high-cost treatment, patient identifiers, and patient state].
As per claim 16, Kallonen teaches wherein the input variables are stored on one or more storage devices [the system includes a data processing device which comprises one or more processors (processor hardware) and at least one connected memory storing instructions to be executed by the processor(s), which includes computerized models (CMs) and their parameters, as well as training data for the models (pg. 18, line 18 to pg. 19, line 15; fig. 1; etc.); where the memory is the storage device storing the input variables (training data including time sequence data, patient identifiers, etc. – see above)].
As per claim 17, Kallonen teaches wherein the processor hardware is configured to access the one or more storage devices via one or more networks [the processor(s) may access the memory via a wired or wireless network(s) (pg. 10, lines 5-7; etc.)].
As per claim 18, see the rejection of claim 1, above.
As per claim 19, see the rejection of claim 9, above.
As per claim 20, see the rejection of claim 10, above.
As per claim 21, see the rejection of claim 11, above.
As per claim 22, see the rejection of claim 12, above.
As per claim 23, see the rejection of claim 13, above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kallonen (WO 2022/074300) in view of Rivas (US 2021/0065846).
As per claim 4, Kallonen teaches the system of claim 3, as described above.
While Kallonen teaches clustering the plurality of output scores (see above), it has not been relied upon for teaching wherein the plurality of clusters is three clusters.
Rivas teaches wherein the plurality of clusters is three clusters [a risk score model, including multiple hyperparameters, is used to produce risk scores for patients (paras. 0033, 0056, etc.), which are clustered using k-means clustering to cluster high risk profiles, which clusters can be associated with 3 types of genetic subtypes (paras. 0033, 0044; figs. 3 and 15; etc.); so, this includes 3 clusters (type 1, type 2, and type 3 – see, e.g., fig. 3)].
Kallonen and Rivas are analogous art, as they are within the same field of endeavor, namely predicting individual/patient risk scores using machine learning models.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to cluster the risk scores into different risk profiles, including three types, as taught by Rivas, when performing the clustering on the predicted scores in the system taught by Kallonen.
Rivas provides motivation as [the determined clusters can be used to separate each subtype into relevant groupings (para. 0033, etc.) and used to assess which components drive risk (para. 0127, etc.)].
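Examiner's Note: solely for orientation, the k-means clustering that Rivas applies to risk scores can be sketched on one-dimensional data. The scores and the plain-Python implementation below are hypothetical illustrations; Rivas's actual data and tooling are not reproduced:

```python
def kmeans_1d(scores, k=3, iters=20):
    """Toy one-dimensional k-means: partition scores into k clusters.
    Initial centroids are spread evenly across the observed range."""
    lo, hi = min(scores), max(scores)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            # Assign each score to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(s - centroids[i]))
            groups[nearest].append(s)
        # Move each centroid to the mean of its assigned scores.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

# Hypothetical per-patient risk scores separating into three clusters.
scores = [3, 5, 4, 48, 52, 50, 95, 97, 99]
low, mid, high = kmeans_1d(scores)
```

With k=3 the returned groups correspond to low-, mid-, and high-risk profiles, analogous to the three subtypes in Rivas's fig. 3.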
As per claim 5, Kallonen/Rivas teaches wherein: the plurality of clusters includes a particular cluster associated with a greatest risk [the clusters can include identifying outliers, which includes individuals with the greatest risk (Rivas: paras. 0033, 0043-44, etc.)]; and the instructions include adapting the graphical user interface in response to the score being assigned to the particular cluster [the clusters can include identifying outliers, which includes individuals with the greatest risk (Rivas: paras. 0033, 0043-44, etc.); and the system may include a user interface device providing a user interface display (GUI) (Kallonen: pg. 17, line 26 to pg. 18, line 3) which can be used to display the conditions output by the CM (Kallonen: pg. 19, lines 16-27; etc.) as well as displaying increased risk (Kallonen: pg. 34, lines 5-12; etc.); which display would thus be adapted to provide the outlier risk cluster assignment (increased risk)].
As per claim 6, Kallonen/Rivas teaches wherein the score is a value between zero and one hundred inclusive [the scores generated by the CMs provide a probability of a life-threatening condition or distribution of probability values (Kallonen: pg. 20, line 19 to pg. 21, line 28; pg. 34, lines 5-12; etc.) where risk can be measured as a percentage (Rivas: para. 0039, etc.); and where a probability expressed as a percentage is a score between zero and one hundred, inclusive].
As per claim 7, Kallonen/Rivas teaches wherein: the population includes entities that consume services; and the feature of merit is a measure of service consumption of the entity [the output of the CMs may include one or more scores or a composite score of multiple (ensemble) CNNs, which indicate classes corresponding to predicted conditions as well as a risk score (Kallonen: pg. 21, lines 19-25; pg. 43, lines 4-28; figs. 21-22; etc.) where the scores generated by the CMs provide a probability of a life-threatening condition or need for treatment, or a distribution of such probability values (Kallonen: pg. 20, line 19 to pg. 21, line 28; pg. 34, lines 5-12; etc.) which may be compared to a determined decision threshold for each condition (Kallonen: pg. 36, line 13 to pg. 37, line 2; pg. 42, lines 30-33; etc.); where the need for treatment is the feature of merit that is a measure of service consumption (treatment)].
Claims 8 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Kallonen (WO 2022/074300), in view of Rivas (US 2021/0065846), and further in view of DeLong et al. (Comparing Risk-Adjustment Methods for Provider Profiling, 1997, pgs. 2645-2664).
As per claim 8, Kallonen/Rivas teaches the system of claim 7, as described above.
While Kallonen/Rivas teaches using population data that includes healthcare service data (see above), it has not been relied upon for teaching wherein: the population includes entities that coordinate services; and the feature of merit is an amount of services advised by the entity.
DeLong teaches wherein: the population includes entities that coordinate services; and the feature of merit is an amount of services advised by the entity [a risk assessment prediction can be made for providers to compile provider profiles (pg. 2645, Summary) using a risk-adjustment prediction model (pg. 2647, sections 2.2 and 2.4; etc.) which can include analyzing observed patient data (pg. 2648, section 2.4.1; etc.); where the observed patient data is the feature of merit, including an amount of services advised (i.e., patients seen, etc.), which can be provided as a risk score in the system of Kallonen/Rivas above].
Kallonen/Rivas and DeLong are analogous art, as they are within the same field of endeavor, namely risk assessment/prediction in healthcare services using machine learning models.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include healthcare provider risk assessment prediction, as taught by DeLong, in the patient risk assessment predictions in the system taught by Kallonen/Rivas.
DeLong provides motivation as [provider risk profiling is a useful tool for measuring quality and value of health care (pg. 2645, Summary, etc.)].
As per claim 25, Kallonen/Rivas/DeLong teaches wherein: the output variables include (i) per-provider risk scores and (ii) clusters for the per-provider risk scores; and each cluster indicates a risk category [additional models can include unsupervised learning, including clustering, where clustering is performed on the outputs of the prior (classification) models (Kallonen: pg. 44, line 13 to pg. 45, line 11; etc.); where the classifiers provide risk scores (Kallonen: pg. 21, lines 19-25; pg. 43, lines 4-28; figs. 21-22; etc.) which are clustered using k-means clustering to cluster high risk profiles, which clusters can be associated with different types (Rivas: paras. 0033, 0044; figs. 3 and 15; etc.), and which can include per-provider risk assessments (DeLong: pg. 2647, sections 2.2 and 2.4; etc.)].
Examiner’s Note: the reasoning and motivation for the combination is provided in the rejection of claim 8, above.
Claims 14 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kallonen (WO 2022/074300) in view of Zheng et al. (Time-to-event prediction analysis of patients with chronic heart failure comorbid with atrial fibrillation: a LightGBM model, Aug 2021, pgs. 1-12).
As per claim 14, Kallonen teaches the system of claim 1, as described above.
While Kallonen teaches using a machine learning model for making the patient risk predictions (see above), it has not been relied upon for teaching wherein the machine learning model is a light gradient-boosting machine (LightGBM) regressor model.
Zheng teaches wherein the machine learning model is a light gradient-boosting machine (LightGBM) regressor model [a light gradient boosting machine (LightGBM) model using logistic regression is used to make patient risk assessment predictions (pg. 1, Abstract; etc.)].
Kallonen and Zheng are analogous art, as they are within the same field of endeavor, namely patient risk assessment prediction using machine learning models.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to utilize a LightGBM model to make the patient risk assessment, as taught by Zheng, as the machine learning model(s) for predicting patient risk scores in the system taught by Kallonen.
Zheng provides motivation as [Risk stratification based on the LightGBM model showed better discriminative ability than traditional model in predicting 1- to 3-year all-cause mortality of patients with CHF comorbid with AF. Individual patients’ prognosis could also be obtained, and the subgroup of patients with a higher risk of mortality could be identified. It can help clinicians identify and manage high- and low-risk patients and carry out more targeted intervention measures to realize precision medicine and the optimal allocation of health care resources (pg. 1, Abstract)].
As per claim 24, Kallonen/Zheng teaches wherein the machine learning model is a light gradient-boosting machine (LightGBM) regressor model or a LightGBM classifier model [a light gradient boosting machine (LightGBM) model using logistic regression is used to make patient risk assessment predictions (Zheng: pg. 1, Abstract; etc.)].
Examiner’s Note: the reasoning and motivation for the combination is provided in the rejection of claim 14, above.
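Examiner's Note: LightGBM itself is a compiled gradient-boosting library; solely for orientation, the underlying boosted-regression idea can be sketched with depth-1 trees (decision stumps) fit to residuals. This sketch omits everything that distinguishes LightGBM (histogram binning, leaf-wise tree growth, etc.), and all names and data below are hypothetical:

```python
def fit_stump(xs, residuals):
    """Best single-split (depth-1) regression tree on 1-D inputs,
    chosen by summed squared error over candidate splits."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def predict(x, base, stumps, lr=0.1):
    """Boosted prediction: base value plus shrunken stump outputs."""
    return base + lr * sum(s(x) for s in stumps)

def boost(xs, ys, rounds=50, lr=0.1):
    """Gradient boosting for squared error: each stump fits the current
    residuals, and predictions accumulate with shrinkage lr."""
    base = sum(ys) / len(ys)
    stumps = []
    for _ in range(rounds):
        residuals = [y - predict(x, base, stumps, lr) for x, y in zip(xs, ys)]
        stumps.append(fit_stump(xs, residuals))
    return base, stumps

# Hypothetical data: predictions should approach 1 for x <= 3 and 9 for x > 3.
xs, ys = [1, 2, 3, 4, 5, 6], [1, 1, 1, 9, 9, 9]
base, stumps = boost(xs, ys)
```

The shrinkage factor means each round closes only part of the remaining gap, which is the mechanism Zheng's LightGBM model also relies on, at far larger scale.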
Conclusion
The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-25 are rejected.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Drake (US 2021/0174958) – discloses a system including hyperparameter optimization for a ML classifier for analyzing blood-based diagnostic tests.
Hancock et al. (Leveraging LightGBM for Categorical Big Data, Aug 2021, pgs. 149-154) – discloses using LightGBM models for assessments on multiple kinds of healthcare data, including provider fraud detection/assessment.
Huang et al. (Deep significance clustering: a novel approach for identifying risk-stratified and predictive patient subgroups, Sept 2021, pgs. 2641-2653) – discloses deep significance clustering (DICE) for self-supervised training and prediction of risk-based patient groupings.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571)272-9769. The examiner can normally be reached M-F 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE GIROUX/Primary Examiner, Art Unit 2128