DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8, 12-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Guimaraes et al. ("Risk prediction with office and ambulatory blood pressure using artificial intelligence", Applicant's submitted IDS filed 12/21/2023), hereinafter "Guimaraes", in view of Chang et al. (US 2023/0157533 A1), hereinafter "Chang".
As per claim 1, Guimaraes teaches a method of establishing a prediction model for predicting probability of a subject experiencing white coat effect, to be implemented by a computing device, "the computing device storing a plurality of training data sets that are respectively related to a plurality of samples and a set of target hyperparameters that is related to a target machine learning algorithm, each of the training data sets including a plurality of characteristic parameters respectively related to a plurality of characteristic conditions of the corresponding one of the samples, and a label that indicates whether the corresponding one of the samples experiences white coat effect" at page 2, "Background", and pages 5-6, "Data source"; (Guimaraes teaches a machine learning model for prediction of mortality risk using office and ambulatory blood pressure, and assesses the possibility of predicting ABP phenotypes (i.e., white-coat, ambulatory and masked hypertension) utilizing clinical variables.
These variables ranged from physical characteristics such as age, height, or weight, to clinical history, medication and mortality data.) "the characteristic parameters including a plurality of physiological parameters that are respectively related to a plurality of physiological conditions of the corresponding one of the samples, and a plurality of drug-usage indicators that respectively indicate usage conditions respectively of a plurality of specific drugs by the corresponding one of the samples" at pages 21-22, Table 1; (Guimaraes teaches that the characteristic parameters include a plurality of physiological parameters and a plurality of drug-usage indicators, as shown at Table 1.) the method comprising steps of: "obtaining, by using the target machine learning algorithm and a model-explanation tool based on the training data sets and the set of target hyperparameters, impact values respectively for the characteristic conditions, each of the impact values being related to impact of the characteristic parameters that are respectively included in the training data sets and that are related to the corresponding one of the characteristic conditions on an output of a model that is obtained using the target machine learning algorithm" at pages 8-9, "k-fold cross-validation", "Feature selection"; (Guimaraes teaches performing 5-fold stratified cross-validation to train, tune and evaluate all the models. The dataset was randomly split into 5 groups of equal size while maintaining the ratio of samples from each class in every group. The classifier was then trained and tested 5 times. The test set was classified using the best performing hyperparameter combination, as assessed by grid search evaluated with the tuning set.
The cross-validation was repeated 20 times with different splits.) "for each of the training data sets, selecting one of the characteristic parameters that is related to one of the characteristic conditions corresponding to a greatest one of the impact values from the training data set as a training data subset" at pages 8-9, "k-fold cross-validation", "Feature selection"; (Guimaraes teaches performing 5-fold stratified cross-validation to train, tune and evaluate all the models. The dataset was randomly split into 5 groups of equal size while maintaining the ratio of samples from each class in every group. The test set was classified using the best performing hyperparameter (i.e., "greatest one of the impact values") combination, as assessed by grid search evaluated with the tuning set.) "obtaining, based on the training data subsets and the set of target hyperparameters, a candidate model by using the target machine learning algorithm, and an evaluation value related to the candidate model by using a first validation method; for each of the training data subsets, supplementing the training data subset with one of the characteristic parameters that is related to one of the characteristic conditions corresponding to a greatest one of the impact values among the characteristic parameters that are not included in the training data subset" at pages 8-9, "k-fold cross-validation", "Feature selection"; (Guimaraes teaches that forward stepwise selection was applied to each machine learning approach to select the best subset of features. First, each variable was evaluated independently using the AUC, to select which one can better discriminate the data. Thereafter, the addition of all other variables was recursively tested, adding the combination that performed best.) Guimaraes does not explicitly teach the detailed implementation of each step as claimed.
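The stratified k-fold cross-validation and forward stepwise feature selection described above can be sketched as follows. This is an illustrative sketch only: the synthetic data, the nearest-centroid classifier, and the accuracy-based stopping rule are stand-ins chosen for brevity, not taken from Guimaraes (who used AUC and grid-searched hyperparameters over clinical data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 100 samples, 4 candidate features, binary label.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=100) > 0).astype(int)

def stratified_folds(y, k=5):
    """Split indices into k groups, keeping the class ratio in every group.

    A fixed seed keeps the folds identical across calls, so feature scores
    are comparable between selection steps.
    """
    fold_rng = np.random.default_rng(1)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = fold_rng.permutation(np.flatnonzero(y == cls))
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    return [np.array(f) for f in folds]

def cv_accuracy(X, y, k=5):
    """Mean accuracy of a nearest-centroid classifier over stratified k-fold CV."""
    folds = stratified_folds(y, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        scores.append((pred == y[test]).mean())
    return float(np.mean(scores))

# Forward stepwise selection: start empty, repeatedly add the candidate
# feature whose addition yields the best cross-validated score, and stop
# when no remaining feature improves it.
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    best = max(remaining, key=lambda f: cv_accuracy(X[:, selected + [f]], y))
    if selected and cv_accuracy(X[:, selected + [best]], y) <= cv_accuracy(X[:, selected], y):
        break
    selected.append(best)
    remaining.remove(best)

print("selected features:", selected)
```

In Guimaraes the inner loop would additionally tune hyperparameters by grid search on a tuning split before scoring each feature subset; that step is omitted here for brevity.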
However, Chang teaches a similar method of establishing a prediction model including the steps of: "obtaining, by using the target machine learning algorithm and a model-explanation tool based on the training data sets and the set of target hyperparameters, impact values respectively for the characteristic conditions, each of the impact values being related to impact of the characteristic parameters that are respectively included in the training data sets and that are related to the corresponding one of the characteristic conditions on an output of a model that is obtained using the target machine learning algorithm" at [0084]-[0093], [0177]-[0178], [0197]-[0201] and Table 1; (Chang teaches that each of the input variables contributes to the prediction to a different extent, is stored in the respective model, and is associated with a respective 'importance' value, as resulting from a calculation adopting a given importance metric. The importance value is implemented as a Shapley value. Chang teaches at step 418 the step of performing feature selection to select the most important subject features and generating a reduced training set, which is used to train the machine learning algorithm at step 412.) "for each of the training data sets, selecting one of the characteristic parameters that is related to one of the characteristic conditions corresponding to a greatest one of the impact values from the training data set as a training data subset" at [0090], [0170]-[0178] and Fig. 4; (Chang teaches that model hyperparameters can be optimized using an appropriate k-fold cross-validation approach, e.g., splitting the data into K=4 parts and training 4 models, each time on ¾ of the data and evaluating the performance on the remaining ¼.
Chang teaches at step 418 the step of performing feature selection to select the most important subject features and generating a reduced training set, which is used to train the machine learning algorithm at step 412.) "obtaining, based on the training data subsets and the set of target hyperparameters, a candidate model by using the target machine learning algorithm, and an evaluation value related to the candidate model by using a first validation method; for each of the training data subsets, supplementing the training data subset with one of the characteristic parameters that is related to one of the characteristic conditions corresponding to a greatest one of the impact values among the characteristic parameters that are not included in the training data subset" at [0173]-[0181] and Fig. 4; (Chang teaches that the training set is used to train the machine learning algorithm and generate machine learning models based on the training. The validation set is used to evaluate the generated machine learning models as well as to update machine learning model hyperparameters for better performance. Cross-validation (i.e., the "first validation method") is used, i.e., multiple training and validation data sets may be randomly created, in which each record is in a validation set once.
The test set is used to assess how well the trained machine learning models perform on unseen data, and is also used to estimate machine learning model performance when applied to new sets of subject data (e.g., subject data that is not included in the process data set).) "obtaining, based on the training data subsets thus supplemented and the set of target hyperparameters, another candidate model by using the target machine learning algorithm, and another evaluation value related to said another candidate model by using the first validation method; repeating the step of supplementing the training data subset, and the step of obtaining another candidate model and another evaluation value related to said another candidate model based on the training data subsets thus supplemented and the set of target hyperparameters, until the training data subsets, each being supplemented to include all of the characteristic parameters, have been used in the step of obtaining another candidate model and another evaluation value" at [0174]-[0179] and Fig. 4; (Chang teaches iteratively performing the actions of blocks 412-418, determining performance metrics at block 416 based on machine learning models that are trained/validated using the reduced training set. Each iteration generates a new machine learning model based on a new reduced training set comprising a new subset of input features.) "selecting, from among the candidate models that are obtained in the step of obtaining a candidate model and the step of obtaining another candidate model, one of the candidate models as the prediction model based on the evaluation values respectively related to the candidate models" at [0180]-[0181] and Fig. 4. (Chang teaches that at block 420, the computing system selects a machine learning model based on the one or more performance metrics determined for the various models generated during recursive feature elimination.
In this case, the computing system selects the machine learning model (of all of the machine learning models generated for each of the machine learning algorithms used) that has the highest performance, in consideration of the primary objective on the validation set or based on cross-validation.) Thus, it would have been obvious to one of ordinary skill in the art to combine Chang with Guimaraes's teaching in order to iteratively train the machine learning algorithm using a subset of the highest-impact features and select a best model to perform the prediction based on the evaluation value of the model at each iteration, as suggested by Chang.

As per claim 2, Guimaraes and Chang teach the method as claimed in claim 1 discussed above. Chang also teaches: "wherein the model-explanation tool is SHapley Additive exPlanations (SHAP), and each of the impact values is a Shapley value" at [0092].

As per claim 3, Guimaraes and Chang teach the method as claimed in claim 1 discussed above. Guimaraes also teaches steps, for each of the training data sets, of: "determining whether the training data set is missing a physiological parameter related to one of the physiological conditions; when it is determined that the training data set is missing a physiological parameter, filling the training data set with a predetermined parameter related to the one of the physiological conditions; and performing standardization on each of the physiological parameters" at page 25, Fig. 1.

As per claim 4, Guimaraes and Chang teach the method as claimed in claim 1 discussed above. Chang also teaches: "wherein the step of selecting one of the candidate models as the prediction model is to select one of the candidate models that corresponds to a greatest one of the evaluation values as the prediction model" at [0180]-[0181] and Fig. 4.

As per claim 5, Guimaraes and Chang teach the method as claimed in claim 1 discussed above.
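The Shapley-value "impact values" recited above can be illustrated exactly for a toy model by averaging each feature's marginal contribution over all orderings in which features are revealed. This is a generic sketch of Shapley attribution (the principle behind SHAP), not Chang's implementation; the model, weights, and baseline below are hypothetical.

```python
from itertools import permutations
import numpy as np

# Toy linear model over three features (weights are illustrative only).
w = np.array([2.0, -1.0, 0.5])
baseline = np.array([0.0, 1.0, 2.0])   # "average" input used when a feature is absent
x = np.array([1.0, 3.0, -2.0])         # the instance being explained

def f(z):
    return float(w @ z)

def shapley_values(x, baseline, f):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings in which features are revealed one at a time."""
    n = len(x)
    phi = np.zeros(n)
    perms = list(permutations(range(n)))
    for order in perms:
        z = baseline.copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]            # reveal feature i
            cur = f(z)
            phi[i] += cur - prev   # marginal contribution in this ordering
            prev = cur
    return phi / len(perms)

phi = shapley_values(x, baseline, f)
# Efficiency property: the attributions sum to f(x) - f(baseline).
assert np.isclose(phi.sum(), f(x) - f(baseline))
print(phi)  # for a linear model this equals w * (x - baseline)
```

The exact enumeration above is exponential in the number of features; practical SHAP tooling approximates these values, but a "greatest one of the impact values" can then be read directly off the resulting vector.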
Chang also teaches: "storing a plurality of sets of candidate hyperparameters that are respectively related to a plurality of candidate machine learning algorithms, the method further comprising steps, prior to the step of obtaining impact values, of: for each of the sets of candidate hyperparameters, obtaining, based on the training data sets and the set of candidate hyperparameters, a preliminary model by using the corresponding one of the candidate machine learning algorithms, and an evaluation value related to the preliminary model by using a second validation method; and selecting, from among the sets of candidate hyperparameters, one of the sets of candidate hyperparameters that corresponds to a greatest one of the evaluation values that are obtained in the step of obtaining a preliminary model and an evaluation value as the set of target hyperparameters" at [0173]-[0181] and Fig. 4.

As per claim 8, Guimaraes and Chang teach the method as claimed in claim 1 discussed above. Chang also teaches: "receiving a test data set that is related to the subject, the test data set including at least one characteristic parameter that is related to one of the physiological conditions and the usage conditions of the subject; and feeding the test data set into the prediction model to obtain the probability of the subject experiencing white coat effect" at [0181].

Claims 12-16 and 19 recite similar limitations as claims 1-5 and 8, and are therefore rejected for the same reasons.

Allowable Subject Matter

Claims 6-7, 9-11, and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Examiner's Note: The Examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHANH B PHAM, whose telephone number is (571)272-4116. The examiner can normally be reached Monday - Friday, 8am to 4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached at (571)272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHANH B PHAM/
Primary Examiner, Art Unit 2166
February 26, 2026