DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made non-final. This action is in response to the claims filed on June 29, 2023. Claims 1-6, 8, 9, 11-14, 16, 28, 37-40, 91, and 93 are pending in the case and have been examined. Claims 1-6, 8, 9, 11-14, 16, 28, 37-40, 91, and 93 are rejected.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 2020116219940, filed on June 29, 2023. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 202011617342X, filed on June 29, 2023. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Response to Amendment

Acknowledgment is made of Applicant’s amendments to the claims and Specification filed June 29, 2023, which have been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11, 12, 16, 28, 37-39, 91, and 93 are rejected under 35 U.S.C. 103 as being unpatentable over Ong et al. (US 20220083906 A1, referred to as Ong), in view of Zhang et al. (“Hybrid Federated Learning: Algorithms and Implementation”, referred to as Zhang), in view of Zhu et al. (US 20170193066 A1, referred to as Zhu), in view of Cheng et al. (“SecureBoost: A Lossless Federated Learning Framework”, referred to as Cheng).

Regarding claim 1, Ong teaches a method for training a federated learning model, performed by a server, the method comprising (FIG. 1, [0029-0033]: Describes a “tree boosting aggregator” in a “federated learning environment 100 for training a machine learning model using XGBoost” and states that the aggregator is configured “to train the machine learning model 120 using the XGBoost algorithm”.
The tree boosting aggregator transmits model information to the parties and receives model updates from the parties during iterative training. The tree boosting aggregator is the central coordinating entity in the federated learning system: it performs the training operations, communicates with the participating parties, and controls the iterative model-building process, corresponding to a server.) Although Ong teaches a method for training a federated learning model, performed by a server, the method comprising, it does not teach obtaining a target split mode corresponding to the training node. Zhang teaches a split mode (Abstract: Teaches federated learning being categorized into horizontal, vertical, and hybrid settings, and that both horizontal and vertical are special cases of hybrid federated learning.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Ong’s federated boosting-tree training with Zhang’s hybrid federated-learning environment. Doing so would have enabled the system to maintain data locality and keep labels at clients while improving communication efficiency in mixed-distribution federated environments. Although Zhang teaches a split mode, it does not teach obtaining a target split mode corresponding to a training node. Zhu teaches obtaining a target split mode corresponding to a training node (Zhu [0035-0046]: Describes that after selecting one model from among multiple computer models, a “model identifier that identifies the selected computer model is stored”. This corresponds to associating selected machine learning processing with identifying information indicating which selected option applies.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined Ong’s federated boosting-tree training with Zhang’s hybrid federated-learning environment and Zhu’s identifier tagging.
Doing so would have improved compatibility with hybrid federated learning operations while preserving coordination of node-level split processing across parties. Ong in view of Zhang in view of Zhu further teaches, in response to determining that the training node satisfies a preset splitting condition, in which the training node is a node of one boosting tree among a plurality of boosting trees (Ong [0005-0007]: Describes an ensemble of weak prediction models including decision trees, and rebuilding the machine learning model by adding the split candidate to a decision tree; [0029-0035]: Describes that the tree boosting aggregator determines a split candidate in the decision tree and rebuilds the machine learning model by adding the split candidate to a decision tree. It also teaches that after each training iteration “a split candidate can be added to the decision tree where the previous tree had the largest errors or residuals” and that “leaves that contained errors can then be split to grow the decision tree”.); obtaining a target split mode corresponding to a training node in response to determining that the training node satisfies a preset splitting condition, in which the training node is a node of one boosting tree among a plurality of boosting trees (Ong [0050-0053]: Describes that a split finding component determines split candidates for the machine learning model based on aggregated statistics derived from histograms provided by participating parties. The component evaluates candidate splits and identifies the best split candidate based on aggregated statistics. The machine learning model’s decision tree is rebuilt by adding split candidates to leaves with the highest gain, and the leaf with the highest gain is selected for splitting, corresponding to growing the decision tree.)
; Although Ong in view of Zhang in view of Zhu teaches obtaining a target split mode corresponding to a training node in response to determining that the training node satisfies a preset splitting condition, in which the training node is a node of one boosting tree among a plurality of boosting trees, it does not teach notifying a client to perform, based on the target split mode, node splitting. Cheng teaches notifying a client to perform, based on the target split mode, node splitting (Page 5, Federated Learning with SecureBoost, and Algorithm 2: Describes that after the active party obtains the global optimal split, it returns the selected split information, including the feature identifier and threshold identifier, to the corresponding passive party. The passive party then determines the selected attribute’s value according to the received split information and partitions the current instance space according to the selected attribute’s value. Thereby, the active party notifies the client/passive party to perform node splitting based on the selected split information.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined the federated boosting-tree training system of Ong in view of Zhang in view of Zhu with Cheng’s communication architecture. Doing so would have allowed the system to transmit selected split information to a participating client, so that the client can perform the corresponding node partitioning on its own data, improving compatibility with federated learning architectures and preserving distributed data locality.
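For illustration, the split-notification exchange that the cited Cheng passage describes can be sketched as follows. This is a minimal plaintext sketch under assumed names (PassiveParty, notify_split, the dictionary data layout are all hypothetical); the actual SecureBoost protocol additionally encrypts the exchanged statistics.

```python
# Hypothetical sketch of the active-party -> passive-party split notification.
# Names and data layout are illustrative only, not the SecureBoost implementation.

class PassiveParty:
    """Holds local feature data; performs the split it is asked to perform."""
    def __init__(self, features):
        # features: {feature_id: {instance_id: value}}
        self.features = features

    def split_instance_space(self, instance_ids, feature_id, threshold):
        """Partition the current instance space on the named feature/threshold."""
        values = self.features[feature_id]
        # The left-node instance space is returned to the active party.
        return [i for i in instance_ids if values[i] <= threshold]


def notify_split(instance_ids, party, feature_id, threshold):
    """Active party sends the selected split info and receives the left partition."""
    left = party.split_instance_space(instance_ids, feature_id, threshold)
    right = [i for i in instance_ids if i not in left]
    return left, right


party = PassiveParty({"f1": {0: 0.2, 1: 0.9, 2: 0.4, 3: 0.7}})
left, right = notify_split([0, 1, 2, 3], party, "f1", 0.5)
# left holds instances whose f1 value is at most the threshold; right holds the rest.
```

The point of the pattern is that only the passive party ever sees the raw feature values; the active party learns only the resulting instance partition.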
Ong in view of Zhang in view of Zhu in view of Cheng teaches performing a next round of training by taking a left subtree node generated by performing the node splitting as a new training node (Cheng, Page 5, Federated Learning with SecureBoost, Federated Inference based on the Learned Model, Algorithm 2 and Figure 3: Describes that, after the passive party partitions the current instance space and returns [id, IL], the active party splits the current node according to IL and associates the current node accordingly. Traversal proceeds from the root to the left child, node 1, and continues node by node until a leaf is reached, which corresponds to using a generated left child/left subtree node as the next node in the tree process.). Ong in view of Zhang in view of Zhu in view of Cheng teaches until an updated training node does not satisfy the preset splitting condition (Ong [0052-0053]: Describes that if the stopping criterion is not achieved, the training process returns and repeats recursively, and if the stopping criterion is achieved, the training stops. It identifies example stopping criteria such as decision-tree depth, accuracy, or iterations, corresponding to continuing the next round of training until the updated node/model state no longer satisfies the splitting/training condition.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined the recursive federated boosting-tree training process of Ong in view of Zhang in view of Zhu with Cheng’s node-partitioning and child-node progression. Doing so would allow the system to improve training efficiency and enable orderly tree growth in a distributed learning environment.
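The left-child-first continuation described in the passage above can be sketched as a depth-first walk: after a node splits, the generated left child becomes the new training node, and the remaining non-leaf nodes are processed afterwards. All names and the splitting rule below are hypothetical placeholders, not the claimed method.

```python
# Hypothetical sketch: left-first recursive node processing with a preset
# splitting condition (here, a minimum node size) standing in for the real one.

def grow(node_ids, max_size):
    """Return the order in which nodes are taken as the training node."""
    order, stack = [], [node_ids]
    while stack:
        node = stack.pop()
        order.append(node)
        if len(node) <= max_size:       # preset splitting condition not satisfied
            continue                    # node becomes a leaf; no further split
        mid = len(node) // 2
        stack.append(node[mid:])        # right child: another non-leaf node, later
        stack.append(node[:mid])        # left child: the new training node, next
    return order

order = grow([1, 2, 3, 4], max_size=1)
```

Pushing the left child last makes it the next node popped, matching the "left subtree node as a new training node" progression.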
Ong in view of Zhang in view of Zhu in view of Cheng further teaches performing a next round of training by taking another non-leaf node of the boosting tree as a new training node (Ong [0023], [0026], [0031], and [0034]: Describes that a decision tree is built by splitting a source set into successor children and that the process is repeated on each derived subset in a recursive manner. The tree boosting aggregator conducts training in an iterative manner, determines new split candidates, and rebuilds the machine learning model by adding additional splits. The boosted decision tree can be grown recursively, and leaves containing errors can be split to grow the decision tree. This corresponds to continuing training by taking additional nodes of the boosting tree, including another non-leaf node, as a new training node.); and stopping training and generating a target federated learning model in response to determining that a node dataset of the plurality of boosting trees is empty (Ong [0052-0053]: Describes that the tree boosting aggregator rebuilds the machine learning model using split candidates to grow the decision tree. The rebuilt machine learning model is analyzed to determine whether a stopping criterion has been achieved, and if the stopping criterion is achieved, the process is complete and the training process stops, corresponding to stopping training and generating a trained federated learning model once further node processing is no longer required.).
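For context, the iterative boosting process that Ong describes (each round adds a weak learner where the previous model had the largest residuals, until a stopping criterion is met) can be sketched in plain, non-federated form. This is an illustrative sketch only: the stumps, the fixed threshold, and the tree-count stopping criterion are assumptions, not Ong's disclosed system.

```python
# Hypothetical sketch of gradient boosting on residuals with a simple
# stopping criterion (a fixed number of trees). Each "tree" is a depth-1
# stump that predicts one constant on each side of a fixed threshold.

def fit_boosted(xs, ys, n_trees, threshold):
    trees, residuals = [], list(ys)
    for _ in range(n_trees):                          # stopping criterion: tree count
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        stump = (sum(left) / len(left), sum(right) / len(right))
        trees.append(stump)
        # Next round fits whatever error remains after this stump.
        residuals = [r - (stump[0] if x <= threshold else stump[1])
                     for x, r in zip(xs, residuals)]
    return trees

def predict(trees, x, threshold):
    """The ensemble prediction is the sum over all weak learners."""
    return sum(t[0] if x <= threshold else t[1] for t in trees)

trees = fit_boosted([0.0, 1.0, 2.0, 3.0], [1.0, 1.0, 5.0, 5.0],
                    n_trees=3, threshold=1.5)
```

After the first stump absorbs the signal, later rounds fit near-zero residuals, so the ensemble reproduces the per-side means.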
Regarding claim 16, Ong in view of Zhang in view of Zhu in view of Cheng teaches the method of claim 1, wherein after notifying the client to perform, based on the target split mode, the node splitting, the method further comprises: obtaining an updated training node (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes that after the selected party partitions the current instance space and returns the split result, the active party splits the current node according to IL and associates the current node with party id and record id. The resulting child-node structure is traversed by movement from the root to the left child, node 1, and then to the record associated with that node, corresponding to an updated training node after node splitting, where that updated training node is the post-split current node/child node.); stopping the training and generating the target federated learning model in response to determining that an updated training node satisfies a training stop condition (Ong [0050-0055], and Figure 2: Describes rebuilding the machine learning model using split candidates in an iterative manner, and then analyzing the rebuilt machine learning model to determine whether a stop criterion has been achieved. Figure 2 shows the sequence of determining whether a criterion has been met, retraining if it has not.); and obtaining a verification set (Cheng Pages 8-9, Experiments: Describes separating the data into a training portion and a testing portion, stating that 2/3 of the datasets are used for training and the remainder for testing, corresponding to a set used to verify model performance.)
and verifying the target federated learning model in collaboration with a verification client based on the verification set (Ong [0050-0055]: Describes that after rebuilding the machine learning model, the rebuilt model is transmitted to the participating parties, where the parties can train and test the machine learning model; Zhang Pages 5-7, Section 3.1: Describes that, in the testing phase, the local clients test their local model on samples and that performance is evaluated by averaging over clients and computing global accuracy using the matched global model.), wherein the verification client is one of clients involved in the training of the federated learning model (Zhang Pages 5-7, Section 3.1: Describes that the client performing the verification/testing is one of the clients involved in training the federated learning model. The local clients train in the training phase, and the local clients test their local model in the testing phase. This corresponds to the verification client being one of the clients involved in training the federated learning model.). Regarding claim 28, which substantially recites the same limitations as claim 1 but further recites the client side of claim 1’s server-side limitations:
Ong in view of Zhang in view of Zhu in view of Cheng teaches receiving a target split mode sent by a server in response to determining by the server that a training node satisfies a preset splitting condition, wherein the training node is a node of one boosting tree among a plurality of boosting trees (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes server/client communication; after obtaining the global optimal split, the active party returns split-related information to the corresponding party. Ong [0004] describes that gradient boosting uses an ensemble of weak prediction models, such as decision trees, corresponding to the training node being a node of one boosting tree among a plurality of boosting trees. The split-mode indicator remains taught by Zhang in view of Zhu, as discussed above.); and performing node splitting on the training node based on the target split mode (Cheng Pages 3-5, Federated Learning with SecureBoost, Federated Inference based on the Learned Model, and Algorithm 2: Describes performing node splitting on the training node based on information received from the server/coordinator. After the active party returns k_opt and v_opt to the corresponding passive party, the passive party determines the selected attribute’s value according to k_opt and v_opt and partitions the current instance space, after which the active party splits the current node according to IL.). Regarding claim 37, which recites substantially the same limitations as claim 16 and further recites receiving a verification set sent by the server (Ong [0005-0006], and [0029-0035]: Describes the server/aggregator transmitting to parties in a federated-learning environment, where the server transmits the machine learning model to the parties and the parties participate in the federated process.): claim 37 performs the client-side steps of the server steps of claim 16, respectively, and is rejected for the same reasons as described above.
Regarding claim 38, which recites substantially the same limitations as claim 16 and further describes its steps as client-side verification (Cheng Pages 3-5, Federated Learning with SecureBoost, Federated Inference based on the Learned Model, and Algorithm 2: Describes a coordinated federated inference process in which the active party identifies the current node by its associated party id and record id, then asks the corresponding party to retrieve the relevant feature/threshold information from its lookup table for a particular sample, and the party then decides whether the sample goes left or right at that node. The active party then uses that decision to move to the next node, and the process repeats until a leaf is reached.): claim 38 performs the client-side steps of the server steps of claim 16, respectively, and is rejected for the same reasons as described above. Regarding claim 39, which recites substantially the same limitations as claim 16 and further recites determining the node proceeding direction based on the split information and the respective feature value of each feature (Cheng Pages 3-5, Federated Learning with SecureBoost, Federated Inference based on the Learned Model, and Algorithm 2: Describes that the relevant party retrieves the corresponding attribute for a given sample/user from its lookup table and determines the selected attribute’s value according to the split information. For that user, the party knows the feature value and compares it to the split threshold; based on that comparison, it decides that the sample should move down to the left child, node 1.): claim 39 performs the client-side steps of the server steps of claim 16, respectively, and is rejected for the same reasons as described above.
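For illustration, the coordinated inference walk described in the Cheng passages above can be sketched as follows. The tree encoding, lookup-table layout, and all names are hypothetical; in the actual protocol only the owning party sees its own feature/threshold record.

```python
# Hypothetical sketch of federated inference: each internal node stores only
# (party_id, record_id); the owning party's lookup table maps the record to a
# (feature, threshold) pair and decides the left/right direction.

# Party lookup tables: party_id -> record_id -> (feature_name, threshold).
lookup = {
    "A": {0: ("age", 30)},
    "B": {0: ("bill", 100)},
}

# Tree: node -> ("internal", party_id, record_id, left_node, right_node)
#           or ("leaf", label).
tree = {
    0: ("internal", "A", 0, 1, 2),
    1: ("internal", "B", 0, 3, 4),
    2: ("leaf", 1),
    3: ("leaf", 0),
    4: ("leaf", 1),
}

def classify(sample, tree, lookup):
    """Walk from the root; at each node, the owning party decides the direction."""
    node = 0
    while tree[node][0] == "internal":
        _, party, record, left, right = tree[node]
        feature, threshold = lookup[party][record]  # known only to that party
        node = left if sample[feature] <= threshold else right
    return tree[node][1]

label = classify({"age": 25, "bill": 150}, tree, lookup)
```

The coordinator learns only which child to move to at each step, never the feature values or thresholds held by the other parties.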
Regarding claim 91, which recites substantially the same limitations as claim 1 and further recites a memory, a processor, and computer programs stored on the memory and runnable on the processor (Ong [0058-0061], and Figure 4: Describes a computer system executing federated learning using computer hardware including memory, processors, and other computer hardware.): claim 91 performs the server steps of claim 1, respectively, and is rejected for the same reasons as described above. Regarding claim 93, which recites substantially the same limitations as claim 28 and further recites a memory, a processor, and computer programs stored on the memory and runnable on the processor (Ong [0058-0061], and Figure 4: Describes a computer system executing federated learning using computer hardware including memory, processors, and other computer hardware.): claim 93 performs the client-side steps of the server steps of claim 28, respectively, and is rejected for the same reasons as described above. Claims 2-6, 8, 11-14, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Ong et al. (US 20220083906 A1, referred to as Ong), in view of Zhang et al. (“Hybrid Federated Learning: Algorithms and Implementation”, referred to as Zhang), in view of Zhu et al. (US 20170193066 A1, referred to as Zhu), in view of Cheng et al. (“SecureBoost: A Lossless Federated Learning Framework”, referred to as Cheng), in view of Ong et al. ("Adaptive Histogram-Based Gradient Boosted Trees For Federated Learning", referred to as Ong '2'), in view of Wu et al. ("Privacy Preserving Vertical Federated Learning for Tree-based Models", referred to as Wu). Regarding claim 2, Ong in view of Zhang in view of Zhu in view of Cheng teaches the method of claim 1.
They do not teach wherein obtaining the target split mode corresponding to the training node comprises obtaining a first split value corresponding to the training node by performing, based on a first training set, horizontal federated learning in collaboration with the client. Ong ‘2’ teaches obtaining a first split value corresponding to the training node by performing, based on a first training set, horizontal federated learning in collaboration with the client (Ong ‘2’ Pages 3-6, Sections 2.1, 3.1, 3.2, and 4.1: Describe horizontal federated learning, with an aggregator and parties collaboratively training a model. It provides gradients, hessians, and feature-value split candidates, and a gain score is used to find the best split for a leaf node, which corresponds to obtaining a split value for a training node by performing horizontal federated learning in collaboration with clients.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined the federated boosting-tree framework of Ong in view of Zhang in view of Zhu in view of Cheng with Ong ‘2’’s horizontal federated split-value determination. Doing so would allow for a more effective split strategy for the node, improving efficiency and performance of the federated boosting-tree training. Although Ong teaches obtaining the target split mode corresponding to the training node, it does not teach obtaining a second split value corresponding to the training node by performing, based on a second training set, vertical federated learning in collaboration with the client. Wu teaches obtaining a second split value corresponding to the training node by performing, based on a second training set, vertical federated learning in collaboration with the client (Wu Page 3, Section 2.3, and Page 5, Section 4.1: Describes vertical federated learning in which parties share the same sample IDs but have disjoint features.
For a given node, the clients derive statistics to identify the node’s best split, split the sample set into left and right partitions, and recursively build subtrees, and the approach extends to GBT.). It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to have combined the federated boosting-tree framework of Ong in view of Zhang in view of Zhu in view of Cheng, in view of Ong ‘2’, with Wu’s vertical federated tree split determination. Doing so would allow for a more effective split strategy for the node, improving efficiency and performance of the federated boosting-tree training. Wu further teaches determining the target split mode corresponding to the training node based on the first split value and the second split value (Wu Pages 4-6, Section 4.1 and Algorithm 3: Describes comparing candidate split gains/values and selecting the best one.). Regarding claim 3, Ong in view of Zhang in view of Zhu in view of Cheng, in view of Ong ‘2’, in view of Wu teaches the method of claim 2, wherein determining the target split mode corresponding to the training node based on the first split value and the second split value comprises: determining a larger one between the first split value and the second split value as a target split value corresponding to the training node (Wu Pages 4-6, Section 4.1 and Algorithm 3: Describes comparing candidate split gains/values and selecting the maximum one as the best split for the node. It compares each split’s impurity gain with the current maximum gain and updates the stored maximum when the new gain is larger, which determines the larger of competing split values and uses that larger value as the operative split value for the node.); and determining, based on the target split value, the target split mode corresponding to the training node (Zhu [0035-0046]: Describes determining corresponding identifying information based on a selected machine learning result.
It selects a particular computer model from among multiple computer models based on performance metrics and then stores a model identifier that identifies the selected computer model.). Regarding claim 4, Ong in view of Zhang in view of Zhu in view of Cheng, in view of Ong ‘2’, in view of Wu teaches the method of claim 2, wherein obtaining the first split value corresponding to the training node by performing, based on the first training set, the horizontal federated learning in collaboration with the client comprises: generating a first subset of features usable by the training node from the first training set (Ong ‘2’ Pages 5-6, Sections 4.1, 4.2 and Algorithm 1: Describes taking a party’s local dataset and constructing a surrogate histogram representation/histogram feature data from that local training data for downstream federated tree training.), and sending the first subset of features to the client (Wu Page 4, Section 3.4 and Pages 4-6, Section 4.1: Describes sending training-use information from a training participant/super client to the other clients.); for each feature contained in the first subset of features, receiving feature values of the feature from the client (Ong [0035-0039]: Describes that for features contained in the subset of features, feature values are received from the client/party by receiving model updates from the parties, where the updates include feature-related information. It receives a first model update and a second model update from the first and second party; the fusing component utilizes an ordered list of feature values and a bin index from each party, where the list and bin can be provided in the model updates transmitted to the tree boosting aggregator by the parties.)
; for each feature contained in the first subset of features, determining a respective horizontal split value corresponding to the feature for using the feature as a split feature point based on the feature values of the feature (Ong [0035-0039], and [0050-0055]: Describes determining, for each feature in the subset of features, a corresponding split value of that feature. The fusing component utilizes an ordered list of feature values where “the ordered list of feature values can define the threshold for each bin per histogram ranges”. The split finding component can determine the split candidates based on points according to percentiles of the feature distribution represented in the histograms and “find the best split candidate based on aggregated statistics”.); and determining the first split value of the training node based on the respective horizontal split value corresponding to each feature (Ong [0035-0039], and [0051-0052]: Describes determining the split value of the training node based on the respective split values corresponding to the features; it determines split candidates from the feature distributions represented in the histograms and then finds the best split candidate based on aggregated statistics, after which the machine learning model is rebuilt by adding the split candidate to the decision tree.).
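For illustration, the split search over ordered feature values with a gain score computed from aggregated gradient/hessian statistics, as described in the passages above, can be sketched as follows. The function name, the flat (non-histogram) candidate enumeration, and the fixed regularization constant are simplifying assumptions; this is the standard XGBoost-style gain formula, not Ong's exact component.

```python
# Hypothetical sketch: evaluate candidate thresholds taken from the ordered
# feature values and keep the one maximizing the XGBoost-style split gain
#   gain = 0.5 * (GL^2/(HL+lam) + GR^2/(HR+lam) - (GL+GR)^2/(HL+HR+lam)).

def best_split(values, grads, hess, lam=1.0):
    """Return (threshold, gain) for the best split of one feature."""
    def score(g, h):
        return g * g / (h + lam)

    order = sorted(range(len(values)), key=lambda i: values[i])
    G, H = sum(grads), sum(hess)          # aggregated node statistics
    gl = hl = 0.0
    best_threshold, best_gain = None, float("-inf")
    for idx in order[:-1]:                # candidate thresholds between sorted values
        gl += grads[idx]
        hl += hess[idx]
        gain = 0.5 * (score(gl, hl) + score(G - gl, H - hl) - score(G, H))
        if gain > best_gain:
            best_threshold, best_gain = values[idx], gain
    return best_threshold, best_gain

threshold, gain = best_split(
    [1.0, 3.0, 2.0, 4.0],            # feature values
    [-2.0, 1.0, -2.0, 1.0],          # per-instance gradients
    [1.0, 1.0, 1.0, 1.0],            # per-instance hessians
)
```

In a histogram-based variant, the same scan runs over per-bin gradient/hessian sums instead of per-instance statistics, which is what makes the aggregated-statistics exchange federation-friendly.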
Regarding claim 5, Ong in view of Zhang in view of Zhu in view of Cheng, in view of Ong ‘2’, in view of Wu teaches the method of claim 4, wherein for each feature contained in the first subset of features, determining the respective horizontal split value corresponding to the feature for using the feature as a split feature point based on the feature values of the feature comprises: for each feature contained in the first subset of features, determining a respective split threshold of the feature based on the feature values of the feature (Ong [0035-0039], and [0051]: Describes determining a respective split threshold of a feature based on the feature values of the feature, since the fusing component utilizes “an ordered list of feature values” and “the ordered list of feature values can define the threshold for each bin per histogram ranges”. The split finding component determines split candidates based on the feature distribution represented in the histograms.); for each feature contained in the first subset of features, obtaining a first set of data instance identity documents and a second set of data instance identity documents corresponding to the feature based on the respective split threshold (Wu Pages 4-6, Section 4.1: Describes that for each feature, it obtains a first set of data instance identifiers and a second set of data instance identifiers corresponding to the feature based on the respective split threshold. For each feature j and split value τ, it “first constructs two size-n indicator vectors v_l and v_r, such that (i) the t-th element in v_l equals 1 if Sample t’s feature j is no more than τ, and 0 otherwise”, and v_r complements v_l, corresponding to partitioning the sample instances into two corresponding sets based on the split threshold for the feature. The client divides the local samples into two partitions and constructs indicator vectors to specify the samples that it contains.)
, wherein the first set of data instance identity documents comprises data instance identity documents belonging to a first left subtree space (Wu Pages 4-6, Section 4.1: Describes that the first set of data instance identifiers belongs to a first left subtree space because a split value induces two possible child nodes, including a left child node, and the client divides the samples into a left partition and a right partition. In the illustrated example, “The first partition (referred to as the left partition) consists of Samples 1, 2, and 4,” and the client constructs an indicator vector to specify the samples that it contains. These samples belong to the left child node.), and the second set of data instance identity documents comprises data instance IDs belonging to a first right subtree space (Wu Pages 4-6, Section 4.1: Describes that the second set of data instance identifiers belongs to a first right subtree space because the client divides the samples into a left partition and a right partition, where “the second partition (referred to as the right partition) contains Samples 3 and 5”, and constructs an indicator vector “to specify the samples that it contains”. These samples belong to the right child node of the split.); and for each feature contained in the first subset of features, determining the respective horizontal split value corresponding to the feature based on the first set of data instance identity documents and the second set of data instance identity documents (Wu Pages 4-6, Section 4.1: Describes determining the respective horizontal split value corresponding to the feature based on the first set of data instance identifiers and the second set of data instance identifiers because, for each split value τ, it forms left-side and right-side sample sets and computes corresponding statistics for the left and right child nodes.
For each split value τ, the client generates encrypted statistics for the left and right child nodes, and “using these statistics, the clients identify the best split of the current node”. The clients “compute the impurity gain of each split τ” and “jointly determine the best split”.). Regarding claim 6, Ong in view of Zhang in view of Zhu in view of Cheng, in view of Ong ‘2’, in view of Wu teaches the method of claim 5, wherein obtaining the first set of data instance identity documents and the second set of data instance identity documents corresponding to the feature based on the respective split threshold comprises: for each feature, sending the respective split threshold to the client (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes sending the respective split threshold to the client because, after determining the optimal split, the active party “returns the feature id k and threshold id v to the corresponding passive party i”, after which the passive party uses the returned threshold information to determine the selected attribute’s value.); for each feature, receiving an initial set of data instance identity documents corresponding to the training node sent by the client (Cheng Pages 3-5, Federated Learning with SecureBoost, and Algorithm 2: Describes receiving an initial set of data instance identifiers corresponding to the training node sent by the client because, after local split processing, the passive party/client returns the record id back to the active party, and the active party then uses the returned instance-space information for node splitting.)
, wherein the initial set of data instance identity documents is generated by performing the node splitting for the feature via the client based on the respective split threshold (Cheng Pages 3-5, Federated Learning with SecureBoost, and Algorithm 2: Describes that the returned initial set is generated by performing node splitting for the feature at the client based on the split threshold because the passive party/client determines the selected attribute’s value according to the returned identifier and threshold identifier, and then partitions the current instance space.), and the initial set of data instance identity documents comprises the data instance identity documents belonging to the first left subtree space (Cheng Pages 3-5, Federated Learning with SecureBoost, and Algorithm 2: Describes that the initial set comprises the data instance identifiers belonging to a first left subtree space because the passive party returns the instance space of the left nodes after the split to the active party.); and for each feature, obtaining the first set of data instance identity documents and the second set of data instance identity documents based on the initial set of data instance identity documents and all data instance identity documents (Cheng Pages 3-5, Federated Learning with SecureBoost, and Algorithm 2: Describes obtaining the first set of data instance identifiers and the second set of data instance identifiers based on the initial set of data instance identifiers and all data instance identifiers because the process begins with the instance space of current nodes, then partitions the current instance space according to the selected attribute’s value, returns the instance space of the left nodes after the split to the active party, and splits the current node according to the received instance space.
; Wu Pages 4-6, Section 4.1: Describes obtaining both resulting sets from the node’s available sample set because the node is associated with an encrypted mask vector indicating the available samples on the node, and for each split value the client constructs two indicator vectors v_l and v_r, where v_r complements v_l.). Regarding claim 8, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Ong’2’, in view of Wu teaches the method of claim 2, wherein obtaining the second split value corresponding to the training node by performing, based on the second training set, the vertical federated learning in collaboration with the client comprises: notifying the client to perform, based on the second training set, the vertical federated learning (Cheng Pages 3-5, Federated Learning with SecureBoost, Algorithm 1 and Algorithm 2: Describes notifying the client to perform, based on the training data, vertical federated learning, where federated learning is a framework for data split among different parties in the feature dimension, i.e., a vertical federated learning setting, in which the server/active party coordinates client/passive-party participation in training. In a typical federated iteration, “each client downloads the current global model from server”, after which “each client computes an updated model based on its local data” and sends the update back to the server.)
; for each feature, receiving respective first gradient information of at least one third set of data instance identity documents of the feature sent by the client (Cheng Pages 3-5, Federated Learning with SecureBoost, Algorithm 1 and Algorithm 2: Describes that, for each feature, gradient information sent by the client is received because, for each passive party, the passive party maps the features into buckets and “aggregates the encrypted gradient statistics based on the buckets”, and “the active party only needs to collect the aggregated encrypted gradient statistics from all passive parties”.), wherein the third set of data instance identity documents comprises data instance identity documents belonging to a second left subtree space (Wu Pages 4-6, Section 4.1: Describes that the third set of data instance identifiers comprises data instance identifiers belonging to a second left subtree space because, for a candidate split, the client divides the samples into a left partition and a right partition, respectively, where the left partition specifies the samples on the left side of the split. The resulting statistics correspond to samples that “belong to the left … child node”.), the second left subtree space is a left subtree space generated by performing the node splitting based on one feature value of the feature (Wu Pages 4-6, Section 4.1: Describes that, for a feature j and a split value τ, the client constructs a left-side indicator vector for samples whose feature j values are no more than τ, and further explains that the split value τ induces two child nodes, including the left child node. Splitting based on whether deposit values are larger than 15000 generates a left partition containing samples whose deposit values are no more than 15000.)
, and different feature values correspond to different second left subtree spaces (Wu Pages 4-6, Section 4.1: Describes that different feature values correspond to different second left subtree spaces because, for a feature j, there is a set of split values S_ij, and for any split value τ ∈ S_ij, the client constructs left and right partitions, where the split value τ induces the corresponding child nodes. Thus, different split values for the feature correspond to different resulting left child node sample sets.); for each feature, determining a respective vertical split value of the feature based on the respective first gradient information and total gradient information of the training node (Cheng Pages 3-5, Federated Learning with SecureBoost, Algorithm 1 and Algorithm 2: Describes collecting gradient statistics from the passive parties and computing total gradient information for the current node (Algorithm 2). Then, for each feature and threshold candidate, Algorithm 2 accumulates left-side gradient statistics, computes right-side gradient statistics from the node totals, calculates a split score, and returns the optimal feature and threshold.); and determining the second split value corresponding to the training node based on the respective vertical split value corresponding to each feature (Cheng Pages 3-5, Federated Learning with SecureBoost, Algorithm 1 and Algorithm 2: Describes evaluating candidate feature/threshold splits across the features for the current node and then determining the “globally optimal split as described in Algorithm 2”. Algorithm 2 enumerates the features and threshold candidates, computes a split score, and returns k_opt and v_opt when the maximum score is obtained.).
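For context, the partitioning and split-selection logic summarized in the citations above can be sketched in plaintext (the encryption that both Wu and Cheng apply to the statistics is omitted; function and variable names such as partition and best_split are illustrative and do not appear in the cited references):

```python
def partition(feature_values, tau):
    # A split value tau induces two child nodes: v_l marks samples whose
    # feature value is no more than tau (left partition); v_r complements v_l.
    v_l = [1 if x <= tau else 0 for x in feature_values]
    v_r = [1 - b for b in v_l]
    return v_l, v_r

def best_split(agg_g, agg_h, lam=1.0):
    # Enumerate features and threshold candidates over bucketed gradient
    # sums: agg_g[k][v] / agg_h[k][v] hold the first-/second-order gradient
    # sums for bucket v of feature k (the aggregated statistics the active
    # party collects). Left-side sums are accumulated, right-side sums are
    # derived from the node totals, and an XGBoost-style split score is
    # computed; the best (k_opt, v_opt) is returned.
    G, H = sum(agg_g[0]), sum(agg_h[0])   # node totals (feature-independent)
    base = G * G / (H + lam)
    k_opt = v_opt = None
    best = 0.0
    for k in range(len(agg_g)):
        gl = hl = 0.0
        for v in range(len(agg_g[k])):
            gl += agg_g[k][v]
            hl += agg_h[k][v]
            gr, hr = G - gl, H - hl
            score = gl * gl / (hl + lam) + gr * gr / (hr + lam) - base
            if score > best:
                best, k_opt, v_opt = score, k, v
    return k_opt, v_opt, best
```

For example, partition([10000, 20000, 12000], 15000) yields the indicator vectors ([1, 0, 1], [0, 1, 0]), matching the deposit-threshold example cited from Wu.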
Regarding claim 11, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Wu teaches determining the training node as a leaf node in response to the training node not satisfying the preset splitting condition (Wu Pages 4-6, Section 4.1 and Algorithm 3: Describes a decision tree algorithm in which, if prune conditions are satisfied, the method returns a leaf node, including classification and regression cases.), and obtaining a weight value of the leaf node (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes that, when an optimal tree structure is obtained, the optimal weight of leaf j can be computed, and provides the leaf-weight equation (4) on page 4, “where I_j is the instance space of leaf j”.); and sending the weight value of the leaf node to the client (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes a federated-learning framework where each client downloads the current global model from the server, and the server aggregates model updates to construct an improved global model. When an optimal tree structure is obtained, the optimal weight of leaf j is computed, and the reference further discusses a learned SecureBoost model in terms of the weight of the first tree’s leaves. This corresponds to the leaf-node weight value being provided to the client as part of the shared/global model.). Regarding claim 12, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Wu teaches the method of claim 11, wherein obtaining the weight value of the leaf node comprises: obtaining data instances belonging to the leaf node (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes I_j as the instance space of leaf j in connection with computing the leaf weight w*_j. This corresponds to a set of data instances that belong to a given leaf node.)
; and obtaining first-order gradient information and second-order gradient information of the data instances belonging to the leaf node (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes that the calculation of the optimal weight of a leaf depends on g_i and h_i for the instances in the instance space of leaf j, I_j, and further defines g_i and h_i for training instances. This corresponds to obtaining first-order gradient information and second-order gradient information for the data instances belonging to the leaf node, where the first-order and second-order gradient information correspond to the g_i and h_i values for instances i ∈ I_j.), and obtaining the weight value of the leaf node based on the first-order gradient information and the second-order gradient information (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes that, when the optimal tree structure is obtained, the optimal weight w*_j of leaf j is computed as shown in Equation (4) on page 4, where I_j is the instance space of leaf j. This corresponds to obtaining the weight value of the leaf node based on the first-order gradient information and second-order gradient information of the data instances belonging to that leaf node, where the first-order and second-order gradient information correspond to g_i and h_i.). Regarding claim 13, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Wu teaches the method of claim 3, before notifying the client to perform, based on the target split mode, the node splitting, further comprising: sending split information to the client (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes that, after determining the global optimal split, the active party returns the selected split information, namely the feature id and threshold id, to the corresponding passive party, which then uses that information to determine the selected attribute’s value and partition the current instance space.
This corresponds to sending split information to the client before the client performs node splitting.), wherein the split information comprises the target split mode (Zhu [0041], [0106]: Describes associating machine learning processing with identifying information, such as storing a model identifier that identifies the selected computer model after selecting one model from among multiple models. It stores operation data that identifies multiple transformation operations.), a target split feature selected as a feature split point (Cheng Pages 3-5, Federated Learning with SecureBoost: Identifies the global optimal split using feature id (k) and returns that feature id to the corresponding passive party. This corresponds to a target split feature selected as a feature split point, where the feature id (k)/selected attribute corresponds to a target split feature.), and the target split value (Cheng Pages 3-5, Federated Learning with SecureBoost: Describes that, after obtaining the global optimal split, the active party returns the selected threshold information to the corresponding passive party, and the passive party records the selected attribute’s value as (feature, threshold value).). Regarding claim 14, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Wu teaches the method of claim 13, wherein, in response to the target split mode being the vertical split mode, before notifying the client to perform, based on the target split mode, the node splitting, the method further comprises: sending the split information to clients having labels (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes sending split-related information to the corresponding passive party.
After obtaining the best split, the active party returns the selected split identifiers, k_opt and v_opt, to the corresponding passive party, which then determines the selected attribute’s value and partitions the current instance space.); receiving a set of left subtree spaces sent by the clients having labels (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes receiving left-side subtree-space information. After the selected party partitions the current instance space according to the selected attribute’s value, that party returns the record id I_L back to the active party, where I_L is the instance space of left nodes after the split.); splitting the second training set based on the set of left subtree spaces (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes that, after the passive party returns the instance space of left nodes after the split (I_L) to the active party, the active party splits the current node according to the received instance space/according to I_L. This corresponds to splitting based on received left-side split-space information, where the instance space of left nodes after the split (I_L) corresponds to the left subtree spaces, and the current node/current instance space being split corresponds to the second training set.); and associating the training node with identity documents of the clients having labels (Cheng Pages 3-5, Federated Learning with SecureBoost, and Federated Inference based on the Learned Model: Describes that, after receiving the split result, the active party associates the current node with a party id and a record id. Prediction then proceeds by referring to node-associated records such as party id: 1 and record id: 1 for node 1.).
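The leaf-weight computation cited above for claims 11 and 12 (Cheng, equation (4)) is the standard XGBoost optimal leaf weight; a minimal sketch, with the function name leaf_weight chosen here for illustration:

```python
def leaf_weight(g, h, lam=1.0):
    # w*_j = -sum(g_i) / (sum(h_i) + lambda) over the instances i in the
    # instance space I_j of leaf j, where g_i and h_i are the first- and
    # second-order gradients of the loss for instance i, and lambda is the
    # regularization parameter.
    return -sum(g) / (sum(h) + lam)
```

With g = [1.0, 2.0], h = [1.0, 1.0], and λ = 1.0, the weight is −3/3 = −1.0.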
Regarding claim 14, Ong in view of Zhang, in view of Zhu, in view of Cheng, in view of Wu teaches receiving a model prediction value of a data instance represented by the data instance identity document sent by the server in response to determining that all data instance identity documents contained in the verification set are verified (Wu Page 4, Section 3.4; Page 7, Section 4.3, Algorithm 4; and Pages 13-14, Section 9.1.2: Describes obtaining a prediction output for a particular verified data instance during distributed tree-model prediction. Given an input sample with distributed feature values, the clients jointly produce a prediction, and Algorithm 4 returns the predicted label k after the prediction path is resolved.); obtaining a final verification result based on the model prediction value, and generating a verification indication message for indicating whether to reserve and use the target federated learning model by comparing the verification result with a previous verification result (Zhu [0144-0147] and [0162-0165]: Describes obtaining a verification result from model predictions and generating an indication of whether the model should continue to be used. It validates a model by generating predictions, comparing those predictions to actual/known answers, and generating one or more performance metrics. It selects the better-performing model based on those results, including that, if a model does not exceed a performance threshold, it is not selected for deployment, and, in a comparison of an old and new model, the worse model is dropped or ceases to be used while the better model replaces the other.); and sending the verification indication message to the server (Zhu [0092]: Describes sending an indication/result message to the server. The execution of the validation/testing workflow causes an API call to be made to the machine learning service 110, and the API call indicates a status of the model-related process.).
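The verify-and-compare step attributed to Zhu above can be sketched as follows (a simplified illustration; the message fields and the assumption of a single scalar performance metric are chosen here for clarity and are not taken from Zhu):

```python
def verification_message(new_metric, prev_metric, higher_is_better=True):
    # Compare the target model's verification result with the previous one
    # and build an indication message saying whether to reserve (keep and
    # use) the target model; the worse-performing model would be dropped.
    keep = new_metric > prev_metric if higher_is_better else new_metric < prev_metric
    return {
        "reserve_target_model": keep,
        "verification_result": new_metric,
        "previous_result": prev_metric,
    }
```

For instance, a new model scoring 0.91 against a previous 0.88 on an accuracy-like metric would produce a message indicating the target model should be reserved.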
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ong et al. (US 20220083906 A1, referred to as Ong), in view of Zhang et al. (“Hybrid Federated Learning: Algorithms and Implementation”, referred to as Zhang), in view of Zhu et al. (US 20170193066 A1, referred to as Zhu