Prosecution Insights
Last updated: April 19, 2026
Application No. 18/299,386

METHODS, APPARATUSES, AND SYSTEMS FOR TRAINING MODEL BY USING MULTIPLE DATA OWNERS

Status: Non-Final Office Action (§101)
Filed: Apr 12, 2023
Examiner: MARU, MATIYAS T
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 6m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (23 granted / 40 resolved; +2.5% vs Tech Center average)
Interview Lift: +12.5% higher allowance rate for resolved cases with an interview (moderate lift)
Typical Timeline: 4y 6m average prosecution; 39 applications currently pending
Career History: 79 total applications across all art units

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 1.9% (-38.1% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 40 resolved cases.

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

In Step 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the claims, under the broadest reasonable interpretation, fall within one or more statutory categories (processes).

In Step 2A, Prong 1, of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a mental process but for the recitation of generic computer components.

Regarding claim 1:

"determining, by using a private set intersection (PSI) algorithm at each second data owner according to first data owned by each first data owner, second feature data from second data owned by the second data owner that intersect each piece of first data". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves observing two sets of data, comparing them to identify overlaps or intersections, and deciding which portions correspond. See MPEP 2106.04.

"determining, as a training unit, first data owned by a first data owner and respective second feature data that intersect the first data and that are owned by each second data owner". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves comparing datasets to identify intersecting portions and associating corresponding data items. See MPEP 2106.04.

If claim limitations, under their broadest reasonable interpretation, cover performance as a mental process but for the recitation of generic computer components, they fall within the mental-process grouping. Accordingly, the claim recites an abstract idea.

In Step 2A, Prong 2, of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate the judicial exception into a practical application:

(I) "training a model by using a plurality of data owners". This limitation does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer, and is insufficient to transform the judicial exception into a patentable invention. See MPEP 2106.05(f).

(II) "wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, and each second data owner has a respective second model and vertically divided second data". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(III) "performing the following main iteration process until a first iteration end condition is met: performing, for each training unit by using a first training sample and a second training sample, cooperative training on a first model of a first data owner that participates in training of the training unit, the respective second model of the each second data owner, and a third model of a slave server that participates in training of the training unit". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(IV) "wherein: at least a part of first data in the training unit is used as the first training sample, a plurality of second feature data that are owned by the plurality of second data owners and that intersect the first training sample are used as the second training sample". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(V) "the respective first model of the each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(VI) "performing, at a master server, federated aggregation on the first model trained from each training unit to obtain a first global model, and/or on the third model trained from each training unit to obtain a third global model". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(VII) "updating the respective first model of the each first data owner according to the first global model, and/or updating the third model of the slave server according to the third global model". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception. Limitations (I), (III), (IV), (VI), and (VII) recite mere application of the abstract idea, or mere instructions to implement it on a computer, and are insufficient because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f). Limitations (II) and (V) are insufficient because they generally link the judicial exception to the technological environment; see MPEP 2106.05(h).

As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
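Context for the PSI limitation above: the simplest variant recited later in claim 7 is PSI over a shared hash function, in which both owners exchange salted digests rather than raw identifiers and intersect the digest sets. Below is a minimal Python sketch under toy assumptions; the salt, identifiers, and function names are hypothetical and are not drawn from the application.

    import hashlib

    def _digest(value: str, salt: str) -> str:
        # Both parties hash each identifier with a shared salt, so raw
        # values are never exchanged; only digests are compared.
        return hashlib.sha256((salt + value).encode()).hexdigest()

    def hashed_psi(first_ids, second_ids, salt="shared-salt"):
        """Toy same-hash-function PSI: each owner hashes its IDs, the
        digest sets are intersected, and the second owner maps matching
        digests back to its own records (the 'second feature data')."""
        first_digests = {_digest(v, salt) for v in first_ids}
        second_lookup = {_digest(v, salt): v for v in second_ids}
        return [second_lookup[d] for d in first_digests & second_lookup.keys()]

    # The "training unit" pairs one first data owner's records with the
    # intersecting records of each second data owner.
    print(hashed_psi(["u1", "u2", "u3"], ["u2", "u3", "u4"]))  # ['u2', 'u3'] (order may vary)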
Regarding claim 2, which depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception, the claim recites:

"performing the following sub-iteration process for each training unit until a second iteration end condition is met". This limitation is insufficient to transform the judicial exception into a patentable invention because it is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, which amounts to adding the words "apply it" (or an equivalent) to the judicial exception. See MPEP 2106.05(f). Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"selecting, from first data owned by a first data owner participating in training of the training unit, at least a part of the first data as a current first training sample in a current sub-iteration process". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves observing available data, evaluating which portion is relevant for a given iteration, and judging to select that portion as a training sample. See MPEP 2106.04.

"selecting a plurality of second feature data from the plurality of second feature data owned by the plurality of second data owners that intersect the current first training sample as a second training sample in the current sub-iteration process". Mental process: comparing datasets to identify intersecting data, evaluating relevance, and deciding which data to include as training samples. See MPEP 2106.04.

"separately inputting the first training sample in the training unit and respective second training sample of each second data owner into a model of a respective data owner, to obtain feature information output by each model". This additional limitation is directed to mere data gathering, which is insignificant extra-solution activity and well-understood, routine, and conventional. The courts have recognized receiving or transmitting data over a network as a well-understood, routine, and conventional computer function: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II). As analyzed, this limitation fails to integrate the judicial exception into a practical application at Step 2A and does not provide an inventive concept at Step 2B.

"encrypting and sending the feature information at each data owner participating in training of the training unit to a slave server participating in training of the training unit". Likewise directed to mere data gathering/transmission, insignificant extra-solution activity that is well-understood, routine, and conventional per the case law cited above. See MPEP 2106.05(d)(II). This limitation fails to integrate the judicial exception into a practical application at Step 2A and does not provide an inventive concept at Step 2B.

"calculating a model gradient for the training unit based on the third model of the slave server and the feature information received from each data owner". Mental process: observing model parameters and feature values, evaluating their relationships, and determining an adjustment direction (gradient). See MPEP 2106.04.

"updating, according to the model gradient, respective models at each data owner and the slave server participating in training of the training unit". Insufficient because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
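To make the data flow of the claim 2 sub-iteration concrete, the sketch below follows one training unit through a single round: each data owner runs its local bottom-of-network model (the claimed "first N layers", here reduced to one dense layer) on its PSI-aligned sample and ships the encrypted feature information to the slave server. All shapes, the placeholder encrypt(), and the one-layer stand-in models are assumptions, not the applicant's design.

    import numpy as np

    rng = np.random.default_rng(0)

    class OwnerModel:
        # Stand-in for the "first N layers" held locally by a data owner:
        # a single dense layer with tanh, purely illustrative.
        def __init__(self, n_in, n_out):
            self.W = rng.normal(scale=0.1, size=(n_in, n_out))

        def forward(self, x):
            return np.tanh(x @ self.W)

    def encrypt(features):
        # Placeholder: the claim requires the feature information to be
        # encrypted before it is sent; no particular scheme is fixed here.
        return features.copy()

    # One sub-iteration for one training unit: each owner computes feature
    # information on its slice of the aligned samples and sends it onward.
    first_owner = OwnerModel(n_in=4, n_out=3)    # holds the first training sample
    second_owner = OwnerModel(n_in=5, n_out=3)   # holds intersecting second feature data

    x_first = rng.normal(size=(8, 4))
    x_second = rng.normal(size=(8, 5))

    to_slave_server = [
        encrypt(first_owner.forward(x_first)),
        encrypt(second_owner.forward(x_second)),
    ]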
Regarding claim 3, which depends from claim 2 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception, the claim recites:

"fusing, [ ], the feature information received from each data owner, to obtain fused feature information". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves combining multiple sets of information into a unified representation based on perceived relevance or correspondence. See MPEP 2106.04.

"predicting the fused feature information by using the third model of the slave server to obtain a prediction result". Insufficient to transform the judicial exception into a patentable invention because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"calculating, at a data owner that participates in training of the training unit and that has a label, a loss value by using a loss function based on a label of a training sample that participates in this round of training and the prediction result from the slave server". Mental process: comparing an expected outcome (label) with an observed outcome, evaluating the difference, and determining a numerical measure of error. See MPEP 2106.04.

"calculating, [ ], the model gradient according to the loss value received from the data owner, and sending the model gradient to the first data owner and each second data owner". Mental process: observing a loss value, evaluating how it should influence model parameters, and deciding on corresponding adjustments. See MPEP 2106.04.

"... at the slave server participating in training of the training unit ..." and "... at the slave server ...". These additional elements are insufficient because they are directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
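The claim 3 round trip (fuse the received feature information, predict with the third model, score the prediction against the label owner's labels, and derive a gradient) can be sketched as follows, assuming a logistic prediction head and binary cross-entropy loss; both choices, and all names and shapes, are illustrative assumptions rather than anything the application fixes.

    import numpy as np

    rng = np.random.default_rng(1)

    # Feature information as received from two data owners (stand-ins for
    # the decrypted outputs of their local first-N-layer models).
    feats_first = rng.normal(size=(8, 3))
    feats_second = rng.normal(size=(8, 3))
    labels = rng.integers(0, 2, size=(8, 1)).astype(float)  # held by the label owner

    # Slave server: fuse the feature information, then predict with the
    # "third model" (here a single dense layer with a sigmoid head).
    fused = np.concatenate([feats_first, feats_second], axis=1)
    W3 = rng.normal(scale=0.1, size=(fused.shape[1], 1))
    pred = 1.0 / (1.0 + np.exp(-(fused @ W3)))

    # Label owner: binary cross-entropy loss from its labels and the
    # prediction result returned by the slave server.
    eps = 1e-9
    loss = -np.mean(labels * np.log(pred + eps) + (1 - labels) * np.log(1 - pred + eps))

    # Slave server: gradient w.r.t. its own parameters, plus the gradient
    # w.r.t. the fused features that would be sent back to each owner.
    grad_logits = (pred - labels) / len(labels)   # d loss / d logits
    grad_W3 = fused.T @ grad_logits               # server-side update direction
    grad_features = grad_logits @ W3.T            # shipped back to the data owners
    print(f"loss={loss:.4f}")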
Regarding claim 4, which depends from claim 2 and fails to resolve the deficiencies identified above, the claim recites:

"wherein a first model and a third model that participate in training of a same training unit are combined as one complete model used to complete forward propagation". This additional limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Regarding claim 5, which depends from claim 1 and fails to resolve the deficiencies identified above, the claim recites:

"in response to that data distribution of a plurality of first data owned by the plurality of first data owners is even, performing, at the master server, federated aggregation on the first model and third model trained from each training unit to obtain the first global model and the third global model." Insufficient because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Regarding claim 6, which depends from claim 1 and fails to resolve the deficiencies identified above, the claim recites:

"in response to that data distribution of a plurality of first data owned by the plurality of first data owners is uneven, performing, at the master server, federated aggregation on the first model or third model trained from each training unit, to obtain the first global model for the first model or the third global model for the third model." Insufficient for the same reason: mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

Regarding claim 7, which depends from claim 1 and fails to resolve the deficiencies identified above, the claim recites:

"wherein the PSI algorithm comprises at least one of a computer-implemented method based on a same hash function, a computer-implemented method based on Diffie-Hellman key exchange, or a computer-implemented method based on oblivious transfer." This additional limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claims 12 and 16 recite similar subject matter as claim 7 and are rejected under the same rationale.
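Of the PSI variants recited in claim 7, the Diffie-Hellman key-exchange approach relies on the fact that masking with two secret exponents commutes: (H(x)^a)^b equals (H(x)^b)^a mod p, so doubly masked values match exactly when the underlying identifiers match. Below is a deliberately insecure toy sketch; the undersized modulus, fixed keys, and helper names are chosen only for illustration and do not reflect the application.

    import hashlib

    # Toy parameters: 2^127 - 1 is a Mersenne prime, far too small for
    # real use but enough to demonstrate the commutative masking.
    P = (1 << 127) - 1

    def h2g(value: str) -> int:
        # Hash an identifier into the multiplicative group mod P.
        d = int.from_bytes(hashlib.sha256(value.encode()).digest(), "big")
        return pow(d % (P - 2) + 2, 2, P)

    def dh_psi(a_ids, b_ids, a_key=0x1234567, b_key=0x7654321):
        a_masked = [pow(h2g(v), a_key, P) for v in a_ids]             # A sends to B
        b_masked = {pow(h2g(v), b_key, P): v for v in b_ids}          # B keeps a map
        a_double = {pow(m, b_key, P) for m in a_masked}               # B re-masks A's values
        b_double = {pow(m, a_key, P): v for m, v in b_masked.items()} # A re-masks B's values
        # Doubly masked values collide exactly on the intersection.
        return [v for m, v in b_double.items() if m in a_double]

    print(dh_psi(["u1", "u2", "u3"], ["u2", "u4"]))  # ['u2']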
Regarding claim 8, which depends from claim 1 and fails to resolve the deficiencies identified above, the claim recites:

"wherein the first N layers comprise an input layer and a hidden layer, and the one or more remaining layers comprise an input layer after the hidden layer." This additional limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claims 13 and 17 recite similar subject matter as claim 8 and are rejected under the same rationale.

Regarding claim 9:

In Step 2A, Prong 1, the following limitations recite a mental process:

"determining a training unit". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves observing multiple sets of data, comparing them to identify intersections, and judging which data should be grouped together as a training unit. See MPEP 2106.04.

"wherein the first data owned by the first data owner are used to determine, using a private set intersection (PSI) algorithm, respective second feature data from second data owned by the each second data owner that intersect the first data". Mental process: observing two sets of data, comparing them to identify overlaps or intersections, and deciding which portions correspond. See MPEP 2106.04.

In Step 2A, Prong 2, the following additional elements do not integrate the judicial exception into a practical application:

(I) "training a model by using a plurality of data owners". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(II) "wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, each second data owner has a respective second model and vertically divided second data". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(III) "providing first data owned by a first data owner to each second data owner". This limitation is directed to mere data gathering, which is insignificant extra-solution activity. See MPEP 2106.05(g).

(IV) "wherein the training unit is determined based on the first data owned by the first data owner and the respective second feature data that intersect the first data and that are owned by the each second data owner". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(V) "performing the following main iteration process until a first iteration end condition is met: performing, in the training unit, cooperative training on a first model of the first data owner using a first training sample and a second training sample to obtain a trained first model". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(VI) "wherein: at least a part of the first data is used as the first training sample, and a plurality of second feature data that are owned by the plurality of second data owner and that intersect the first training sample are combined as the second training sample, a second model of each second data owner". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(VII) "a third model of a slave server participating in training of the training unit are trained in the cooperative training". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(VIII) "the respective first model of each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(IX) "sending the trained first model to a master server". This limitation is directed to mere data gathering, which is insignificant extra-solution activity. See MPEP 2106.05(g).

(X) "wherein federated aggregation is performed on the trained first model to obtain a first global model, or the federated aggregation is performed on the trained first model and the third model trained from each training unit to obtain the first global model and a third global model for the third model". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(XI) "updating the first model of the first data owner according to the first global model received from the master server." Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception. Limitations (I), (V), (VII), and (XI) recite mere application of the abstract idea, or mere instructions to implement it on a computer, and are insufficient because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f). Limitations (III) and (IX), extra/post-solution activity as analyzed above, are well-understood, routine, and conventional: the courts have recognized receiving or transmitting data over a network as a well-understood, routine, and conventional computer function per the case law cited above (Symantec; TLI Communications; OIP Techs.; buySAFE); see MPEP 2106.05(d)(II). Limitations (II), (IV), (VI), (VIII), and (X) are insufficient because they generally link the judicial exception to the technological environment; see MPEP 2106.05(h).

As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
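Claim 9 ends at the master server, where the per-training-unit models are combined by federated aggregation into a global model that is pushed back to the owners. Below is a minimal FedAvg-style sketch under assumed parameter shapes; the weighting by sample count is a common convention in federated learning, not something the claim specifies.

    import numpy as np

    def federated_average(models, weights=None):
        """FedAvg-style aggregation at the master server: the global model
        is the (optionally weighted) mean of the per-training-unit models.

        `models` is a list of parameter arrays of identical shape, one per
        training unit; `weights` can reflect each unit's sample count."""
        stacked = np.stack(models)
        if weights is None:
            return stacked.mean(axis=0)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return np.tensordot(w, stacked, axes=1)

    # Example: three trained first models come back from three training
    # units; the first global model is broadcast to every first data owner.
    rng = np.random.default_rng(2)
    unit_models = [rng.normal(size=(4, 3)) for _ in range(3)]
    first_global = federated_average(unit_models, weights=[100, 50, 25])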
Regarding claim 10, which depends from claim 9 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception, the claim recites:

"performing the following sub-iteration process in the training unit until a second iteration end condition is met:" Insufficient to transform the judicial exception into a patentable invention because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"selecting at least a part of first data from the first data as a first training sample in a current sub-iteration process". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves evaluating available data and deciding which portion to use in a given iteration. See MPEP 2106.04.

"inputting the first training sample into the first model of the first data owner to obtain feature information output by the first model". Directed to mere data gathering, insignificant extra-solution activity that is well-understood, routine, and conventional per the case law cited above (Symantec; TLI Communications; OIP Techs.; buySAFE); see MPEP 2106.05(d)(II). This limitation fails to integrate the judicial exception into a practical application at Step 2A and does not provide an inventive concept at Step 2B.

"encrypting and sending the feature information to the slave server participating in training of the training unit". Likewise mere data gathering/transmission, insignificant extra-solution activity that is well-understood, routine, and conventional per the case law cited above. See MPEP 2106.05(d)(II).

"wherein a model gradient is calculated according to the feature information and second feature information obtained by the second model of each second data owner based on inputting a respective second training sample, wherein the respective second training sample is obtained by selecting, from respective second feature data owned by the each second data owner, respective second feature data that intersect the first training sample". This additional limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"updating the first model according to the model gradient." Insufficient because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

Regarding claim 11, which depends from claim 10 and fails to resolve the deficiencies identified above, the claim recites:

"encrypting and sending the feature information to the slave server participating in training of the training unit, wherein the feature information and the second feature information are fused to obtain fused feature information, and the fused feature information is predicted, by using the third model of the slave server, to obtain a prediction result". Directed to mere data gathering, insignificant extra-solution activity that is well-understood, routine, and conventional per the case law cited above (Symantec; TLI Communications; OIP Techs.; buySAFE); see MPEP 2106.05(d)(II). This limitation fails to integrate the judicial exception into a practical application at Step 2A and does not provide an inventive concept at Step 2B.

"calculating a loss value by using a loss function based on a label of a training sample participating in this round of training and the prediction result from the slave server". Mental process: comparing an expected result (label) with an observed result and evaluating the difference to assign it a numerical value. See MPEP 2106.04.

"wherein the model gradient is calculated according to the loss value and a respective loss value sent by each second data owner." This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Regarding claim 14:

In Step 2A, Prong 1, the following limitations recite a mental process:

"determining, by using a private set intersection (PSI) algorithm according to each piece of first data owned by each first data owner, respective second feature data from second data owned by the second data owner that intersect the each piece of first data". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves observing two sets of data, comparing them to identify overlaps or intersections, and deciding which portions correspond. See MPEP 2106.04.

"determining a training unit to which of the respective second feature data belongs ...". Mental process: observing multiple sets of data, comparing them to identify intersections, and judging which data should be grouped together as a training unit. See MPEP 2106.04.

"the first training sample is determined from at least a part of the piece of first data in the training unit". Mental process: determining which portion of the data within a training unit should be used as a sample. See MPEP 2106.04.

In Step 2A, Prong 2, the following additional elements do not integrate the judicial exception into a practical application:

(I) "training a model by using a plurality of data owners". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(II) "wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, each second data owner has a respective second model and vertically divided second data". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(III) "wherein the training unit is determined based on first data owned by a first data owner and second feature data that intersects the first data and that are owned by the second data owner". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(IV) "performing the following main iteration process until a first iteration end condition is met: performing, in the training unit, cooperative training on a second model of the second data owner by using owned second feature data that intersect a first training sample as a second training sample and combining the first training sample with second feature data of each other second data owner". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(V) "a first model of a first data owner that participates in training of the training unit, a second model of the each other second data owner, and a third model of a slave server that participates in training of the training unit are trained in the cooperative training". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(VI) "the respective first model of the each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(VII) "wherein, after training using a plurality of training units, a plurality of first models are federally aggregated at a master server to obtain a first global model, and/or a plurality of third models are federally aggregated at the master server to obtain a third global model, and the respective first model of the each first data owner is updated according to the first global model, and/or". Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

(VIII) "the third model of the slave server is updated according to the third global model." Mere instructions to implement the abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception. Limitations (I), (IV), (V), (VII), and (VIII) recite mere application of the abstract idea, or mere instructions to implement it on a computer, and are insufficient because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f). Limitations (II), (III), and (VI) are insufficient because they generally link the judicial exception to the technological environment; see MPEP 2106.05(h).

As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
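Claims 1, 9, and 14 each wrap the cooperative training and aggregation steps in a "main iteration process" that repeats until a first iteration end condition is met. The skeleton below shows one plausible orchestration of that loop; every object, method name, and the loss-based stopping test are hypothetical stand-ins rather than the claimed method.

    def main_iteration(units, master, max_rounds=10, tol=1e-4):
        """Skeleton of the claimed main iteration: cooperatively train each
        training unit, federate the results at the master server, push the
        global models back down, and stop once training stops improving.

        `units` and `master` are assumed to expose the methods used below;
        all of these names are illustrative, not the applicant's API."""
        prev_loss = float("inf")
        for _ in range(max_rounds):
            # Cooperative training within each training unit (sub-iterations).
            losses = [unit.cooperative_training() for unit in units]

            # Master server: federated aggregation of first and/or third models.
            first_global = master.aggregate(u.first_model for u in units)
            third_global = master.aggregate(u.third_model for u in units)

            # Update local copies from the global models.
            for unit in units:
                unit.update_first_model(first_global)
                unit.update_third_model(third_global)

            # First iteration end condition: average loss stops improving.
            loss = sum(losses) / len(losses)
            if prev_loss - loss < tol:
                break
            prev_loss = loss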
Regarding claim 15, which depends from claim 14 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception, the claim recites:

"performing the following sub-iteration process in the training unit until a second iteration end condition is met:" Insufficient to transform the judicial exception into a patentable invention because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f). Such limitations cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"selecting, from second feature data owned by the second data owner, second feature data that intersects a first training sample as a second training sample". Under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves comparing datasets to identify overlapping data, evaluating relevance, and selecting corresponding data as a training sample. See MPEP 2106.04.

"wherein the first training sample is determined from at least a part of first data selected from first data owned by the first data owner participating in training of the training unit". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"inputting the second training sample into the second model of the second data owner, to obtain feature information output by the second model; encrypting and sending the feature information to the slave server participating in training of the training unit". Directed to mere data gathering, insignificant extra-solution activity that is well-understood, routine, and conventional per the case law cited above (Symantec; TLI Communications; OIP Techs.; buySAFE); see MPEP 2106.05(d)(II). These limitations fail to integrate the judicial exception into a practical application at Step 2A and do not provide an inventive concept at Step 2B.

"wherein a model gradient is calculated according to the third model of the slave server and feature information received from each data owner". This limitation merely links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

"updating the second model according to the model gradient." Insufficient because this limitation is directed to mere instructions to implement an abstract idea on a computer ("apply it"). See MPEP 2106.05(f).

Allowable Subject Matter

Claims 1-17 would be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101 set forth in this Office action because, when the claims are read in light of the specification per MPEP 2111.01, none of the references of record, alone or in combination, disclose or suggest the limitations of independent claims 1, 9, and 14 as a whole, including the following technical features:

Claim 1. A computer-implemented method for training a model by using a plurality of data owners, wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, and each second data owner has a respective second model and vertically divided second data; and the computer-implemented method comprises: determining, by using a private set intersection (PSI) algorithm at each second data owner according to first data owned by each first data owner, second feature data from second data owned by the second data owner that intersect each piece of first data; determining, as a training unit, first data owned by a first data owner and respective second feature data that intersect the first data and that are owned by each second data owner; performing the following main iteration process until a first iteration end condition is met: performing, for each training unit by using a first training sample and a second training sample, cooperative training on a first model of a first data owner that participates in training of the training unit, the respective second model of the each second data owner, and a third model of a slave server that participates in training of the training unit, wherein: at least a part of first data in the training unit is used as the first training sample, a plurality of second feature data that are owned by the plurality of second data owners and that intersect the first training sample are used as the second training sample, and the respective first model of the each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers; performing, at a master server, federated aggregation on the first model trained from each training unit to obtain a first global model, and/or on the third model trained from each training unit to obtain a third global model; and updating the respective first model of the each first data owner according to the first global model, and/or updating the third model of the slave server according to the third global model.
slave server according to the third global model. Closest prior arts: Angel et al., Pub. No.: US20210174243A1. Angel teaches a key distribution component can distribute respective feature-dimension public keys and respective sample-dimension public keys to respective participants in a vertical federated learning framework governed by a coordinator (¶ [0030], [0041] – [0042]), wherein the respective participants can send to the coordinator respective local model updates encrypted by the respective feature-dimension public keys and respective local datasets encrypted by the respective sample-dimension public keys (¶ [0009], [0030], [0071]). However, Angle does not teach a process first uses a private set intersection (PSI) algorithm to identify overlapping data between a first data owner’s data and second feature data held by multiple second data owners, forming training units from the intersecting data. An iterative cooperative training process is then performed in which first and second data owners jointly train the first N layers of neural network, while a slave server trains the remaining layers using the aligned training samples. After each iteration, a master server performs federated aggregation to produce global models, which are then used to update the local models of the data owners and the server until a stopping condition is met. Guttmann et al., Pub. No.: US20200202243A1. Guttmann teaches determining a global update to inference models using update information from a plurality of external systems are provided. A first and a second update information associated with an inference model may be received. The first update information may be based on an analysis of a first plurality of training examples by a first external system, and the second update information may be based on an analysis of a second plurality of training examples accessible by a second external system (¶ [0012] – [0014], [0019], [0095], [0116] – [0117] and [0125]). However, Guttmann does not teach a process first uses a private set intersection (PSI) algorithm to identify overlapping data between a first data owner’s data and second feature data held by multiple second data owners, forming training units from the intersecting data. An iterative cooperative training process is then performed in which first and second data owners jointly train the first N layers of neural network, while a slave server trains the remaining layers using the aligned training samples. After each iteration, a master server performs federated aggregation to produce global models, which are then used to update the local models of the data owners and the server until a stopping condition is met. Cheng, et al., "Secureboost: A lossless federated learning framework." (2021). Cheng propose a novel lossless privacy-preserving tree-boosting system known as SecureBoost in the setting of federated learning. SecureBoost first conducts entity alignment under a privacy-preserving protocol and then constructs boosting trees across multiple parties with a carefully designed encryption strategy, which allows the learning process to be jointly conducted over multiple parties with common user samples but different feature sets, which corresponds to a vertically partitioned data set (pg. 4, (section 5), pg. 5 (section 6 – 7), pg. 7 (section 8.2)). 
However, Cheng does not teach a process first uses a private set intersection (PSI) algorithm to identify overlapping data between a first data owner’s data and second feature data held by multiple second data owners, forming training units from the intersecting data. An iterative cooperative training process is then performed in which first and second data owners jointly train the first N layers of neural network, while a slave server trains the remaining layers using the aligned training samples. After each iteration, a master server performs federated aggregation to produce global models, which are then used to update the local models of the data owners and the server until a stopping condition is met. Claim 9. A computer-implemented method for training a model by using a plurality of data owners, wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, each second data owner has a respective second model and vertically divided second data, and the computer-implemented method comprises: providing first data owned by a first data owner to each second data owner, wherein the first data owned by the first data owner are used to determine, using a private set intersection (PSI) algorithm, respective second feature data from second data owned by the each second data owner that intersect the first data; determining a training unit, wherein the training unit is determined based on the first data owned by the first data owner and the respective second feature data that intersect the first data and that are owned by the each second data owner; performing the following main iteration process until a first iteration end condition is met: performing, in the training unit, cooperative training on a first model of the first data owner using a first training sample and a second training sample to obtain a trained first model, wherein: at least a part of the first data is used as the first training sample, and a plurality of second feature data that are owned by the plurality of second data owner and that intersect the first training sample are combined as the second training sample, a second model of each second data owner and a third model of a slave server participating in training of the training unit are trained in the cooperative training, and the respective first model of each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers; sending the trained first model to a master server, wherein federated aggregation is performed on the trained first model to obtain a first global model, or the federated aggregation is performed on the trained first model and the third model trained from each training unit to obtain the first global model and a third global model for the third model; and updating the first model of the first data owner according to the first global model received from the master server. Closest prior arts: Angel et al., Pub. No.: US20210174243A1. 
Angel teaches the coordinator-governed vertical federated learning framework with feature-dimension and sample-dimension key distribution summarized above (¶ [0009], [0030], [0041] – [0042], [0071]). However, Angel does not teach a process that first shares first data from a first data owner with multiple second data owners to identify intersecting feature data using a private set intersection (PSI) algorithm, which is then used to form a training unit. An iterative cooperative training process is performed in which the first data and the intersecting second feature data are used as training samples to jointly train neural network models, where the first data owner and second data owners train the first N layers and a slave server trains the remaining layers. After training, the first model (and optionally the third model) is sent to a master server for federated aggregation to generate global models, which are then used to update the first data owner's local model until a termination condition is satisfied.

Guttmann et al., Pub. No.: US20200202243A1. Guttmann teaches the global model update mechanism using update information from a plurality of external systems summarized above (¶ [0012] – [0014], [0019], [0095], [0116] – [0117] and [0125]). However, Guttmann likewise does not teach the process identified above for Angel.

Cheng et al., "SecureBoost: A lossless federated learning framework" (2021). Cheng proposes the SecureBoost entity-alignment and tree-boosting system summarized above (pg. 4, section 5; pg. 5, sections 6 – 7; pg. 7, section 8.2). However, Cheng likewise does not teach that process.
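Claim 9's cooperative-training limitation matches what the federated learning literature calls split learning: each data owner trains the first N layers locally and the slave server trains the remaining layers, with only intermediate activations crossing the cut. The PyTorch sketch below shows one cooperative step under assumed layer sizes; it is a generic illustration, not the application's actual models, and in a real deployment each party would run its own optimizer with activations exchanged over a network.

```python
# Minimal split-training sketch (assumed architecture, hypothetical sizes).
import torch
import torch.nn as nn

N_FEATS_OWNER, HIDDEN, N_OWNERS = 4, 8, 3

# The first N layers ("bottom model"), one copy per participating data owner.
bottoms = [nn.Sequential(nn.Linear(N_FEATS_OWNER, HIDDEN), nn.ReLU())
           for _ in range(N_OWNERS)]
# The remaining layers ("top model"), held by the slave server.
top = nn.Sequential(nn.Linear(HIDDEN * N_OWNERS, 1))

params = [p for m in bottoms for p in m.parameters()] + list(top.parameters())
opt = torch.optim.SGD(params, lr=0.1)  # single optimizer only for brevity

# One cooperative step on PSI-aligned rows (the same 5 samples at every owner).
xs = [torch.randn(5, N_FEATS_OWNER) for _ in range(N_OWNERS)]
y = torch.randn(5, 1)

opt.zero_grad()
# Owners send only intermediate activations, never raw features.
acts = [bottom(x) for bottom, x in zip(bottoms, xs)]
loss = nn.functional.mse_loss(top(torch.cat(acts, dim=1)), y)
loss.backward()  # gradients flow back through the cut to each bottom model
opt.step()
```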
Claim 14. A computer-implemented method for training a model by using a plurality of data owners, wherein the plurality of data owners comprise a plurality of first data owners and a plurality of second data owners, each first data owner has a respective first model and horizontally divided first data, each second data owner has a respective second model and vertically divided second data, and the computer-implemented method comprises:

determining, by using a private set intersection (PSI) algorithm according to each piece of first data owned by each first data owner, respective second feature data from second data owned by the second data owner that intersect the each piece of first data;

determining a training unit to which the respective second feature data belong, wherein the training unit is determined based on first data owned by a first data owner and second feature data that intersect the first data and that are owned by the second data owner; and

performing the following main iteration process until a first iteration end condition is met:

performing, in the training unit, cooperative training on a second model of the second data owner by using owned second feature data that intersect a first training sample as a second training sample and combining the first training sample with second feature data of each other second data owner, wherein: the first training sample is determined from at least a part of the piece of first data in the training unit, a first model of a first data owner that participates in training of the training unit, a second model of the each other second data owner, and a third model of a slave server that participates in training of the training unit are trained in the cooperative training, and the respective first model of the each first data owner and the respective second model of the each second data owner comprise first N layers of a neural network model, and the third model comprises one or more remaining layers of the neural network model except the first N layers;

wherein, after training using a plurality of training units, a plurality of first models are federally aggregated at a master server to obtain a first global model, and/or a plurality of third models are federally aggregated at the master server to obtain a third global model, and the respective first model of the each first data owner is updated according to the first global model, and/or the third model of the slave server is updated according to the third global model.

Closest prior arts:

Angel et al., Pub. No.: US20210174243A1.
Angel teaches the coordinator-governed vertical federated learning framework with feature-dimension and sample-dimension key distribution summarized above (¶ [0009], [0030], [0041] – [0042], [0071]). However, Angel does not teach using a private set intersection (PSI) algorithm to identify second feature data from multiple second data owners that intersect with first data held by first data owners, and grouping the intersecting data into training units. For each training unit, an iterative cooperative training process jointly trains neural network models, where first and second data owners train the first N layers using aligned first and second training samples, and a slave server trains the remaining layers. After training across multiple training units, a master server performs federated aggregation to produce global models, which are then used to update the local models of the first data owners and/or the slave server until a termination condition is met.

Guttmann et al., Pub. No.: US20200202243A1. Guttmann teaches the global model update mechanism using update information from a plurality of external systems summarized above (¶ [0012] – [0014], [0019], [0095], [0116] – [0117] and [0125]). However, Guttmann likewise does not teach the process identified above for Angel.

Cheng et al., "SecureBoost: A lossless federated learning framework" (2021). Cheng proposes the SecureBoost entity-alignment and tree-boosting system summarized above (pg. 4, section 5; pg. 5, sections 6 – 7; pg. 7, section 8.2). However, Cheng likewise does not teach that process.
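The federated aggregation recited for the master server is most commonly implemented as FedAvg-style weighted parameter averaging. The sketch below shows that baseline, assuming each trained local model is represented as a list of NumPy arrays; it is a generic illustration rather than the application's actual aggregation rule.

```python
import numpy as np

def federated_average(models, weights=None):
    # FedAvg-style aggregation: a (weighted) average of each parameter
    # tensor across the trained local models. Every model is a list of
    # arrays with matching shapes, e.g. [W1, b1, W2, b2, ...].
    n = len(models)
    w = (np.full(n, 1.0 / n) if weights is None
         else np.asarray(weights, dtype=float) / sum(weights))
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*models)]

# Hypothetical: three trained first models -> one first global model,
# weighted by each training unit's sample count.
local_models = [[np.random.randn(8, 4), np.random.randn(8)] for _ in range(3)]
first_global = federated_average(local_models, weights=[100, 50, 50])
# Each first data owner (and, analogously, the slave server for the third
# model) then replaces its local parameters with the global ones.
```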
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Truex et al., "A hybrid approach to privacy-preserving federated learning" (2019). Truex proposes and implements an FL system providing formal privacy guarantees and models with improved accuracy compared to existing approaches. It includes a tunable trust parameter that accounts for various trust scenarios while maintaining the improved accuracy and formal privacy guarantees.

Liu et al., "Privacy-preserving traffic flow prediction: A federated learning approach" (2020). Liu proposes an ensemble clustering-based scheme for traffic flow prediction by grouping the organizations into clusters before applying the FedGRU algorithm.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU, whose telephone number is (571) 270-0902, or via email: matiyas.maru@uspto.gov. The examiner can normally be reached Monday through Friday, 8:00am - 4:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.T.M./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Apr 12, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586114
GENERATING DIGITAL RECOMMENDATIONS UTILIZING COLLABORATIVE FILTERING, REINFORCEMENT LEARNING, AND INCLUSIVE SETS OF NEGATIVE FEEDBACK
2y 5m to grant • Granted Mar 24, 2026

Patent 12572796
METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM
2y 5m to grant • Granted Mar 10, 2026

Patent 12567004
METHOD OF MACHINE LEARNING TRAINING FOR DATA AUGMENTATION
2y 5m to grant • Granted Mar 03, 2026

Patent 12561588
Methods and Systems for Generating Example-Based Explanations of Link Prediction Models in Knowledge Graphs
2y 5m to grant • Granted Feb 24, 2026

Patent 12561584
TEACHING DATA PREPARATION DEVICE, TEACHING DATA PREPARATION METHOD, AND PROGRAM
2y 5m to grant • Granted Feb 24, 2026

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 70% (+12.5%)
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
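As a sanity check, the headline projections follow from simple arithmetic on the examiner statistics above; the sketch below shows the apparent derivation (the tool's exact model is not published):

```python
granted, resolved = 23, 40   # examiner's career: 23 granted of 40 resolved
interview_lift = 0.125       # observed +12.5-point lift with an interview

base = granted / resolved
print(f"{base * 100:.1f}%")                     # 57.5% -> displayed as 58%
print(f"{(base + interview_lift) * 100:.1f}%")  # 70.0% -> displayed as 70%
```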
