Prosecution Insights
Last updated: April 18, 2026
Application No. 17/812,789

MODEL IMPROVEMENT USING FEDERATED LEARNING AND CANONICAL FEATURE MAPPING

Non-Final OA: §101, §103, §112, §DP
Filed
Jul 15, 2022
Examiner
PHAM, JESSICA THUY
Art Unit
2121
Tech Center
2100 — Computer Architecture & Software
Assignee
International Business Machines Corporation
OA Round
3 (Non-Final)
Grant Probability
33% (At Risk)
OA Rounds
3-4
To Grant
3y 3m
With Interview
0%

Examiner Intelligence

Career Allow Rate
33% (1 granted / 3 resolved; -21.7% vs TC avg)
Interview Lift
-33.3% (minimal; allow rate with vs. without interview, among resolved cases with an interview)
Avg Prosecution
3y 3m (typical timeline)
Total Applications
41 (38 currently pending, across all art units)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 3 resolved cases
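The deltas above are simple differences against the Tech Center average. As a minimal sketch (Python, using only the figures shown on this page), the implied TC averages and the interview lift can be back-derived:

```python
# Back-derive Tech Center averages from the examiner allowance rates
# and the "vs TC avg" deltas shown above (all figures in percent).
examiner = {"101": 26.8, "103": 35.5, "102": 11.0, "112": 22.7}
delta_vs_tc = {"101": -13.2, "103": -4.5, "102": -29.0, "112": -17.3}

# examiner rate = TC average + delta, so TC average = examiner rate - delta.
tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
# On this page's figures, every statute's TC average works out to 40.0.

# Interview lift = allow rate with interview minus allow rate without.
allow_without, allow_with = 33.3, 0.0
lift = round(allow_with - allow_without, 1)  # -33.3
```

This is arithmetic on the dashboard's own numbers, not data from USPTO records; the per-statute TC averages are estimates derived from the displayed deltas.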

Office Action

§101 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims/Response to Amendment

Claims 1, 7, 8, 12, 13, 15, 19, and 20 were amended. Claims 1-20 are pending and examined herein. Claims 1-8 and 13 are rejected on the ground of nonstatutory double patenting over U.S. Patent No. 11,410,037 B2. Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/16/2026 has been entered.

Response to Arguments

Applicant's arguments, see page 10, filed 2/16/2026, with respect to the objections of claims 1, 7, 8, 13, and 20 have been fully considered and are persuasive. The objections of claims 1, 7, 8, 13, and 20 have been withdrawn.

Applicant's arguments, see page 11, filed 2/16/2026, regarding the double patenting rejection of claims 1-8 and 13 have been fully considered but they are not persuasive. Applicant has not filed a terminal disclaimer or shown that the claims subject to the rejection are patentably distinct from the reference claims. The double patenting rejection of claims 1-8 and 13 is maintained.

Applicant's arguments, see page 11, filed 2/16/2026, with respect to the 35 U.S.C. 112(a) and 35 U.S.C. 112(b) rejections of claims 8-20 have been fully considered and are persuasive. The 35 U.S.C. 112(a) and 35 U.S.C. 112(b) rejections of claims 8-20 have been withdrawn. Note that a new 35 U.S.C. 112(b) rejection has been made, necessitated by amendment.
Applicant's arguments filed 2/16/2026 regarding the 35 U.S.C. 101 rejection of claims 1-20 have been fully considered but they are not persuasive. Applicant argues, see page 21, "At least the above underlined features of amended claim 1 recite significantly more than generic computing. For example, as recited in claim 1, program instruction to generate a feature mapping module configured to map local input features of asset data of the plurality of client into the set of canonical input features; and program instruction to generate a local version of the seed model using the map of the local input features of the asset data, go beyond generic computing as specifying feature mapping and a set of canonical input features."

Examiner respectfully disagrees. Regarding the first limitation, "map local input features of asset data of the plurality of client into the set of canonical input features" can be practically performed in the human mind, and is thus the abstract idea of a mental process. For example, one could map the features by figuring out which features in the asset data correspond to the canonical input features. "Program instruction to generate a feature mapping module configured to" is mere instructions to apply an exception, as the feature mapping module does not have any additional elements that go beyond general computing. Regarding the second limitation, "program instruction to generate a local version of the seed model using the map of the local input features of the asset data", generating a local version of the model is a generic process in federated learning, which would amount to mere instructions to apply an exception. Using the mapping of the features is a generic process of applying a computer-implemented function, which also amounts to mere instructions to apply an exception.

Applicant further argues, "The invention provides a practical application by improving computer technology used in analytical modeling using federated learning.
In the present invention, the augmented model benefits from the aggregate feature input data from all clients in the domain without having to share feature data considered proprietary or for internal use by clients (specification, paragraph 22; also paragraphs 35, 50, 59). Thereby, the invention improves federated learning by enabling heterogeneous client models with incompatible feature schemas to be trained, federated, and reused as a single augmented model without sharing client data."

MPEP 2106.05(a) states "An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration. It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception. See MPEP § 2106.04(d) (discussing Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303-04, 125 USPQ2d 1282, 1285-87 (Fed. Cir. 2018)).
Thus, it is important for examiners to analyze the claim as a whole when determining whether the claim provides an improvement to the functioning of computers or an improvement to other technology or technical field."

The cited claim limitation, "map local input features of asset data of the plurality of client into the set of canonical input features", as analyzed with regard to the previous argument, is an abstract idea and cannot provide the improvement. The remainder of the cited claim limitation, "program instruction to generate a feature mapping module configured to", is mere instructions to apply an exception, as analyzed with regard to the previous argument, and does not cover a particular solution to the problem. Regarding the second cited limitation, "program instruction to generate a local version of the seed model using the map of the local input features of the asset data" is mere instructions to apply an exception, as analyzed with regard to the previous argument, and does not cover a particular solution to the problem. Therefore, the claim does not represent an improvement, and is not directed to patent eligible subject matter. See amended 35 U.S.C. 101 rejection below.

Applicant's arguments filed 2/16/2026 regarding the 35 U.S.C. 103 rejection of claims 1-20 have been fully considered but they are not persuasive. Applicant argues that the newly amended features added to claim 1 are not taught by the cited references. Examiner respectfully disagrees, and has found the amended features in the cited references. See amended 35 U.S.C. 103 rejection below.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
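As a technical aside on the dispute above (an illustration only, not the claimed implementation, which the claims and specification define), the kind of canonical feature mapping at issue can be sketched in a few lines. All feature names and rules below are hypothetical:

```python
# Illustrative sketch of a "feature mapping module": translating a client's
# local feature names in an asset-data record onto a canonical input schema.
# Names are invented for illustration; nothing here is taken from the claims
# or from the Verma reference.

canonical_features = ["temperature_c", "vibration_hz", "runtime_hours"]

# Per-client mapping rules: local feature name -> canonical feature name.
client_rules = {
    "temp_f": "temperature_c",
    "vib": "vibration_hz",
    "hours_on": "runtime_hours",
}

def to_canonical(local_record: dict) -> dict:
    """Map a client's local asset-data record onto the canonical input features."""
    out = {}
    for local_name, value in local_record.items():
        canon = client_rules.get(local_name)
        if canon in canonical_features:
            out[canon] = value
    return out

record = to_canonical({"temp_f": 98.6, "vib": 12.0, "hours_on": 400})
# record == {"temperature_c": 98.6, "vibration_hz": 12.0, "runtime_hours": 400}
```

Note this toy version only renames features; a real mapping module would also convert units and data types, which is part of what the §101 arguments above contest.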
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-6, 8, and 13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 and 8 of U.S. Patent No. 11,410,037 B2 in view of Verma ("Federated AI for the Enterprise: A Web Services based Implementation", July 8, 2019). See table below.

Instant Application | U.S. Patent No. 11,410,037 B2 | Verma

1. A computer program product for generating an artificial intelligence (AI) model, the computer program product comprising: at least one computer readable storage medium, and program instructions stored on the at least one computer readable storage medium, the program instructions comprising:

(The difference is that claim 1 of the instant application is directed to a program product with program instructions to perform the method recited in claim 1 of U.S. Patent No. 11,410,037 B2. It would be obvious to a person of ordinary skill in the art that the method of claim 1 of U.S. Patent No. 11,410,037 B2 would be implemented in the form of program instructions in a computer readable medium as described in claim 1 of the instant application in order for a processor to perform the recited method of claim 1 in U.S. Patent No. 11,410,037 B2.
Hereinafter, this is considered to be the explanation for the difference in "program instructions to" and "by the one or more processors".)

1. A method for generating an artificial intelligence (AI) model, the method comprising:

program instructions to receive information associated with respective existing models from a plurality of clients;
receiving, by the one or more processors, information associated with respective existing models from a plurality of clients;

program instructions to group the respective existing models from the plurality of clients into domains based on the received information of the respective existing models;
grouping, by the one or more processors, the respective existing models from the plurality of clients into domains based on the received information of the respective existing models;

program instructions to send a seed model to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain, the seed model specifying a set of canonical input features;
sending, by the one or more processors, a seed model to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain;

(Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g.
convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." Therefore, the training data schema is interpreted as the input features, meaning that the canonical schema has canonical input features.)

program instructions to receive confirmation of mapping feature data of the respective existing models of the first set of clients to a canonical schema of the seed model, using a schema transformation service enabling the first set of clients to generate rules mapping local client data into canonical input formats compatible with the seed model received by the first set of clients;
receiving, by the one or more processors, confirmation of mapping feature data of the respective existing models of the first set of clients to a canonical schema of the seed model;

Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies.
The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema." As the seed model is the first training site, and the common schema is from the first training site, the transformation policies, interpreted as the rules, are generated to map local client data (the training service) into canonical input formats compatible with the seed model. The fusion service is interpreted as the schema transformation service.

program instruction to generate a feature mapping module configured to map local input features of asset data of the plurality of client into the set of canonical input features of the seed model;

(Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." The mapping is interpreted as the feature mapping module. Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g.
convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." The training services are interpreted as the plurality of clients, and their training data is interpreted as the input features. The broadest reasonable interpretation of "asset" is anything owned by the client, meaning that the training data, owned by the client, is asset data. As the data is transformed into the schema of the first model, interpreted as the seed model, the canonical input features are from the seed model.)

program instruction to generate a local version of the seed model using the map of the local input features of the asset data;

Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g.
convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format." This is interpreted as how the model is generated using the map of features. Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Therefore, as the seed model is from the first training site, receiving the fused model is receiving the seed model.

program instructions to send a base model with the canonical schema associated with the seed model, respectively, to the first set of clients;
sending, by the one or more processors, a base model with the canonical schema associated with the seed model, respectively, to the first set of clients;

program instructions to receive from the first set of clients, respectively, the base model that is trained by the feature data of the respective existing models of the first set of clients; and
receiving, by the one or more processors, from the first set of clients, respectively, the base model that is trained by the feature data of the respective existing models of the first set of clients; and

program instructions to generate an augmented model by federation of attributes from the received base model of the first set of clients, trained by the feature data of the respective existing models of the first set of clients, and
generating, by the one or more processors, an augmented model by federation of attributes from the received base models of the first set of clients, trained by the feature data of the
respective existing models of the first set of clients.

program instructions to distribute the augmented model to the plurality of clients.

Page 35 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fused model is interpreted as the augmented model.

2. The computer program product of claim 1, further comprising: program instructions to send the augmented model to respective clients of the first set of clients, wherein a feature mapper of the respective clients of the first set of clients is prepended to the augmented model.
2. The method of claim 1, further comprising: sending, by the one or more processors, the augmented model to respective clients of the first set of clients, wherein a feature mapper of the respective clients of the first set of clients is prepended to the augmented model.

3. The computer program product of claim 1, wherein the feature data of respective clients of the first set of clients is private data that is unshared, remains with a respective client, and is used by the respective client to generate a feature mapper with the seed model and to train the base model.
3. The method of claim 1, wherein the feature data of respective clients of the first set of clients is private data that is unshared, remains with a respective client, and is used by the respective client to generate a feature mapper with the seed model and to train the base model.

4. The computer program product of claim 1, wherein the program instructions to map the feature data of the existing model of a respective client to the canonical schema, further comprises:
4.
The method of claim 1, the mapping of the feature data of the existing model of a respective client to the canonical schema, further comprises:

program instructions to communicate a procedure to generate an algorithm to translate respective features of the feature data of the existing model of the respective client to the canonical schema of the seed model; and
(To do a procedure by a processor, the procedure must be communicated to the processor. Thus, the instant application's claim encompasses the conflicting claim.)
generating, by the one or more processors, an algorithm to translate respective features of the feature data of the existing model of the respective client to the canonical schema of the seed model; and

program instructions to communicate a procedure to apply the feature data of a respective client's asset to the algorithm transforming the feature data to an input feature of the canonical schema of the augmented model.
(See above.)
applying, by the one or more processors, the feature data of a respective client's asset to the algorithm transforming the feature data to an input feature of the canonical schema of the augmented model.

5. The computer program product of claim 1, wherein program instructions to generate the augmented model further comprises:
5. The method of claim 1, wherein generating the augmented model further comprises:

program instructions to perform learning federation techniques on the received trained base models from the respective clients; and
performing, by the one or more processors, learning federation techniques on the received trained base models from the respective clients; and

program instructions to generate a single augmented model including attributes of the base model that is trained and received, respectively, from the first set of clients.
generating, by the one or more processors, a single augmented model including attributes of the base model that is trained and received, respectively, from the first set of clients.

6. The computer program product of claim 1, wherein the canonical schema of the seed model and the base model include one or more input features and at least one output feature.
6. The method of claim 1, wherein the canonical schema of the seed model and the base model include one or more input features and at least one output feature.

8. A computer system for improving a model based on augmenting a plurality of models trained on private feature data, the method comprising: one or more computer processors; at least one computer readable storage medium; and program instructions stored on the at least one computer readable storage medium, the program instructions comprising:

(The difference is that claim 8 of the instant application is directed to a computer system with at least one computer readable storage medium with program instructions to perform the method recited in claim 8 of U.S. Patent No. 11,410,037 B2. It would be obvious to a person of ordinary skill in the art that the method of claim 8 of U.S. Patent No. 11,410,037 B2 would be implemented in the form of program instructions in a computer readable medium in a computer system as described in claim 8 of the instant application in order for a processor to perform the recited method of claim 8 in U.S. Patent No. 11,410,037 B2. Hereinafter, this is considered to be the explanation for the difference in "program instructions to" and "by the one or more processors".)

8.
A method for improving a model based on augmenting a plurality of models trained on private feature data, the method comprising:

program instructions to send an existing model of a first model type to a model augmentation service;
(In order to receive an assignment for an existing model of a model type, the method of the conflicting claim must send the existing model to the service. Both the instant claim and the conflicting claim describe an interaction from the point of view of a client to a model augmentation service. Although the limitation in the conflicting claim does not describe the sender of the assignment of the domain, it is obvious that the sender must be the model augmentation service that has received the existing model.)

program instructions to receive a seed model that includes a canonical schema of data input and output, the seed model specifying a set of canonical input features;
receiving, by the one or more processors, a seed model that includes a canonical schema of data input and output;

(Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11].
Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." Therefore, the training data schema is interpreted as the input features, meaning that the canonical schema has canonical input features.)

program instructions to train the seed model by generating a feature mapper that maps private feature data of the existing model to the canonical schema of the seed model, using a schema transformation service enabling the first set of clients to generate rules mapping local client data into canonical input formats compatible with the seed model received by the first set of clients;
training, by the one or more processors, the seed model by generating a feature mapper that maps private feature data of the existing model to the canonical schema of the seed model;

Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema."
As the seed model is the first training site, and the common schema is from the first training site, the transformation policies, interpreted as the rules, are generated to map local client data (the training service) into canonical input formats compatible with the seed model. The fusion service is interpreted as the schema transformation service.

program instruction to generate a local version of the seed model using the map of the local input features of the asset data;

(Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format." This is interpreted as how the model is generated using the map of features. Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites."
Therefore, as the seed model is from the first training site, receiving the fused model is receiving the seed model, and converting the local training data into a specific format to use with the model is interpreted as generating a local version of the seed model.) program instructions to receive a base model including the canonical schema of the seed model; receiving, by the one or more processors, a base model including the canonical schema of the seed model; Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format.” This is interpreted as how the model is generated using the map of features. Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Therefore, as the seed model is from the first training site, receiving the fused model is receiving the seed model. 
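For readability, the transformation-policy mechanism quoted from Verma (pages 24-25) can be illustrated with a minimal sketch. All function and field names below are hypothetical illustrations, not drawn from Verma, the claims, or the '037 patent; the sketch merely shows a feature mapper built from renaming policies, a toy fusion step that averages per-site weights, and the mapper being prepended to the fused model as a preprocessing step:

```python
# Illustrative sketch only (hypothetical names, not the reference's code):
# a feature mapper built from renaming "policies", a fusion step averaging
# per-site weights, and the mapper prepended to the fused model.

def make_feature_mapper(policies):
    """Build a mapper that renames local feature names to the canonical
    (common) schema and drops features the canonical schema lacks."""
    def mapper(local_record):
        return {canonical: local_record[local]
                for local, canonical in policies.items()
                if local in local_record}
    return mapper

def fuse(site_weights):
    """Toy 'update model' step: the fusion server averages the weights
    reported by each training site and returns the fused weights."""
    n = len(site_weights)
    return {k: sum(w[k] for w in site_weights) / n for k in site_weights[0]}

def prepend(mapper, model):
    """Compose mapper and model so private local data is first translated
    into the canonical input format the fused model expects."""
    return lambda record: model(mapper(record))

# Usage: a site whose local schema names differ from the common schema.
policies = {"temp_f": "temperature", "rpm": "rotation_speed"}
mapper = make_feature_mapper(policies)
canonical_record = mapper({"temp_f": 98.6, "rpm": 1200, "serial": "x1"})
fused_weights = fuse([{"w": 1.0}, {"w": 3.0}])
local_model = prepend(mapper, lambda rec: rec["temperature"] > 50.0)
```

Under this reading, the claimed "rules" correspond to the policies dictionary, and prepending the feature mapper lets a client keep its private local schema while still feeding the canonical-schema model.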
program instructions to train the base model by applying the private feature data of the existing model to the canonical schema of inputs by use of the feature mapper; training, by the one or more processors, the base model by applying the private feature data of the existing model to the canonical schema of inputs by use of the feature mapper; program instructions to send the trained base model to a model augmentation service; (A model that is trained is the same as a trained model.) sending, by the one or more processors, the base model that is trained to a model augmentation service program instructions to receive from the model augmentation service, a single augmented model generated by application of federated learning applied to a plurality of base models of the first model type; and (In the conflicting claim, the augmented model is stated as received in the next limitation. “A single augmented model” is the same as “an augmented model.”) wherein the model augmentation service generates an augmented model by applying federated learning to a plurality of base models; and program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the private feature data of the existing model is applied to the feature mapper prepended to the single augmented model, and (As noted above, a single augmented model is the same as an augmented model. The feature mapper in the conflicting claim is prepended to the augmented base model.) prepending, by the one or more processors, the feature mapper to the augmented base model received from the model augmentation service, wherein the private feature data of the existing model is applied to the feature mapper. program instructions to distribute the augmented model to the plurality of clients. Page 35 states "update model: used during the training phase of the federated learning. 
At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fused model is interpreted as the augmented model. 13. A computer program product for improving a model based on augmenting a plurality of models trained on private feature data, the method comprising: (The difference is that claim 13 of the instant application is directed to a computer program product with at least one computer readable storage medium with program instructions to perform the method recited in claim 8 of U.S. Patent No. 11,410,037 B2. It would be obvious to a person of ordinary skill in the art that the method of claim 8 of U.S. Patent No. 11,410,037 B2 would be implemented in the form of a computer program product as described in claim 13 of the instant application in order for a processor to perform the recited method of claim 8 in U.S. Patent No. 11,410,037 B2. Hereinafter, this is considered to be the explanation for the difference in “program instructions to” and “by the one or more processors”.) 8. A method for improving a model based on augmenting a plurality of models trained on private feature data, the method comprising: The remainder of claim 13 recites substantially similar subject matter to claim 8 and is rejected with the same rationale, mutatis mutandis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine U.S. Patent No. 11,410,037 B2 with the teachings of Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019) because, as Verma states on page 20, "Web Services provide a robust mechanism for the implementation of federated learning in an enterprise context. 
This is because, a web service based architecture provides service mix-and-match capability to federated learning so the learner can seamlessly utilize human-invoked functions along with automated service invocations. Additionally, such frameworks enable the traversal of enterprise firewalls using a protocol that is usually known, trusted and enabled across the enterprise boundaries. Thus, making the distributed learning process secure." Claim 13 recites substantially similar subject matter to claim 8 and is rejected with the same rationale, mutatis mutandis. Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 7 of U.S. Patent No. 11,410,037 B2 in view of Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019) as applied to claim 1 above, and further in view of Hiessl (“Industrial Federated Learning—Requirements and System Design”, 2020). Instant Application U.S. Patent No. 11,410,037 B2 Hiessl 7. The computer program product of claim 1, wherein a domain of the respective existing models includes models performing analysis on the same asset type. 7. The method of claim 1, wherein a domain of the respective existing models includes models performing similar types of analysis. Page 46 states "To this end, we identify the requirement of evaluating models in regards to similarities of asset data influenced by operating and environmental conditions. This is the basis for building FL cohorts of FL tasks using asset data with similar characteristics. FL cohorts enable that FL clients only share updates within a subset of FL clients, whose submitted FL tasks belong to the same FL cohort." Therefore, models are grouped into cohorts (domains) with models using asset data with similar characteristics, interpreted as the asset type. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine U.S. Patent No. 
11,410,037 B2 and Verma with the teachings of Hiessl because, as Hiessl states on page 46, "As discussed in Sect. 3.2, FL client selection plays a role in FL to reduce duration of e.g., training or evaluation [12]. Furthermore, client selection based on evaluation using held-out validation data, can improve accuracy of the global model [1]." Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 8-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 8 and 13 recite the limitation "the first set of clients" in the fifth paragraph of the claims. There is insufficient antecedent basis for this limitation in the claim. Dependent claims 9-12 and 14-20 fail to resolve the issues and are rejected with the same rationales. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. 
MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows. Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). Claims 1-7 are directed to an article of manufacture, claims 8-12 are directed to a machine, and claims 13-20 are directed to an article of manufacture. All claims are directed to a statutory category and analysis continues. Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology. Regarding claim 1, the following claim elements are abstract ideas: group the respective existing models from the plurality of clients into domains based on the received information of the respective existing models; (Grouping models into domains is a mental process of evaluation. One, given data, could group existing models into domains practically in the human mind with the aid of pen and paper.) mapping feature data of the respective existing models of the first set of clients to a canonical schema of the seed model (Mapping data can be practically performed in the human mind with the aid of pen and paper, and therefore is a mental process.) 
generate rules mapping local client data into canonical input formats compatible with the seed model received by the first set of clients; (Generating rules to map data into canonical input formats can be practically performed in the human mind, i.e. deciding which data corresponds to each format. This is a mental process.) map local input features of asset data of the plurality of clients into the set of canonical input features of the seed model; (One could practically map data to another type of the data in the human mind. This is a mental process.) generate an augmented model by federation of attributes (One could, in the human mind, combine multiple attributes to generate a conceptual augmented model. This is a mental process.) The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A computer program product for generating an artificial intelligence (AI) model, the computer program product comprising: (This recites generic computer components and generic computer functions. This is mere instructions to apply an exception. See MPEP § 2106.05(f).) at least one computer readable storage medium, and program instructions stored on the at least one computer readable storage medium, the program instructions comprising: (This recites generic computer components and generic computer functions. This is mere instructions to apply an exception.) program instructions to receive information associated with respective existing models from a plurality of clients; (Program instructions are a generic computer function; this is mere instructions to apply an exception.) program instructions to … (Program instructions are a generic computer function; this is mere instructions to apply an exception.) 
program instructions to send a seed model to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain, the seed model specifying a set of canonical input features; (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception. See MPEP § 2106.05(f)(2). Specifying which data is sent is the insignificant extra-solution activity of selecting a particular type of data to be manipulated. See MPEP § 2106.05(g), ‘Selecting a particular data source or type of data to be manipulated’.) program instructions to receive confirmation of …, using a schema transformation service enabling the first set of clients to (The broadest reasonable interpretation of this limitation is receiving data, which is an existing process. A schema transformation service is a generic implementation of an abstract idea on a computer. This is mere instructions to apply an exception. See MPEP § 2106.05(f)(2).) program instructions to generate a feature mapping module configured to … (The broadest reasonable interpretation of “feature mapping module” is a computer-implemented mapping of one type of data to another, which is a known and generic process on a computer. This amounts to mere instructions to apply an exception.) program instruction to generate a local version of the seed model using the map of the local input features of the asset data; (Generating a local version of the model is a generic process in federated learning. Using the mapping of the features is a generic process of applying a computer-implemented function. This amounts to mere instructions to apply an exception.) program instructions to send a base model with the canonical schema associated with the seed model, respectively, to the first set of clients; (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. 
This is mere instructions to apply an exception.) program instructions to receive from the first set of clients, respectively, the base model that is trained by the feature data of the respective existing models of the first set of clients; (The broadest reasonable interpretation of this limitation is receiving data, which is an existing process. This is mere instructions to apply an exception.) program instructions to … from the received base models of the first set of clients, trained by the feature data of the respective existing models of the first set of clients; and (This recites generic training, which is an existing process on a computer; this is mere instructions to apply an exception.) program instructions to distribute the augmented model to the plurality of clients. (This describes a generic process in federated learning. This amounts to mere instructions to apply an exception.) Regarding claim 2, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: program instructions to send the augmented model to respective clients of the first set of clients, wherein a feature mapper of the respective clients of the first set of clients is prepended to the augmented model. (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) Regarding claim 3, the rejection of claim 1 is incorporated herein. 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the feature data of respective clients of the first set of clients is private data that is unshared, remains with a respective client, (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), “Selecting a particular data source or type of data to be manipulated”.) and is used by the respective client to generate a feature mapper with the seed model and to train the base model. (This recites generic training, which is an existing process on a computer; this is mere instructions to apply an exception.) Regarding claim 4, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: program instructions to communicate a procedure to generate an algorithm to translate respective features of the feature data of the existing model of the respective client to the canonical schema of the seed model; and (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) program instructions to communicate a procedure to apply the feature data of a respective client’s asset to the algorithm transforming the feature data to an input feature of the canonical schema of the augmented model. (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) 
Regarding claim 5, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: program instructions to perform learning federation techniques on the received trained base models from the respective clients; and (This recites generic federated learning, which is an existing process on a computer; this is mere instructions to apply an exception.) program instructions to generate a single augmented model including attributes of the base model that is trained and received, respectively, from the first set of clients. (This recites generic training, which is an existing process on a computer; this is mere instructions to apply an exception.) Regarding claim 6, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the canonical schema of the seed model and the base model include one or more input features and at least one output feature. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), “Selecting a particular data source or type of data to be manipulated”.) Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract idea: wherein a domain of the respective existing models includes models performing analysis on the same asset type. (This is sorting criteria of the grouping performed in claim 1. 
The grouping, as explained above, is an abstract idea, and, as this limitation is a part of the grouping, this limitation is also an abstract idea of a mental process.) Claim 7 does not recite any additional elements. Regarding claim 8, the following are abstract ideas: maps private feature data of the existing model to the canonical schema of the seed model (Mapping data types can be practically performed in the human mind. This is a mental process.) generate rules mapping local client data into canonical input formats compatible with the seed model received by the first set of clients; (Generating rules to map data into canonical input formats can be practically performed in the human mind, i.e. deciding which data corresponds to each format. This is a mental process.) map local input features of asset data of the plurality of clients into the set of canonical input features; (One could practically map data to another type of the data in the human mind. This is a mental process.) applying the private feature data of the existing model to the canonical schema of inputs (Applying a mapping can be practically performed in the human mind. This is a mental process.) The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A computer system for improving a model based on augmenting a plurality of models trained on private feature data, the system comprising: (This recites generic computer components and generic computer functions. This is mere instructions to apply an exception.) one or more computer processors; (This recites generic computer components. This is mere instructions to apply an exception.) at least one computer readable storage medium; and (This recites a generic computer component. This is mere instructions to apply an exception.) 
program instructions stored on the at least one computer readable storage medium, the program instructions comprising: (This recites generic computer components and generic computer functions. This is mere instructions to apply an exception.) program instructions to send an existing model of a first model type to a model augmentation service; (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) program instructions to receive a seed model that includes a canonical schema of data input and output, the seed model specifying a set of canonical input features; (The broadest reasonable interpretation of this limitation is receiving data, which is an existing process. This is mere instructions to apply an exception. See MPEP § 2106.05(f)(2). Specifying which data is sent is the insignificant extra-solution activity of selecting a particular type of data to be manipulated. See MPEP § 2106.05(g), ‘Selecting a particular data source or type of data to be manipulated’.) program instructions to train the seed model by generating a feature mapper that … program instructions to receive confirmation of …, using a schema transformation service enabling the first set of clients to (This recites generic training, which is an existing process on a computer. A schema transformation service is a generic implementation of an abstract idea on a computer. This is mere instructions to apply an exception.) program instruction to generate a local version of the seed model using the map of the local input features of the asset data; (Generating a local version of the model is a generic process in federated learning. Using the mapping of the features is a generic process of applying a computer-implemented function. 
This amounts to mere instructions to apply an exception.) program instructions to send a confirmation of completion of mapping of the private feature data of the existing model to the canonical schema of the seed model to the model augmentation service; (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) program instructions to receive a base model including the canonical schema of the seed model; (The broadest reasonable interpretation of this limitation is receiving data, which is an existing process. This is mere instructions to apply an exception.) program instructions to train the base model by … by use of the feature mapper; (This recites generic training, which is an existing process on a computer; this is mere instructions to apply an exception.) program instructions to send the trained base model to a model augmentation service; (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) program instructions to receive from the model augmentation service, a single augmented model generated by application of federated learning applied to a plurality of base models of the first model type; and (The broadest reasonable interpretation of this limitation is receiving data, which is an existing process. This is mere instructions to apply an exception.) program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the private feature data of the existing model is applied to the feature mapper prepended to the single augmented model, and (This is the insignificant extra-solution activity of sorting information. See MPEP § 2106.05(d), third list, (vi).) program instructions to distribute the augmented model to the plurality of clients. 
(This describes a generic process in federated learning. This amounts to mere instructions to apply an exception.) Regarding claim 9, the rejection of claim 8 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: program instructions to communicate a technique of training the base model by applying the private feature data of the existing model to the canonical schema of the base model by use of the feature mapper. (The broadest reasonable interpretation of this limitation is transmitting data, which is an existing process. This is mere instructions to apply an exception.) Regarding claim 10, the rejection of claim 8 is incorporated herein. Further, claim 10 recites the following abstract idea: wherein a varying number of features of the private feature data of the existing model are mapped to a fixed number of canonical features of the seed model. (Mapping data can be practically performed in the human mind, given the data. This is a mental process.) Regarding claim 11, the rejection of claim 8 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the seed model is used as the base model. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), “Selecting a particular data source or type of data to be manipulated”.) Regarding claim 12, the rejection of claim 8 is incorporated herein. 
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein one or more base models of the plurality of base models used to generate the single augmented model are from historically received base models. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), “Selecting a particular data source or type of data to be manipulated”.) Claims 13 and 14 recite substantially similar subject matter to claims 8 and 9 respectively and are rejected with the same rationale, mutatis mutandis. Regarding claim 15, the rejection of claim 13 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: generate the single augmented model by repeatedly averaging the weights of the plurality of base models of the first model type. (This is the insignificant extra-solution activity of performing repetitive calculations. See MPEP § 2106.05(d)(II), first list, (ii).) Regarding claim 16, the rejection of claim 13 is incorporated herein. Further, claim 16 recites the following abstract ideas: generate an algorithm to translate a feature of the private feature data of the existing model to the canonical schema of the seed model; and (Generating an algorithm can practically be performed in the human mind, i.e. coming up with a procedure to translate data. This is a mental process.) 
apply the private feature data of the existing model to the algorithm transforming a feature of the private feature data of the existing model to an input feature of the canonical schema of the augmented model. (Applying an algorithm can be practically performed in the human mind with the aid of pen and paper, i.e. following a procedure to transform the data. This is a mental process.) Regarding claim 17, the rejection of claim 13 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: generate the feature mapper based on the seed model received; and (The broadest reasonable interpretation of this limitation includes using the machine learning model, which is an existing process on a computer. This is mere instructions to apply an exception.) train the base model using the feature mapper to map the private feature data of the existing model to the canonical schema of the augmented model. (This recites generic training, which is an existing process on a computer; this is mere instructions to apply an exception.) Claims 18 and 19 recite substantially similar subject matter to claims 10 and 12 respectively and are rejected with the same rationale, mutatis mutandis. Regarding claim 20, the rejection of claim 13 is incorporated herein. Further, claim 20 recites the following abstract idea: wherein a domain of the respective existing models includes models performing analysis on the same asset type. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), “Selecting a particular data source or type of data to be manipulated”.) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019), Calo (“Federated Learning for Coalition Operations”, October 14, 2019), and Samek (US 2022/0108177 A1). Regarding claim 1, Verma teaches A computer program product for generating an artificial intelligence (AI) model, the computer program product comprising: (The title states that the paper is a “Web Services based Implementation”. One of ordinary skill in the art would realize that a web service is implemented using a computer program product. The abstract states "In such situations, the enterprise can benefit from the concept of federated learning in which ML models are created at multiple different geographic sites. These are combined together at a federation server without the need to share data.") at least one computer readable storage medium, and program instructions stored on the at least one computer readable storage medium, the program instructions comprising: (One of ordinary skill in the art would realize that, in order to have a web service, a computer readable storage medium must be used and store program instructions to run the web service. Hereinafter, this is considered the explanation for “program instructions to”.) program instructions to receive information associated with respective [models] from a plurality of clients; (Fig. 4, page 25 shows “The invocation of fusion services once training service is called.” The training service, interpreted as a client, uses /report_stats, which, according to page 25, is “used by a training service to provide the aggregate statistics of its local data.” The statistics are interpreted as the information associated with respective models. Fig.
2, page 24 shows that there are a plurality of training services, so there are a plurality of clients, that each use /report_stats to send information to the fusion service, which receives the information.) a seed model …, the seed model specifying a set of canonical input features; (Page 26 states "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." The model of the first site is interpreted as the seed model, as its schema will be used as the canonical schema. Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data."
Therefore, the training data schema is interpreted as the input features, meaning that the canonical schema has canonical input features.) program instructions to receive confirmation of mapping feature data of the respective existing models of the first set of clients to a canonical schema of the seed model, (Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Therefore, as Fig. 4 shows that the /update_model is invoked after /get_policy, which tells the clients how to map the data, receiving the results of local training is receiving confirmation of mapping feature data.) using a schema transformation service enabling the first set of clients to generate rules mapping local client data into canonical input formats compatible with the seed model; (Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. 
The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema." As the seed model is the first training site, and the common schema is from the first training site, the transformation policies, interpreted as the rules, are generated to map local client data (the training service) into canonical input formats compatible with the seed model. The fusion service is interpreted as the schema transformation service.) program instructions to generate a feature mapping module configured to map local input features of asset data of the plurality of clients into the set of canonical input features of the seed model; (Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Therefore, the mapping is interpreted as the feature mapping module. Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g.
convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." The training services are interpreted as the plurality of clients, and their training data is interpreted as the input features. The broadest reasonable interpretation of “asset” is anything owned by the client, meaning that the training data, owned by the client, is asset data. As the seed model is the first training site, and the common schema is from the first training site, mapping local input features into a common schema is mapping local input features into the set of canonical input features of the seed model.) program instructions to generate a local version of the seed model using the map of the local input features of the asset data; (Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g.
convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format.” This is interpreted as how the model is generated using the map of features. Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Therefore, as the seed model is from the first training site, receiving the fused model is receiving the seed model, and converting the local training data into a specific format to use with the model is interpreted as generating a local version of the seed model.) a base model with the canonical schema associated with the seed model (The base model is interpreted to be the seed model. As they are the same, the base model has the canonical schema associated with the seed model.) program instructions to receive from the first set of clients, respectively, [a model] that is trained by the feature data of the respective existing models of the first set of clients; and (Note that in this combination of methods, all clients taught by Verma are interpreted to be in a group (first set of clients), as taught by Samek. Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fusion server receives the results of local training, which is a model.)
program instructions to generate an augmented model by federation of attributes from the received [models] of the first set of clients, trained by the feature data of the respective existing models of the first set of clients; and (As stated above, update_model receives the trained models from the clients and “returns the fused model that comes from integrating the other training sites.” As the models have different attributes, as proven by the need for transformation policies, integrating the data is federation of attributes. Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." Page 24 further states "With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." Therefore, the clients use their own feature data to train the model.) program instructions to distribute the augmented model to the plurality of clients (Page 25 states "update model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fused model is interpreted as the augmented model.)
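For illustration only, the fusion operation Verma describes at the web-services level (the fusion server "returns the fused model that comes from integrating the other training sites") is, in its generic form, an averaging of model weights across clients. The following is a minimal sketch under that assumption, not the paper's actual implementation:

```python
# Generic federated-averaging sketch (an assumed implementation; Verma's
# fusion service is described only at the web-services level, so this is
# not the paper's actual code).
def fuse_models(client_weights):
    """Return the per-parameter mean of the clients' locally trained weights."""
    n = len(client_weights)
    return [sum(params) / n for params in zip(*client_weights)]

# Two clients upload local weights via something like /update_model;
# the fused ("augmented") model is their element-wise average.
print(fuse_models([[0.5, 1.0], [1.5, 3.0]]))
# [1.0, 2.0]
```

Note that such averaging is only well-defined once every client's model shares the common schema, which is why the transformation policies precede fusion in Verma's invocation sequence.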
Verma does not appear to explicitly teach [receiving information associated with] existing models program instructions to group the respective existing models from the plurality of clients into domains based on the received information of the respective existing models; program instructions to send a seed model to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain; program instructions to send [a model], respectively, to the first set of clients; [receiving a model trained by the feature data] [generating an augmented model using] the base model However, Calo—directed to analogous art—teaches [receiving information associated with] existing models (Page 2, “Model Sharing Mode” states "In the model sharing mode shown in Fig. 3, the coalition partners do not share the training data with each other. Instead, they each train their models on the local data they have, and exchange models with each other." Fig. 3 shows that the fusion server receives the models, meaning it receives information associated with the existing models.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo because, as Calo states on page 2, “Model Sharing Mode”, "This mode is useful when the training data sets are too large, or the training data sets can not be exchanged for any reason, e.g. they may reveal sensitive details about the attributes of the equipment used to collect the data. Model sharing may use approaches for fusion of models [5] which can be optimized for performance [6]." 
The combination of Verma and Calo does not appear to explicitly teach [receiving information associated with] existing models program instructions to group the respective existing models from the plurality of clients into domains based on the received information of the respective [models]; program instructions to send [a model] to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain; program instructions to send a base model with the canonical schema associated with the seed model, respectively, to the first set of clients; [receiving] the base model [trained by the feature data] [generating an augmented model using] the base model However, Samek—directed to analogous art—teaches program instructions to group the respective existing models from the plurality of clients into domains based on the received information of the respective existing models; ([0081] states "The apparatus 80 uses these parameterization updates 90 in order to perform Federated Learning of the neural network depending on similarities between these parameterization updates 90. In particular, as illustrated in FIG. 7, the parameterization updates 90, are subject to a similarity determination 92 yielding the similarities between the parameterization updates 90 and depending on these similarities a Federated Learning 94 of the neural network is performed. Similarity determination 92 and Federated Learning 94 are performed by the processor 84." The similarity determination is based on the update. [0082] states "Based on the similarities thus represented by the correlation matrix 96, the parameterization updates and thus, the clients 14, relating thereto, are clustered so as to form client groups." Client groups are interpreted as domains.)
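For illustration only, the grouping Samek describes, clustering clients by the similarity of their parameterization updates, can be sketched generically as follows. Cosine similarity and the greedy threshold grouping are assumptions for the sketch; Samek's correlation matrix 96 and exact clustering method are not reproduced here.

```python
import math

# Hypothetical sketch: cluster clients into groups ("domains") by the
# similarity of their parameterization updates. The similarity measure
# and the greedy grouping rule are generic assumptions.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def group_clients(updates, threshold=0.9):
    """Place each client in the first group whose representative update is
    similar enough; otherwise open a new group."""
    groups, reps = [], []
    for i, u in enumerate(updates):
        for members, rep in zip(groups, reps):
            if cosine(u, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append([i])
            reps.append(u)
    return groups

# Clients 0 and 1 send near-parallel updates; client 2 is dissimilar.
print(group_clients([[1.0, 0.0], [0.98, 0.05], [0.0, 1.0]]))
# [[0, 1], [2]]
```

This makes concrete the mapped claim reading: updates that point in similar directions imply similar local training data, so the clients sending them are placed in the same group.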
program instructions to send a seed model to a first set of clients of the plurality of clients that correspond to existing models that are grouped into a first domain; ([0135] states "Each client 14 is provided with the parametrization Pj of the cluster it belongs to." [0056] states "The clients 14 are not only able to parameterize an internal instantiation of the neural network 16 accordingly, i.e., according to this setting, but the clients 14 are also able to train this neural network 16 thus parametrized using training data available to the respective client." As the clients already have an instantiation of a neural network, the parameterization allows the client to have the same model as the server, effectively sending a seed model.) program instructions to send a base model, respectively, to the first set of clients; (As above, the seed model is interpreted to be the base model. Therefore, the above explanation is also the explanation for this limitation.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo with the teachings of Samek because, as Samek states in [0012], "In accordance with the first aspect of the present application, the acceptance of Federated Learning difficulties is overcome based on the insight that parameterization updates suffice to deduce similarities between local training data resources." Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, Verma teaches program instructions to send the augmented model to respective clients of the first set of clients, wherein a feature mapper of the respective clients of the first set of clients is prepended to the augmented model. (In Fig. 4, the invocation of services is shown. /get_policy, which is "called to get the transformation policies from the Fusion Server.
The Fusion Server controls how the data from different training sites ought to be transformed so that it can correspond to a common schema and format for input training data. The policy may also indicate an approach to change the labels to get the canonical data for training" (page 25) is invoked before the fused model (augmented model) is returned by /update_model, which is "used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites" (page 25). Page 26 states "A third type of policy that needs to be generated is the policy scheme that maps different labels in the output corpus among each other, i.e. to deal with the situation where the output labels are different at different sites." This policy is interpreted as a feature mapper, which will be sent to the clients before the augmented model.) Regarding claim 3, the rejection of claim 1 is incorporated herein. Verma teaches wherein the feature data of respective clients of the first set of clients is private data that is unshared, remains with a respective client, and is used by the respective client to generate a feature mapper with the seed model and to train the base model. (Page 23, section F states "Data collected at different clinical groups may be stored separately depending on the type of access controls required by the system. An AI model in these cases would be run by the research organization. Yet the research organization would not be allowed to access the raw data. One way to address these requirements would be to transform the data to hide any personal information before creating an AI model from it. However, implementing a variation of Fusion AI, which would be able to process raw data at each of the clinical site, is more likely to provide a more accurate model."
Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." As the schema is associated with the seed model, using the schema to generate the feature mapper will be using the seed model to generate the feature mapper.) Regarding claim 4, the rejection of claim 1 is incorporated herein. Verma teaches program instructions to communicate a procedure to generate an algorithm to translate respective features of the feature data of the existing model of the respective client to the canonical schema of the seed model; and (Page 26 states "Another type of policy that needs to be generated is that for converting different features that may be called different names at different sites. These policies would be of the format if col. name = xyz, then rename it as abc. The mapping from the different column names of each site to a canonical format needs to be determined." Policies are interpreted as algorithms. The model in the first training site is again interpreted as the seed model because, as page 26 states, "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used.” In order to perform the procedure, the procedure must be communicated by the processor. Thus, the communication is inherent.) program instructions to communicate a procedure to apply the feature data of a respective client’s asset to the algorithm transforming the feature data to an input feature of the canonical schema of the augmented model. (Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format."
One of ordinary skill in the art would realize that the “specific format” is the canonical schema. Page 24 further states "The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data. The control information may also contain information about batch-sizes and number of iterations the training service may need in order to conduct a successful model training exercise. With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." In order for the fused model to be trained, the transformed feature data must be input to the fused model, and the transformed feature data must match the schema of the fused model, interpreted as the augmented model.) Regarding claim 5, the rejection of claim 1 is incorporated herein. Verma teaches program instructions to perform learning federation techniques [on models from the respective clients]; and (Page 26 states "In order to deal with schema differences, the data available at each of the sites needs to be converted into a single common format over which federated learning can be performed." Page 25 states that /update_model is "used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites.") program instructions to generate a single augmented model including attributes of the base model that is trained and received, respectively, from the first set of clients. (As stated above, the fused model is interpreted as the single augmented model. 
Page 26 states "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." The first site model is interpreted as the base model; as the schema of the base model is used as the common schema, the augmented model will have the attributes of the base model.) Verma does not appear to explicitly teach [performing learning federation techniques] on the received trained base models from the respective clients However, Samek—directed to analogous art—teaches [performing learning federation techniques] on the received trained base models from the respective clients ([0135] states "Each client 14 is provided with the parametrization Pj of the cluster it belongs to." [0056] states "The clients 14 are not only able to parameterize an internal instantiation of the neural network 16 accordingly, i.e., according to this setting, but the clients 14 are also able to train this neural network 16 thus parametrized using training data available to the respective client." As the clients already have an instantiation of a neural network, the parameterization allows the client to have the same model as the server, meaning that they have a base model. Therefore, the parameterization is interpreted as the base model. [0135] further states "The clients 14 receive same at 32, the clients 14 decrypt it at 154; update their own version of the cluster specific parametrization Pj at 34 using the downloaded difference signal, whereupon the clients perform the local training at 36 to update their local parametrization which they then encrypt at 152 and upload at 38. At server/apparatus side, the updates 104 are gathered at 172, merged at 36 and the updated parametrizations Pj are broadcast at 32." Merging the updates is interpreted as learning federation techniques.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Samek for the reasons given above with regard to claim 1. Regarding claim 6, the rejection of claim 1 is incorporated herein. Verma teaches wherein the canonical schema of the seed model and the base model include one or more input features and at least one output feature. (Page 26 states "Another type of policy that needs to be generated is that for converting different features that may be called different names at different sites. These policies would be of the format if col. name = xyz, then rename it as abc." This is interpreted as the input features, which are in a schema because, as page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used." Page 26 further states "A third type of policy that needs to be generated is the policy scheme that maps different labels in the output corpus among each other, i.e. to deal with the situation where the output labels are different at different sites. In order to deal with this approach, an AI model that is trained for classification into the labels defined for the first training site is used. Each site uses this model to classify its own feature data, and compares the label provided by the model with the original label, creating a matrix that counts the output labels of the common model against the labels of the original data. A policy that matches the label in the original data to the most frequent label output by the model is generated." The labels of the common model are interpreted as the output features of the schema, as the schema is based on the common model and the other labels are mapped to the common labels.)
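For illustration only, the third policy type quoted above, counting the common model's output labels against a site's original labels and mapping each original label to the most frequent model label, can be sketched as follows. The function name and sample labels are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Sketch of Verma's third policy type: build the matrix of counts of the
# common model's output labels against a site's original labels, then map
# each original label to the model label it most frequently co-occurs with.
# (Function name and sample labels are illustrative assumptions.)
def build_label_policy(original_labels, model_labels):
    counts = defaultdict(Counter)
    for orig, pred in zip(original_labels, model_labels):
        counts[orig][pred] += 1  # one cell of the count matrix
    return {orig: c.most_common(1)[0][0] for orig, c in counts.items()}

orig = ["ok", "ok", "fault", "fault", "fault"]
pred = ["normal", "normal", "anomaly", "anomaly", "normal"]
print(build_label_policy(orig, pred))
# {'ok': 'normal', 'fault': 'anomaly'}
```

The resulting dictionary is the "policy that matches the label in the original data to the most frequent label output by the model", i.e., the mapping of a site's output labels onto the common model's labels.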
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019), Calo (“Federated Learning for Coalition Operations”, October 14, 2019), and Samek (US 2022/0108177 A1) as applied to claim 1 above, further in view of Hiessl (“Industrial Federated Learning—Requirements and System Design”, 2020). Regarding claim 7, the rejection of claim 1 is incorporated herein. Verma does not appear to explicitly teach wherein a domain of the respective existing models includes models performing analysis on the same asset type. However, Hiessl—directed to analogous art—teaches wherein a domain of the respective existing models includes models performing analysis on the same asset type. (Page 46 states "To this end, we identify the requirement of evaluating models in regards to similarities of asset data influenced by operating and environmental conditions. This is the basis for building FL cohorts of FL tasks using asset data with similar characteristics. FL cohorts enable that FL clients only share updates within a subset of FL clients, whose submitted FL tasks belong to the same FL cohort." Therefore, models are grouped into cohorts (domains) with models using asset data with similar characteristics, interpreted as the asset type.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma, Calo, and Samek with the teachings of Hiessl because, as Hiessl states on page 46, "As discussed in Sect. 3.2, FL client selection plays a role in FL to reduce duration of e.g., training or evaluation [12]. Furthermore, client selection based on evaluation using held-out validation data, can improve accuracy of the global model [1]." Claim(s) 8-19 is/are rejected under 35 U.S.C.
103 as being unpatentable over Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019), Calo (“Federated Learning for Coalition Operations”, October 14, 2019), Samek (US 2022/0108177 A1), and Choudhury (“Anonymizing Data for Privacy-Preserving Federated Learning”, February 2020). Regarding claim 8, Verma teaches A computer system for improving a model based on augmenting a plurality of models trained on private feature data, the method comprising: (The title states that the paper is a “Web Services based Implementation”. One of ordinary skill in the art would realize that a web service is implemented using a computer program product. The abstract states "In such situations, the enterprise can benefit from the concept of federated learning in which ML models are created at multiple different geographic sites. These are combined together at a federation server without the need to share data.") one or more computer processors; (One of ordinary skill in the art would realize that, in order to have a web service, a processor must be used in order to execute the instructions on the computer readable medium.) at least one computer readable storage medium; and program instructions stored on the at least one computer readable storage medium, the program instructions comprising: (One of ordinary skill in the art would realize that, in order to have a web service, a computer readable storage medium must be used and store program instructions to run the web service. Hereinafter, this is considered the explanation for “program instructions to”.) a seed model that includes a canonical schema of data input and output, the seed model specifying a set of canonical input features; (Page 26 states "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used.
Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." The model of the first site is interpreted as the seed model, as its schema will be used as the canonical schema. Page 26 states "Another type of policy that needs to be generated is that for converting different features that may be called different names at different sites. These policies would be of the format if col. name = xyz, then rename it as abc." This is interpreted as the input features, which are in a schema because, as page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used." Page 26 further states "A third type of policy that needs to be generated is the policy scheme that maps different labels in the output corpus among each other, i.e. to deal with the situation where the output labels are different at different sites. In order to deal with this approach, an AI model that is trained for classification into the labels defined for the first training site is used. Each site uses this model to classify its own feature data, and compares the label provided by the model with the original label, creating a matrix that counts the output labels of the common model against the labels of the original data. A policy that matches the label in the original data to the most frequent label output by the model is generated." The labels of the common model are interpreted as the output features of the schema, as the schema is based on the common model and the other labels are mapped to the common labels. Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies.
The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data." Therefore, the training data schema is interpreted as the input features, meaning that the canonical schema has canonical input features.) program instructions to train the seed model by generating a feature mapper that maps [feature data] of the existing model to the canonical schema of the seed model. (Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used.
Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema.") using a schema transformation service enabling the first set of clients to generate rules mapping local client data into canonical input formats compatible with the seed model; (Page 26 states "The mapping of the features is done initially by comparing the feature names provided in the schema of two different sites. By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. The goal of these generated policies is to get data from each of the different training services into a common schema." As the seed model is the first training site, and the common schema is from the first training site, the transformation policies, interpreted as the rules, are generated to map local client data (the training service) into canonical input formats compatible with the seed model. The fusion service is interpreted as the schema transformation service.) program instructions to generate a local version of the seed model using a map of the local input features of the asset data; (Page 24 states "On the receipt of the control information, each training service contacts the fusion service to get a set of transformation policies. The fusion server compares the data it has locally with the provided statistics, and uses it to generate a set of policies for transformation of data at the training service. 
The goal of these generated policies is to get data from each of the different training services into a common schema. The policies that will be sent to the training service will include instructions for changing the type of the raw data (e.g. convert .jpg images into .png, or convert .avi sound files into .wav files etc.), relabeling the features to a common set of names, and relabeling the output label values into a different common set. The algorithms for generating these policies are described in more detail in [11]. Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format.” This is interpreted as how the model is generated using the map of features. Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Therefore, as the seed model is from the first training site, receiving the fused model is receiving the seed model.) program instructions to send a confirmation of completion of mapping of the private feature data of the existing model to the canonical schema of the seed model to the model augmentation service; (It would have been obvious to one of ordinary skill in the art for the program to send confirmation of the completion of this step in order to correctly proceed with the federated learning process.) a base model including the canonical schema of the seed model; (The base model is interpreted to be the seed model. As they are the same, the base model has the canonical schema associated with the seed model.)
program instructions to train [a model] by applying the private feature data of the existing model to the canonical schema of inputs by use of the feature mapper; (Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format." Page 24 further states "With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Page 23, section F states "Data collected at different clinical groups may be stored separately depending on the type of access controls required by the system. An AI model in these cases would be run by the research organization. Yet the research organization would not be allowed to access the raw data. One way to address these requirements would be to transform the data to hide any personal information before creating an AI model from it. However, implementing a variation of Fusion AI, which would be able to process raw data at each of the clinical site, is more likely to provide a more accurate model.") program instructions to send the trained [model] to a model augmentation service; (Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fusion server receives the results of local training, which is a model.) 
program instructions to receive from the model augmentation service, a single augmented model generated by application of federated learning applied to a plurality of [models]; and (Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." Integrating the other training sites into a fused model is federated learning. As the server returns the fused model, interpreted as the augmented model, the client receives the model.) program instructions to distribute the augmented model to the plurality of clients (Page 25 states "update_model: used during the training phase of the federated learning. At each invocation of this interface, the fusion server receives the results of local training at each site, and returns the fused model that comes from integrating the other training sites." The fused model is interpreted as the augmented model.) Verma does not appear to explicitly teach program instructions to send an existing model of a first model type to a model augmentation service; program instructions to receive a seed model; program instructions to receive a base model including the canonical schema of the seed model; [training] the base model [and sending the model to a model augmentation service]; program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the private feature data of the existing model is applied to the feature mapper prepended to the single augmented model. However, Calo—directed to analogous art—teaches program instructions to send an existing model of a first type to a model augmentation service; (Page 2, “Model Sharing Mode” states "In the model sharing mode shown in Fig. 3, the coalition partners do not share the training data with each other.
Instead, they each train their models on the local data they have, and exchange models with each other." Fig. 3 shows that the fusion server receives the models, meaning it receives information associated with the existing models. The fusion server combines models (section VII. Federated Learning in Model Sharing Mode). All models have a type.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo because, as Calo states on page 2, “Model Sharing Mode”, "This mode is useful when the training data sets are too large, or the training data sets can not be exchanged for any reason, e.g. they may reveal sensitive details about the attributes of the equipment used to collect the data. Model sharing may use approaches for fusion of models [5] which can be optimized for performance [6]." The combination of Verma and Calo does not appear to explicitly teach program instructions to receive a seed model; private feature data; program instructions to receive a base model; [training] the base model [and sending the model to a model augmentation service]; program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the [feature data] of the existing model is applied to the feature mapper prepended to the single augmented model. However, Samek—directed to analogous art—teaches program instructions to receive a seed model ([0135] states "Each client 14 is provided with the parametrization Pj of the cluster it belongs to." [0056] states "The clients 14 are not only able to parameterize an internal instantiation of the neural network 16 accordingly, i.e., according to this setting, but the clients 14 are also able to train this neural network 16 thus parametrized using training data available to the respective client."
As the clients already have an instantiation of a neural network, the parameterization allows the client to have the same model as the server, effectively sending a seed model to the client, which receives the seed model.) program instructions to receive a base model (As above, the seed model is interpreted to be the base model. Therefore, the above explanation is also the explanation for this limitation.) [training] the base model [and sending the model to a model augmentation service] ([0080] states "That is, each client 14, trains the parameterization P0 received on the basis of its local training data 88i to yield a locally trained or adapted parameterization Pti; and sends as the parameterization update merely a difference between this locally trained parameterization Pti and the initial parameterization P0, namely ΔPi, back to apparatus 80." The apparatus, according to [0081], performs federated learning, and is therefore interpreted as the model augmentation service.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo with the teachings of Samek because, as Samek states in [0012], "In accordance with the first aspect of the present application, the acceptance of Federated Learning difficulties is overcome based on the insight that parameterization updates suffice to deduce similarities between local training data resources." The combination of Verma, Calo, and Samek does not appear to explicitly teach program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the [feature data] of the existing model is applied to the feature mapper prepended to the single augmented model.
However, Choudhury—directed to analogous art—teaches program instructions to prepend the feature mapper to the single augmented base model received from the model augmentation service, wherein the [feature data] of the existing model is applied to the feature mapper prepended to the single augmented model. (The caption of Fig. 2 states "When the aggregator server (or site) receives a new dataset (DT), the samples are mapped to an appropriate equivalence class prior to using the federated model for predictive analysis." The mapping is interpreted as the feature mapper. The trained global FL model is interpreted as the augmented model. Page 5, section 3.6 states "After training the FL model we can use it to perform predictions on new test data, which can be received at the server or at the local sites. The new data samples are in the form of the original data, while the FL model has been trained on anonymized data. As a result, we need to map each new sample to its most similar equivalence class from M, which is known to the global model." As the mapping is applied before using the data on the model, the mapping is prepended.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma, Calo, and Samek with the teachings of Choudhury because, as Choudhury states on page 5, section 3.6, "After training the FL model we can use it to perform predictions on new test data, which can be received at the server or at the local sites. The new data samples are in the form of the original data, while the FL model has been trained on anonymized data. As a result, we need to map each new sample to its most similar equivalence class from M, which is known to the global model.") Regarding claim 9, the rejection of claim 8 is incorporated herein.
Verma teaches program instructions to communicate a technique of training the base model by applying the private feature data of the existing model to the canonical schema of the base model by use of the feature mapper. (Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format. The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data. The control information may also contain information about batch-sizes and number of iterations the training service may need in order to conduct a successful model training exercise. With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." One of ordinary skill in the art would realize that the specific format is the canonical schema of the base model, as page 26 states "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used. Each subsequent site that joins the training session needs to use a set of policies to translate its local schema to that of the common schema." In order for a processor to perform the technique, the technique must be communicated to the processor, and the communication is thus inherent.) Regarding claim 10, the rejection of claim 8 is incorporated herein. Verma teaches wherein a varying number of features of the private feature data of the existing model are mapped to a fixed number of canonical features of the seed model. 
(Page 26 states "The initial feature name mapping is done by matching each of the names to a hashed vector from a corpus of documents, with each feature name being mapped to the name in the common schema that is the closest in the hashed vector representation. For the case where a sample is available, a document hashing approach [14] is used to calculate the closest vector." One of ordinary skill would realize that, when executed, this procedure will cause a varying number of features to be mapped to the fixed number of schema features, as several feature names may be closest to a feature name in the schema, interpreted as the canonical features of the seed model.) Regarding claim 11, the rejection of claim 8 is incorporated herein. Verma teaches wherein the seed model is used as the base model. (The base model is interpreted to be the seed model. As they are the same, the base model has the canonical schema associated with the seed model.) Regarding claim 12, the rejection of claim 8 is incorporated herein. The combination of Verma and Calo does not appear to explicitly teach wherein one or more base models of the plurality of base models used to generate the single augmented model are from historically received base models. However, Samek—directed to analogous art—teaches wherein one or more base models of the plurality of base models used to generate the single augmented model are from historically received base models having a similar first model type. ([0082] states "Here, mutual similarity between the parameterization updates manifests itself in a correlation matrix 96, the components Cij of which indicate the similarity between the parameterization updates 90i and 90j, respectively." [0082] further states "Based on the similarities thus represented by the correlation matrix 96, the parameterization updates and thus, the clients 14, relating thereto, are clustered so as to form client groups."
As the client groups are clustered by similarities in their updates, the client groups (domains) include models having a similar first model type. In order for a group to be formed (a group consisting of two or more models), there must be historically received base models in the group.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo with the teachings of Samek for the reasons given above with regard to claim 8. Claims 13 and 14 recite substantially similar subject matter to claims 8 and 9 respectively and are rejected with the same rationale, mutatis mutandis. Regarding claim 15, the rejection of claim 13 is incorporated herein. The combination of Verma and Calo does not appear to explicitly teach generate the single augmented model by repeatedly averaging the weights of the plurality of base models of the first model type. However, Samek—directed to analogous art—teaches generate the single augmented model by repeatedly averaging the weights of the plurality of base models of the first model type. ([0058] states "In step 38, the server 12 then merges all the parameterization updates received from the clients 14, the merging representing a kind of averaging such as by use of a weighted average with the weights considering, for instance, the amount of training data using which the parameterization updates were determined in step 34. The parameterization update thus obtained at step 38 at this end of cycle i indicates the parameterization setting for the download 32 at the beginning of the subsequent cycle i + 1." As there is a cycle, the step is repeated. As above, the parameterizations are interpreted as the base models.)
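The repeated merging that Samek's [0058] describes, a weighted average of client updates performed once per training cycle, is essentially federated averaging. A minimal sketch follows; the function names and the simple vector parameterization are illustrative assumptions, not details from the reference.

```python
import numpy as np

def federated_average(updates, sample_counts):
    """One merge step: average client parameter vectors, weighted by
    each client's amount of training data (the kind of weighted
    averaging described in Samek's [0058])."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * np.asarray(u, dtype=float) for w, u in zip(weights, updates))

def run_rounds(global_params, local_train, sample_counts, rounds=3):
    """Repeat the cycle: distribute the parameterization, train
    locally at each client, then merge, as in Samek's cycles."""
    for _ in range(rounds):
        updates = [local_train(global_params, i) for i in range(len(sample_counts))]
        global_params = federated_average(updates, sample_counts)
    return global_params

# Two clients: weights 1/4 and 3/4 give 0.25*0 + 0.75*2 = 1.5 per component.
merged = federated_average([[0.0, 0.0], [2.0, 2.0]], [1, 3])
# merged ≈ [1.5, 1.5]
```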
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo with the teachings of Samek for the reasons given above with regard to claim 8. Regarding claim 16, the rejection of claim 13 is incorporated herein. Verma teaches generate an algorithm to translate a feature of the private feature data of the existing model to the canonical schema of the seed model; and (Page 26 states "Another type of policy that needs to be generated is that for converting different features that may be called different names at different sites. These policies would be of the format if col. name = xyz, then rename it as abc. The mapping from the different column names of each site to a canonical format needs to be determined." Policies are interpreted as algorithms. The model in the first training site is again interpreted as the seed model because, as page 26 states, "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used.”) apply the private feature data of the existing model to the algorithm transforming a feature of the private feature data of the existing model to an input feature of the canonical schema of the augmented model. (Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format." One of ordinary skill in the art would realize that the “specific format” is the canonical schema. Page 24 further states "The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g. in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data.
The control information may also contain information about batch-sizes and number of iterations the training service may need in order to conduct a successful model training exercise. With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." In order for the fused model to be trained, the transformed feature data must be input to the fused model, and the transformed feature data must match the schema of the fused model, interpreted as the augmented model.) Regarding claim 17, the rejection of claim 13 is incorporated herein. Verma teaches generate the feature mapper based on [the seed model]; and (Page 26 states "Another type of policy that needs to be generated is that for converting different features that may be called different names at different sites. These policies would be of the format if col. name = xyz, then rename it as abc. The mapping from the different column names of each site to a canonical format needs to be determined." The model in the first training site is again interpreted as the seed model because, as page 26 states, "By default, we assume that the schema of the first site joining into a training session is considered the common schema to be used.” As the feature mapper is determined using the schema of the seed model, the feature mapper is based on the seed model.) train the base model using the feature mapper to map the private feature data of the existing model to the canonical schema of the augmented model. (Page 24 states "Upon receipt of the policies, the training service uses the policies to convert the local training data into a specific format." One of ordinary skill in the art would realize that the “specific format” is the canonical schema. Page 24 further states "The previously received control information instructs the training server about the operations it should conduct before starting the fusion, e.g.
in some types of fusion processes it may need to send a small sample of its data set or a generator for representative synthetic data. The control information may also contain information about batch-sizes and number of iterations the training service may need in order to conduct a successful model training exercise. With the receipt of the control parameters, the training service goes through the training stage, working with the fusion service in the fusion stage." In order for the fused model to be trained, the transformed feature data must be input to the fused model, and the transformed feature data must match the schema of the fused model, interpreted as the augmented model.) The combination of Verma and Calo does not appear to explicitly teach the seed model received. However, Samek—directed to analogous art—teaches the seed model received ([0135] states "Each client 14 is provided with the parametrization Pj of the cluster it belongs to." [0056] states "The clients 14 are not only able to parameterize an internal instantiation of the neural network 16 accordingly, i.e., according to this setting, but the clients 14 are also able to train this neural network 16 thus parametrized using training data available to the respective client." As the clients already have an instantiation of a neural network, the parameterization allows the client to have the same model as the server, effectively sending a seed model to the client, which receives the seed model.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma and Calo with the teachings of Samek for the reasons given above with regard to claim 8. Claims 18 and 19 recite substantially similar subject matter to claims 10 and 12 respectively and are rejected with the same rationale, mutatis mutandis. Claim(s) 20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Verma (“Federated AI for the Enterprise: A Web Services based Implementation”, July 8, 2019), Calo (“Federated Learning for Coalition Operations”, October 14, 2019), Samek (US 2022/0108177 A1), and Choudhury (“Anonymizing Data for Privacy-Preserving Federated Learning”, February 2020) as applied to claim 13 above, further in view of Hiessl (“Industrial Federated Learning—Requirements and System Design”, 2020). Regarding claim 20, the rejection of claim 13 is incorporated herein. The combination of Verma, Calo, Samek, and Choudhury does not appear to explicitly teach wherein a domain of the respective existing models includes models performing analysis on the same asset type. However, Hiessl—directed to analogous art—teaches wherein a domain of the respective existing models includes models performing analysis on the same asset type. (Page 46 states "To this end, we identify the requirement of evaluating models in regards to similarities of asset data influenced by operating and environmental conditions. This is the basis for building FL cohorts of FL tasks using asset data with similar characteristics. FL cohorts enable that FL clients only share updates within a subset of FL clients, whose submitted FL tasks belong to the same FL cohort." Therefore, models are grouped into cohorts (domains) with models using asset data with similar characteristics, interpreted as the asset type.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Verma, Calo, Samek, and Choudhury with the teachings of Hiessl because, as Hiessl states on page 46, "As discussed in Sect. 3.2, FL client selection plays a role in FL to reduce duration of e.g., training or evaluation [12]. Furthermore, client selection based on evaluation using held-out validation data, can improve accuracy of the global model [1]."
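The cohort grouping Hiessl describes, where clients exchange updates only within a subset whose FL tasks use asset data with similar characteristics, can be sketched as a simple grouping step. The `asset_type` key and the task fields below are illustrative assumptions, not fields from the paper.

```python
from collections import defaultdict

def build_cohorts(fl_tasks):
    """Group submitted FL tasks into cohorts by a shared asset
    characteristic, so updates are only shared within a cohort
    (after Hiessl's FL-cohort requirement). The "asset_type" key is
    an illustrative stand-in for the similarity criterion."""
    cohorts = defaultdict(list)
    for task in fl_tasks:
        cohorts[task["asset_type"]].append(task["client"])
    return dict(cohorts)

# Illustrative tasks: two pump models share a cohort; the turbine model does not.
cohorts = build_cohorts([
    {"client": "site-A", "asset_type": "pump"},
    {"client": "site-B", "asset_type": "pump"},
    {"client": "site-C", "asset_type": "turbine"},
])
# cohorts == {"pump": ["site-A", "site-B"], "turbine": ["site-C"]}
```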
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA THUY PHAM whose telephone number is (571)272-2605. The examiner can normally be reached Monday - Friday, 9:00 A.M. - 5:00 P.M. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.T.P./Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Jul 15, 2022
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §103, §112
Sep 12, 2025
Interview Requested
Sep 16, 2025
Response Filed
Sep 22, 2025
Applicant Interview (Telephonic)
Sep 22, 2025
Examiner Interview Summary
Dec 09, 2025
Final Rejection — §101, §103, §112
Jan 23, 2026
Interview Requested
Feb 02, 2026
Applicant Interview (Telephonic)
Feb 02, 2026
Response after Non-Final Action
Feb 02, 2026
Examiner Interview Summary
Feb 16, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 30, 2026
Non-Final Rejection — §101, §103, §112 (current)

Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
0%
With Interview (-33.3%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
