Prosecution Insights
Last updated: April 19, 2026
Application No. 18/063,813

TRAINING A FEDERATED GENERATIVE ADVERSARIAL NETWORK

Final Rejection: §103, §112
Filed: Dec 09, 2022
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)

Predicted outcome: 38% grant probability (At Risk) • 3-4 OA rounds • 4y 3m to grant • 84% grant probability with interview

Examiner Intelligence

Career Allow Rate: 38% (grants only 38% of cases; 26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% (strong lift for resolved cases with an interview vs. without)
Typical Timeline: 4y 3m avg prosecution; 34 applications currently pending
Career History: 102 total applications across all art units

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 68 resolved cases

Office Action (§103, §112)
Detailed Action

This Office Action is in response to the remarks entered on 12/15/2025. Claims 21-26 have been added. Claims 2, 4-5, 7, and 15-16 have been cancelled. Claims 1, 3, 6, 8-14, and 17-26 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The 35 U.S.C. 112(b) rejections regarding the previous claims have been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al. ("Federated Learning for COVID-19 Detection With Generative Adversarial Networks in Edge Cloud Computing", June 2022, hereinafter "Nguyen") in view of Chopra et al. (US 20220261697 A1, hereinafter "Chopra") and further in view of Chang et al. (US 20230186098 A1, hereinafter "Chang").
Regarding claim 1, Nguyen teaches:

A computer-implemented method, the method comprising: ([Nguyen, page 10265, left col, lines 36-38] discloses that the simulations were implemented in PyTorch on a desktop server with an Intel Core i7, 128 GB memory, an Nvidia Pascal Titan X, and CUDA 8.0.)

training a federated generative adversarial network (GAN) using private data using an aggregator system having a generator and a discriminator, wherein the aggregator system is in communication with multiple participant systems each having a local ... ([Nguyen, page 10259, right col, A. Network Model, lines 1-9] discloses that the model is a FedGAN model (federated GAN). [Nguyen, page 10260, left col, lines 5-21; Fig. 1] discloses each institution having and training a generator (local feature extractor) and a discriminator (local discriminator). [Nguyen, page 10262, left col, lines 9-18] discloses uploading the trained generator and the trained discriminator to the cloud server (aggregator system) for model aggregation. The local data for each hospital is interpreted as private data.)

generating, by the generator ... ([Nguyen, page 10260, left col, lines 1-33] and [Nguyen, page 10265, left col, B. Performance Evaluations on FedGAN, line 10 - right col, line 5] collectively disclose optimizing a generator by comparing the real data distribution and the synthetic (fake) data distribution and optimizing the discriminator and generator losses. D_n(G_n(z)) and D_n(x) denote the discriminator loss terms: D_n(x) is the probability that D_n classifies x as a real data sample, and D_n(G_n(z)) is D_n's output on the data generated by G_n. Both x and G_n(z) (i.e., fake data) are input to the discriminator.)

receiving, from the local public data of participating systems, and the fake data from the generator; ([Nguyen, page 10260, left col, lines 5-21; Fig. 1] discloses each institution having and training a generator (neural network) and a discriminator (local discriminator) to generate model data for the generator that will be input to the aggregator to generate a discriminator. Even though the feature extractor is not explicitly disclosed in Nguyen, the paper implies receiving, from a neural network (generators and discriminators) at the participant system, a set of input data for the discriminator at the aggregator system. Additionally, the generator learns to generate a fake COVID-19 image data point to train the discriminator. [Nguyen, page 10262, left col, lines 9-18] discloses uploading the trained generator and the trained discriminator (model features) to the cloud server (aggregator system) for model aggregation. Nguyen implies that the discriminator is at the aggregator system and the discriminator parameter θ_n^d is input to the discriminator in the aggregator. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, lines 3-7] discloses that the data may include a public key (public data) and a private key.)

receiving, from the one or more local discriminators of the multiple participant systems, discriminator parameter updates to update the discriminator at the aggregator system, wherein the one or more local discriminators are trained at the participant systems; ([Nguyen, page 10262, left col, lines 9-18; Fig. 1] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

combining the discriminator parameter updates from multiple participant systems; and ([Nguyen, page 10262, left col, lines 9-18] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

broadcasting combined parameter updates to the local discriminators at each of the multiple participant systems. ([Nguyen, page 10262, left col, lines 9-18] After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

However, Nguyen does not specifically disclose: wherein the aggregator system is in communication with multiple participant systems each having a local feature extractor; generating, by the generator of the aggregator system, fake data and passing the fake data to the multiple participant systems for input to one or more local discriminators; receiving, from the local feature extractor at a participant system of the multiple participant systems, a set of features for input to the discriminator at the aggregator system, wherein the features include features extracted from private data that is private to the participant system, and the fake data from the generator of the aggregator system.

Chopra teaches: wherein the aggregator system is in communication with multiple participant systems each having a local feature extractor ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system.)

receiving, from a feature extractor at a participant system of the multiple participant systems, a set of features for input to the ... ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network which can be utilized to perform a classification (discrimination) task. Updating the discriminator at the global system is taught by Nguyen.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of receiving input from multiple systems for input to the neural network at the global system of Chopra to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the efficiency of the system by combining participant models, and running the combined model in the global system as soon as the parameters are combined.
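The combine-and-broadcast mechanism the action repeatedly cites from Nguyen (each institution uploads its trained discriminator parameters; the cloud server averages them and broadcasts the global update back to every institution for the next round) can be reduced to a minimal sketch. This is illustrative only; all function and field names are hypothetical, and Nguyen's actual implementation is a PyTorch model.

```python
# Hypothetical sketch of parameter averaging and broadcast; not Nguyen's code.

def average_parameters(local_params):
    """Combine per-participant parameter vectors by simple averaging."""
    n = len(local_params)
    length = len(local_params[0])
    return [sum(p[i] for p in local_params) / n for i in range(length)]

def aggregation_round(participants):
    """One round: collect local discriminator updates, average, broadcast."""
    uploads = [p["discriminator_params"] for p in participants]
    global_update = average_parameters(uploads)
    for p in participants:  # broadcast the new global update to every institution
        p["discriminator_params"] = list(global_update)
    return global_update
```

With two participants holding parameters [0.0, 2.0] and [2.0, 4.0], one round leaves both holding the average [1.0, 3.0].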
However, Nguyen in view of Chopra does not specifically disclose: generating, by the generator of the aggregator system, fake data and passing the fake data to the multiple participant systems for input to one or more local discriminators; receiving the fake data from the generator at the aggregator system.

Chang teaches: generating, by the generator of the aggregator system, fake data and passing the fake data to the multiple participant systems for input to one or more local discriminators ([Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes); receiving the fake data from the generator at the aggregator system ([Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of providing both real data and fake data provided by the generator to the local discriminator of Chang to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the accuracy of the federated machine learning method by allowing the model to compare real data and fake data.

Regarding claim 3, Nguyen teaches:

The method as claimed in claim 1, further comprising updating the generator at the aggregator system with gradients obtained by feeding features into the local discriminators. ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 13 - page 10262, line 14] discloses utilizing gradients obtained by feeding features into the local discriminator by descending its stochastic gradient to the generator G_n, and updating the generator at the aggregator by combining the parameters based on an averaging approach.)

Regarding claim 24, Nguyen teaches:

The method of claim 1, wherein the generator ... combined as inputs for the discriminator which is a classifier, and wherein an accuracy of the classifier is obtained by training the discriminator with the discriminator loss. ([Nguyen, page 10260, left col, lines 1-33] and [Nguyen, page 10265, left col, B. Performance Evaluations on FedGAN, line 10 - right col, line 5] collectively disclose optimizing a generator by comparing the real data distribution and the synthetic (fake) data distribution and optimizing the discriminator and generator losses. D_n(G_n(z)) and D_n(x) denote the discriminator loss terms: D_n(x) is the probability that D_n classifies x as a real data sample, and D_n(G_n(z)) is D_n's output on the data generated by G_n. Both x and G_n(z) (i.e., fake data) are input to the discriminator.)

Nguyen does not specifically disclose: wherein the generator of the aggregator system optimizes a generator loss function, wherein the real data and the fake data are passed through the local feature extractor at the participant system and the features combined as inputs for the discriminator which is a classifier.

Chopra teaches: wherein ... ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network which can be utilized to perform a classification (discrimination) task.)

However, Nguyen in view of Chopra does not specifically disclose: wherein the generator of the aggregator system optimizes a generator loss function.

Chang teaches: wherein the generator of the aggregator system optimizes a generator loss function ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center.)

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Chopra and further in view of Arnon et al. (US 20230196115 A1, hereinafter "Arnon").

Regarding claim 6, Nguyen teaches:

A computer-implemented method, the method comprising: ([Nguyen, page 10265, left col, lines 36-38] discloses that the simulations were implemented in PyTorch on a desktop server with an Intel Core i7, 128 GB memory, an Nvidia Pascal Titan X, and CUDA 8.0.)

training a federated generative adversarial network (GAN) using ... ([Nguyen, page 10259, right col, A. Network Model, lines 1-9] discloses that the model is a FedGAN model (federated GAN). [Nguyen, page 10260, left col, lines 5-21; Fig. 1] discloses each institution having and training a generator and a discriminator (local discriminator).
[Nguyen, page 10262, left col, lines 9-18] discloses uploading the trained generator and the trained discriminator to the cloud server (aggregator system) for model aggregation.)

training the local ... at the participant system to extract a set of ... ([Nguyen, page 10260, left col, lines 5-21; Fig. 1] discloses each institution having and training a generator (neural network) and a discriminator (local discriminator) to generate model features for the generator that will be input to the aggregator to generate a discriminator. [Nguyen, page 10262, left col, lines 9-18] discloses uploading the trained generator and the trained discriminator parameter θ_n^d to the aggregator system for model aggregation, which implies that the discriminator at the aggregator receives the discriminator parameter. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, lines 3-7] discloses that the data may include a public key (public data) and a private key. Even though the feature extractor is not explicitly disclosed in Nguyen, the paper implies that a neural network (generators and discriminators) at the participant system is trained to generate a set of input data for the discriminator at the aggregator system.)

training the local discriminator to produce discriminator parameter updates to update the discriminator at the aggregator system. ([Nguyen, page 10262, left col, lines 9-18; Fig. 1] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. The trained discriminator parameters θ_n^d are input to the aggregator system for model aggregation. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

However, Nguyen does not specifically disclose: ... input to the discriminator at the aggregator system; wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators.

Chopra teaches: training a feature extractor to extract a set of features for input to the ... ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0080] discloses generating a central weight matrix value in the local satellite computing device (Satellite Site, SS), which implies that the matrices are trained in the local device. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network which can be utilized to perform a classification (discrimination) task.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of receiving input from multiple systems for input to the discriminator (classifier) at the global system of Chopra to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the efficiency of the system by combining participant models, and running the combined model in the global system as soon as the parameters are combined.
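The split arrangement debated above, in which a local feature extractor at the participant computes features from private data and only those features reach a discriminator at the aggregator, might look like this in miniature. The linear map and sigmoid are stand-ins for trained networks; every name and weight here is illustrative, not from any cited reference.

```python
import math

# Hypothetical stand-ins for trained networks; weights are illustrative.

def local_feature_extractor(private_sample, weights):
    """Participant side: map raw private data to a feature vector.
    Only this vector, never the raw sample, is sent to the aggregator."""
    return [w * x for w, x in zip(weights, private_sample)]

def central_discriminator(features, weights, bias):
    """Aggregator side: score received features as real (near 1) or fake (near 0)."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid squashing to a probability
```

The key property the claims rely on is visible in the data flow: the aggregator sees `features`, not `private_sample`.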
However, Nguyen in view of Chopra does not specifically disclose: wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators.

Arnon teaches: wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators ([Arnon, 0023] discloses sharing features across other remote data domains (i.e., other remote nodes) that were extracted from local domain database 212. [Arnon, 0021] discloses that each node includes a learning model store, a node network interface, a GAN trainer, and a generator model store.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of sharing features between participant systems for training the local discriminators of Arnon to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the accuracy of the machine learning system by allowing the participant systems to receive additional data from other participant systems when needed.

Claims 8-14, 17-20, 22-23, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Chopra, in view of Arnon, and further in view of Chang.

Regarding claim 8, Nguyen teaches:

The method of claim 6, including receiving inputs at the local discriminator of real public data of the participant systems and fake data from the generator ... ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 - page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters. The stochastic gradient calculations are interpreted as the features.)

Nguyen does not specifically disclose: receiving fake data from the generator at the aggregator system.

Chang teaches: receiving fake data from the generator at the aggregator system ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center.)

Regarding claim 9, Nguyen teaches:

The method as claimed in claim 6, including sending gradients obtained by feeding the features into the local discriminators to the generator at the aggregator system. ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 13 - page 10262, line 14] discloses utilizing gradients obtained by feeding features into the local discriminator by descending its stochastic gradient to the generator G_n, and updating the generator at the aggregator by combining the parameters based on an averaging approach.)

Regarding claim 10, Nguyen teaches:

The method of claim 6, including receiving fake data generated by the generator ... the fake data from the generator ... ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 - page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, lines 3-7] discloses that the data may include a public key (public data) and a private key.)

However, Nguyen does not specifically disclose: receiving fake data generated by the generator at the aggregator system for input to the local (model), wherein the set of features include fake features extracted from the fake data from the generator at the aggregator system.

Chang teaches: receiving fake data generated by the generator at the aggregator system for input to the local (model) ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center); wherein the set of features include fake features extracted from the fake data from the generator at the aggregator system ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center.)

Regarding claim 11, Nguyen teaches:

The method as claimed in claim 6, including: receiving combined discriminator parameter updates from the aggregator system for updating the local discriminator.
([Nguyen, page 10262, left col, lines 9-18] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

Regarding claim 12, Nguyen teaches:

A system comprising: a processor; and a memory in communication with the processor, the memory containing program instructions that, when executed by the processor, are configured as one or more components to cause the processor to perform a method, the one or more components comprising: ([Nguyen, page 10265, left col, lines 36-38] discloses that the simulations were implemented in PyTorch on a desktop server with an Intel Core i7, 128 GB memory, an Nvidia Pascal Titan X, and CUDA 8.0.)

an aggregator system having a generator and a discriminator with an input collector, wherein the input collector is in communication with multiple participant systems each having a local ..., wherein the local feature extractor extracts a set of features for input to the discriminator at the aggregator system and wherein the local discriminator is trained to produce discriminator parameter updates to update the discriminator at the aggregator system; ([Nguyen, page 10259, right col, A. Network Model, lines 1-9] discloses that the model is a FedGAN model (federated GAN). [Nguyen, page 10260, left col, lines 5-21; Fig. 1] discloses each institution having and training a generator (neural network) and a discriminator (local discriminator). [Nguyen, page 10262, left col, lines 9-18] discloses uploading the trained generator and the trained discriminator to the cloud server (aggregator system) for model aggregation.)

the input collector including a feature component for receiving, from the ..., the set of ...; ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 - page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators (input collection), and uploading the local generators and local discriminator parameters θ_n^d to the aggregator system to be input to the discriminator at the aggregator system. The stochastic gradient calculation process also implies that data is being sent to the aggregator. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, lines 3-7] discloses that the data may include a public key (public data) and a private key.)

the input collector including a discriminator update component for receiving parameter updates from the local discriminators at the participant systems that are trained using local real data and fake data ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 - page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters.)

Nguyen does not specifically disclose: an aggregator system ...; the input collector including a feature component for receiving, from the feature extractors at participant systems, a set of features for input to the discriminator, wherein the features include features extracted from private data that is private to a participant system, wherein the participant system includes a local input collector including a feature sharing component for sharing features between the participant systems for training the local discriminators; receiving parameter updates from the local discriminators at participant systems that are trained using local real data and fake data, with the fake data provided by the generator.

Chopra teaches: an aggregator system having ... local feature extractor; ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system.)

the input collector including a feature component for receiving, from the feature extractors at participant systems, a set of features for input to the ... ([Chopra, Fig. 4, blocks 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network which can be utilized to perform a classification task.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of receiving input from multiple systems for input to the neural network at the global system of Chopra to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the efficiency of the system by combining participant models, and running the combined model in the global system as soon as the parameters are combined.
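The training regime cited throughout, in which local discriminators are trained using real local data and fake generated data, rests on the standard GAN discriminator objective. A minimal sketch follows, using notation assumed to match the action's D_n(x) and D_n(G_n(z)) loss terms; this is the textbook formulation, not a reconstruction of any cited reference's exact code.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss: -log D(x) - log(1 - D(G(z))).
    d_real is the discriminator's output on a real sample x,
    d_fake its output on a generated (fake) sample G(z).
    The loss is low when real samples score near 1 and fakes near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))
```

A discriminator that scores real data at 0.9 and fake data at 0.1 incurs a lower loss than one that outputs 0.5 for both, which is what drives the local update steps the action describes.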
However, Nguyen in view of Chopra does not specifically disclose: wherein the participant system includes a local input collector including a feature sharing component for sharing features between the participant systems for training the local discriminators; receiving parameter updates from the local discriminators at participant systems that are trained using local real data and fake data, with the fake data provided by the generator.

Arnon teaches: wherein the participant system includes a local input collector including a feature sharing component for sharing features between the participant systems for training the local discriminators ([Arnon, 0023] discloses sharing features across other remote data domains (i.e., other remote nodes) that were extracted from local domain database 212. [Arnon, 0021] discloses that each node includes a learning model store, a node network interface, a GAN trainer, and a generator model store.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of sharing features between participant systems for training the local discriminators of Arnon to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the accuracy of the machine learning system by allowing the participant systems to receive additional data from other participant systems when needed.

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: receiving parameter updates from the local discriminators at participant systems that are trained using local real data and fake data, with the fake data provided by the generator.

Chang teaches: receiving parameter updates from the local discriminators at participant systems that are trained using local real data and fake data, with the fake data provided by the generator. ([Chang, 0031] discloses that the discriminators are trained using real data 110 and synthetic data 108. [Chang, 0034] discloses that the synthetic data 128 is received from the generator 122, and the discriminators evaluate the synthetic data 128 and return a gradient 130-1 ... 130-n to adjust the weight of the generator 122. [Chang, 0035] discloses that the synthetic data are the fake data.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of providing both real data and fake data provided by the generator to the local discriminator of Chang to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the accuracy of the federated machine learning method by allowing the model to compare real data and fake data.

Regarding claim 13, Nguyen teaches:

The system of claim 12, including: a generator update component for updating the generator with gradients obtained by feeding features into the local discriminators. ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 13 - page 10262, line 14] discloses utilizing gradients obtained by feeding features into the local discriminator by descending its stochastic gradient to the generator G_n, and updating the generator at the aggregator by combining the parameters based on an averaging approach.)

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: a generator output component for passing the fake data generated by the generator to the participant systems for input to the local discriminators.

Chang teaches: a generator output component for passing the fake data generated by the generator to the participant systems for input to the local discriminators ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center.)

Regarding claim 14, Nguyen teaches:

The system of claim 12, wherein the discriminator update component includes: an update combining component for combining the discriminator parameter updates from multiple participating systems; and ([Nguyen, page 10262, left col, lines 9-18] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training.)

an update broadcasting component for broadcasting combined parameter updates to the local discriminators at the participating systems.
([Nguyen, page 10262, left col, line 9-18] After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training)

Regarding claim 17, Nguyen teaches: The system of claim 12, wherein the local input collector includes a real public data input component for receiving inputs for the local discriminator of real public data of the participant systems and a fake data input component for receiving the fake data from the generator ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 – page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters)

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: a fake data input component for receiving the fake data from the generator at the aggregator system.

Chang teaches: a fake data input component for receiving the fake data from the generator at the aggregator system. ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center)

Regarding claim 18, Nguyen in view of Chopra, in view of Arnon, and further in view of Chang teaches: The system of claim 12, wherein the local discriminator includes a gradient output component for sending gradients obtained by feeding the features into the local discriminators to the generator at the aggregator system. ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (i.e., gradients) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center)

Regarding claim 19, Nguyen teaches: The system of claim 12, wherein the local feature extractor includes receiving the fake data generated by the generator the fake data from the generator ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 – page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters. [Nguyen, page 10263, left col, A.
Working Procedure of Blockchain-Based FedGAN, line 3-7] discloses that the data may include a public key (public data) and a private key)

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: receiving the fake data generated by the generator at the aggregator system; real features extracted from fake features extracted from the fake data from the generator at the aggregator system;

Chang teaches: receiving the fake data generated by the generator at the aggregator system; ([Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center) real features extracted from ([Chang, 0046] discloses that the discriminator receives fake images from the generator, and differentiates local real images and local synthetic (fake) images. In this way, the medical images will be kept private) fake features extracted from the fake data from the generator at the aggregator system ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (interpreted as ‘features’) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center)

Regarding claim 20, Nguyen teaches: The system of claim 12, wherein the local discriminator includes an update component for receiving combining discriminator parameter updates from the aggregator system for updating the local discriminator. ([Nguyen, page 10262, left col, line 9-18] discloses combining discriminator parameter updates from multiple institutions using an averaging approach. After the model aggregation process, the cloud server broadcasts the new global updates for the discriminator and the generator to all institutions for the next round of GAN training)

Regarding claim 22, Nguyen teaches: The method of claim 6, wherein the training of the local ([Nguyen, page 10260, left col, line 5-21; Fig. 1] discloses each institution (i.e., participant systems) having and training a generator (neural network) and a discriminator (i.e., local discriminator) to generate model features for the generator that will be input to the aggregator to generate a discriminator)

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: training of the feature extractor … is performed utilizing an auto-encoder model through an unsupervised learning approach, wherein an output of the auto-encoder model is a fixed-length vector at a bottleneck providing a compressed representation of input data as a feature extraction.

Chang teaches: training of the feature extractor … is performed utilizing an auto-encoder model through an unsupervised learning approach, wherein an output of the auto-encoder model is a fixed-length vector at a bottleneck providing a compressed representation of input data as a feature extraction. ([Chang, 0073] discloses that the generator is an encoder-decoder network (auto-encoder) and all convolutional layers in the encoder-decoder network use 3x3 kernels except the first and last layers that use 7x7 kernels.
The output of the encoder-decoder network is a 7x7 fixed-length vector, and the 3x3 kernel is the bottleneck that compresses the 7x7 input representations)

Regarding claim 23, Nguyen teaches: The method of claim 22, wherein each of the participant systems includes a local trainer and a local input collector, wherein the local input collector feeds data to the local discriminator trained on real public data of the participant systems and fake data from the generator ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 – page 10262, left col, line 18] discloses updating (i.e., a local trainer) local generators and local discriminators using fake data generated by the local generators and real data sampled (i.e., the local input collector) by the local discriminators (i.e., real public data), uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters. The stochastic gradient calculations are interpreted as the features)

However, Nguyen in view of Chopra and further in view of Arnon does not specifically disclose: the local discriminator trained on real public data of the participant systems and fake data from the generator of the aggregator system.

Chang teaches: the local discriminator trained on real public data of the participant systems and fake data from the generator of the aggregator system ([Chang, 0032] discloses the discriminator 104 producing a generator loss 114 (interpreted as ‘features’) and sending the gradients to the generator 102 to adjust the weights for the generator neural network. [Chang, 0041] discloses sending fake images generated using a model (i.e., generator) to at least one of a plurality of discriminator nodes. [Chang, 0045-0046] discloses that the generator is centralized, and the local discriminators are localized for each local medical center)

Regarding claim 25, Nguyen teaches: The system of claim 12, wherein each of the participant systems are comprised of a processor, a memory, ([Nguyen, page 10259, right col, A. Network Model, line 1 – page 10260, left col, line 16] discloses that each EN is a cloud server, local computers, or powerful IoT devices that include one or more processors or memories. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, line 1-15] discloses that the Edge Node performs the training of the GAN model (i.e., a generator and a discriminator) using its own local COVID-19 X-ray image data set (local input collector))

However, Nguyen does not specifically disclose: wherein each of the participant systems are comprised of … a local feature extractor

Chopra teaches: wherein each of the participant systems are comprised of … a local feature extractor ([Chopra, Fig. 4, block 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0080] discloses generating a central weight matrix value in the local satellite computing device (Satellite Site, SS), which implies that the matrices are trained in the local device. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network, which can be utilized to perform a classification (discriminate) task)

Regarding claim 26, Nguyen teaches: The system of claim 25, ([Nguyen, page 10261, right col, B.
Training of FedGAN for COVID-19 Detection, line 13 – page 10262, line 14] discloses utilizing gradients obtained by feeding features into the local discriminator by descending its stochastic gradient to the generator Gn (gradient output component), and updating the generator at the aggregator by combining the parameters based on an averaging approach (parameter output component). [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, line 1-15] discloses that the Edge Node performs the training of the GAN model (i.e., a generator and a discriminator) using its own local COVID-19 X-ray image data set (local input collector))

However, Nguyen does not specifically disclose: wherein the local feature extractor is further comprised of a feature output component; wherein the local input collector is further comprised of a feature sharing component, a real public data input component

Chopra teaches: wherein the local feature extractor is further comprised of a feature output component ([Chopra, Fig. 4, block 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system. [Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor with a feature output component that generates satellite analytics artifacts and sends the output to the central model)

However, Nguyen in view of Chopra does not specifically disclose: wherein the local input collector is further comprised of a feature sharing component

Arnon teaches: wherein the local input collector is further comprised of a feature sharing component ([Arnon, 0023] discloses sharing features across other remote data domains (i.e., other remote nodes) that were extracted from local domain database 212. [Arnon, 0021] discloses that each node includes a learning model store, a node network interface (feature sharing component), a GAN trainer, and a generator model store)

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Chopra, in view of Chang, and further in view of Arnon.

Regarding claim 21, Nguyen teaches: The method as claimed in claim 1, wherein the training of the federated GAN further utilizes the participant system comprised of the multiple participant systems, wherein each participant system includes the local ([Nguyen, page 10259, right col, A. Network Model, line 1-9] discloses that the model is a FedGAN model (federated GAN). [Nguyen, page 10260, left col, line 5-21; Fig. 1] discloses each institution having and training a generator and a discriminator (local discriminator). [Nguyen, page 10262, left col, line 9-18] discloses uploading the trained generator and the trained discriminator to the cloud server (aggregator system) for model aggregation) training the local ([Nguyen, page 10260, left col, line 5-21; Fig. 1] discloses each institution having and training a generator (neural network) and a discriminator (local discriminator) to generate model features for the generator that will be input to the aggregator to generate a discriminator. [Nguyen, page 10262, left col, line 9-18] discloses uploading the trained generator and the trained discriminator parameter θ_n^d to the aggregator system for model aggregation, which implies that the discriminator at the aggregator receives the discriminator parameter. [Nguyen, page 10263, left col, A. Working Procedure of Blockchain-Based FedGAN, line 3-7] discloses that the data may include a public key (public data) and a private key. Even though the feature extractor is not explicitly disclosed in Nguyen, the paper implies that a neural network (generators and discriminators) at the participant system is trained to generate a set of input data for the discriminator at the aggregator system) training the local discriminator to produce discriminator parameter updates to update the discriminator at the aggregator system, ([Nguyen, page 10261, right col, B. Training of FedGAN for COVID-19 Detection, line 11 – page 10262, left col, line 18] discloses updating local generators and local discriminators using fake data generated by the local generators and real data sampled by the local discriminators, uploading the local generators and local discriminators to the cloud system, and combining the local generator parameters and the local discriminator parameters (aggregator system))

However, Nguyen does not specifically disclose: training the local feature extractor at the participant system to extract a set of features for input to the discriminator at the aggregator system, wherein the features include features extracted from private data that is private to the participant system; wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators.

Chopra teaches: training the local feature extractor at the participant system to extract a set of features for input to the discriminator at the aggregator system, wherein the features include features extracted from private data that is private to the participant system; ([Chopra, Fig. 4, block 302 and 304; 0078] discloses receiving satellite analysis artifacts and transmitting the set of satellite analysis artifacts to the central system to update the central machine learning model. The satellite site is the participant system, and the central authority is the aggregator system.
[Chopra, 0005 and 0016] The satellite site systems are interpreted as the feature extractor that generates satellite analytics artifacts. [Chopra, 0080] discloses generating a central weight matrix value in the local satellite computing device (Satellite Site, SS), which implies that the matrices are trained in the local device. [Chopra, 0045, 0083] collectively disclose that the central machine learning model may be a deep neural network, a recurrent neural network, or a convolutional neural network, which can be utilized to perform a classification (discriminate) task)

However, Nguyen in view of Chopra does not specifically disclose: wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators.

Arnon teaches: wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators. ([Arnon, 0023] discloses sharing features across other remote data domains (i.e., other remote nodes) that were extracted from local domain database 212. [Arnon, 0021] discloses that each node includes a learning model store, a node network interface, a GAN trainer, and a generator model store)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use the method of sharing features between participant systems for training the local discriminators of Arnon to implement the federated machine learning method of Nguyen. The suggestion and/or motivation for doing so is to improve the accuracy of the machine learning system by allowing the participant systems to receive additional data from other participant systems when needed.

Response to Arguments

Response to Arguments under 35 U.S.C. 112

35 U.S.C. 112(b) rejections regarding the previous claims have been withdrawn.

Response to Arguments under 35 U.S.C. 103

Arguments: Applicant asserts that (a) Nguyen fails to disclose the fake data, and the utilization of the fake data is central to the training of the federated generative adversarial network (GAN) using private data, and Nguyen fails to disclose the newly added combining and broadcasting limitations in claim 1 [Remarks, page 10]; (b) Nguyen fails to disclose “wherein the training of the local discriminator includes sharing the features between participant systems for training each of the local discriminators” in claim 6 [Remarks, page 11]; and (c) Nguyen fails to disclose the amended claim 12 [Remarks, page 12].

Examiner’s Response: Examiner respectfully disagrees. Regarding (a), the examiner notes that Nguyen at least teaches the utilization of the fake data and “generating, by the generator, fake data and passing the fake data for input to local discriminators.” [Nguyen, page 10260, left col, line 1-33] and [Nguyen, page 10265, left col, B. Performance Evaluations on FedGAN, line 10 – right col, line 5] disclose optimizing a generator by comparing the real data distribution and the synthetic data (fake data) distribution and optimizing the discriminator and generator losses. The terms D_n(G_n(z)) and D_n(x) denote the discriminator loss terms: D_n(x) is the probability that D_n distinguishes x as a real data sample, and D_n(G_n(z)) is D_n's determination on the synthetic data generated by G_n (the generator). These passages clearly indicate that both x (real data) and G_n(z) (i.e., fake data) are input to the discriminator. Second, Nguyen discloses the broadcasting limitations in [Nguyen, page 10262, left col, line 9-18]. Regarding (b) and (c), although Nguyen does not specifically disclose “sharing the features between participant systems for training each of the local discriminators”, “the local feature extractor”, and “a local input collector including a feature sharing component for sharing features between the participant systems”, the examiner introduced Chopra et al.
(US 20220261697 A1) and Arnon et al. (US 20230196115 A1) as suggested in the Office Action. Accordingly, arguments regarding claims 1, 6, and 12 are fully considered but not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Xia et al., “Local and Global Perception Generative Adversarial Network for Facial Expression Synthesis”, March 2022 (this prior art is pertinent as it discloses utilizing two local generators and discriminators to generate input data for a global generator and a global discriminator).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday – Friday, 7:30 AM – 4:30 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127
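For technical context on the scheme these rejections turn on: in the cited FedGAN arrangement, each participant trains a local discriminator on its own real data plus fake data from a generator, and an aggregator combines the resulting parameter updates by averaging and broadcasts them back for the next round. The round structure can be sketched as below, assuming a FedAvg-style average; the names, dimensions, and toy gradient step are illustrative placeholders, not Nguyen's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # toy parameter dimension

def local_discriminator_update(params, real_batch, fake_batch, lr=0.1):
    """One toy local step: nudge discriminator parameters toward separating
    real data from generator-produced fake data. A stand-in for SGD on the
    usual loss log D(x) + log(1 - D(G(z)))."""
    toy_gradient = real_batch.mean(axis=0) - fake_batch.mean(axis=0)
    return params + lr * toy_gradient

def federated_round(global_params, local_datasets, generator):
    """Each participant trains locally on real + fake data; the aggregator
    averages the parameter updates and broadcasts the result back."""
    updates = [
        local_discriminator_update(global_params, real, generator(len(real)))
        for real in local_datasets
    ]
    return np.mean(updates, axis=0)  # combined update, sent to all participants

# Illustrative setup: three participants with private real data,
# one shared generator producing fake samples.
generator = lambda n: rng.normal(0.0, 1.0, size=(n, dim))
local_datasets = [rng.normal(1.0, 1.0, size=(8, dim)) for _ in range(3)]

params = np.zeros(dim)
for _ in range(5):
    params = federated_round(params, local_datasets, generator)
```

Here the averaged `params` play the role of the combined discriminator updates that, per the cited passages of Nguyen, the cloud server broadcasts to all institutions for the next round of GAN training.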

Prosecution Timeline

Dec 09, 2022
Application Filed
Sep 11, 2025
Non-Final Rejection — §103, §112
Nov 18, 2025
Interview Requested
Dec 10, 2025
Examiner Interview Summary
Dec 10, 2025
Applicant Interview (Telephonic)
Dec 15, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103, §112
Mar 06, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant Granted Dec 09, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+46.2%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
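The headline figures above follow from the quoted career data. A minimal sketch, assuming the tool computes the allow rate as grants divided by resolved cases and the with-interview figure as that baseline plus the interview lift (an assumption about the methodology, which the page does not document):

```python
granted, resolved = 26, 68      # "26 granted / 68 resolved"
interview_lift = 0.462          # "+46.2% interview lift"

allow_rate = granted / resolved                # ~0.382 -> the 38% shown
with_interview = allow_rate + interview_lift   # ~0.844 -> the 84% shown

print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")
# prints: 38% baseline, 84% with interview
```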
