DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on January 30, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Election/Restrictions
Claims 1-9 and 27-31 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on November 21, 2025.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 17, 18, 20, and 32 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 17 recites the limitation "the increase" in line 2 of the claim;
claim 18 recites the limitation "the increase" in line 2 of the claim and recites "the master node" in line 3 of the claim;
claim 20 recites the limitation "the increase" in line 2 of the claim; and
claim 32 recites the limitation "the increase" in line 2 of the claim and recites "the master node" in line 3 of the claim.
There is insufficient antecedent basis for these limitations in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-20 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al., U.S. Patent 12,147,879, in view of Zhang et al., "Poisoning Attack in Federated Learning using Generative Adversarial Nets" (cited in the Information Disclosure Statement filed on January 20, 2024).
As per claim 16, Yu et al. teaches a method performed by a network node (col. 11, lines 18-23) for defending against attacks on a global federated learning model, the method comprising:
receiving an updated weight matrix from a client node of the global federated learning model (updates from each group (i.e., client node) are aggregated to generate group update aggregations (i.e., updated weight matrix) to update a federated model (i.e., federated learning model), col. 17, lines 30-37);
passing the updated weight matrix through a weight statistics filter having a variable weight statistics threshold that adapts during training of the global federated learning model (samples are collected from participant updates, or generated update aggregations (i.e., updated weight matrix), to produce reference updates that are compared to determine levels of deviation (i.e., weight statistics filter) from a reference update, wherein a threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is applied, col. 17, lines 30-37 and col. 18, lines 34-40, and wherein the levels of deviation (i.e., weight statistics filter) are used to perform filtering or eliminating operations, col. 17, lines 41-46 and col. 18, lines 40-44); and
identifying the updated weight matrix as a benign update or a malicious update based on a value of the variable weight statistics threshold (malicious participants are identified as outliers based upon update aggregations (i.e., updated weight matrix) and accordingly filtered out, minimizing disruption of the federated learning process and improving the quality of the federated model by updating based upon benign participants, since threshold levels of deviation (i.e., variable weight statistics threshold) exist based upon detected levels of deviation (i.e., weight statistics filter), col. 17, lines 37-46 and col. 18, lines 37-44).
Yu et al. fails to disclose defending against a generative adversarial network (GAN) based attack on a global federated learning model wherein the updated weight matrix is generated by the GAN. Zhang et al. teaches defending against a GAN-based attack on a global federated learning model wherein the updated weight matrix is generated by the GAN (poisoning attacks in federated learning systems (i.e., global federated learning model) are based on generative adversarial nets (GAN), and the GAN is trained using generated samples (i.e., updated weight matrix), see abstract on page 374).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have been motivated to apply means of detecting poisoning attacks on the global federated learning model. Zhang et al. discloses that an attacker can successfully generate samples of other benign participants using a GAN, with the global model achieving 80% accuracy on both the poisoning and main tasks, see abstract on page 374. Zhang et al. additionally discloses that it is impossible to verify the authenticity of certain participant updates, wherein multiple updates generated by multiple participants may be very different from each other, section I, page 374, bottom of column 2. Although the teachings of Yu et al. disclose defending against attacks on a global federated learning model, the teachings of Zhang et al. offer higher accuracy in verifying the authenticity of certain participant updates, wherein the updated weight matrix is generated by the GAN (poisoning attacks in federated learning systems (i.e., global federated learning model) are based on generative adversarial nets (GAN)).
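For context, the deviation-based outlier filtering described in the cited passages (updates compared against a reference update, with those exceeding a threshold level of deviation filtered out) can be illustrated with a minimal sketch. All function and variable names below are hypothetical illustrations and are not taken from Yu et al. or Zhang et al.; the use of a median reference and Euclidean distance is an assumption for the example, not the patented method:

```python
import numpy as np

def filter_updates(updates, threshold):
    """Flag each client weight update as benign or malicious based on its
    deviation (Euclidean distance) from a reference update.

    updates   : list of flattened weight-update vectors (np.ndarray)
    threshold : maximum allowed deviation from the reference update
    """
    # Coordinate-wise median as a robust reference update (assumption).
    reference = np.median(np.stack(updates), axis=0)
    deviations = [np.linalg.norm(u - reference) for u in updates]
    benign = [u for u, d in zip(updates, deviations) if d <= threshold]
    flagged = [u for u, d in zip(updates, deviations) if d > threshold]
    return benign, flagged

# Example: three mutually similar updates and one outlier.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([10.0, -10.0])]
benign, flagged = filter_updates(updates, threshold=2.0)
```

In this sketch the outlier update is flagged because its distance from the median reference far exceeds the threshold, while the three similar updates pass the filter.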
As per claim 17, it is disclosed by Yu et al. wherein the variable weight statistics threshold is set to an initial value (reference updates are compared to determine levels of deviation from a reference update, wherein the threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is the initial value that is first detected, col. 9, lines 25-27, col. 17, lines 30-37 and col. 18, lines 34-40), and wherein the increase to the variable weight statistics threshold occurs according to a scheduling rule (the threshold can be set according to the distribution of distances to center, interpreted as a scheduling rule since the distances dictate increasing/decreasing the threshold value, col. 17, lines 19-29, and threshold level of deviation (i.e., variable weight statistics threshold), col. 17, lines 30-37 and col. 18, lines 34-40).
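The cited notion of setting the threshold according to the distribution of distances to the center (col. 17, lines 19-29) can be sketched as follows. The specific statistic used here (mean distance plus k standard deviations) is an assumption for illustration only and is not a rule taken from Yu et al.:

```python
import numpy as np

def schedule_threshold(updates, k=2.0):
    """Derive a deviation threshold from the distribution of distances
    to the center of the current round's updates.

    Hypothetical rule (assumption): mean distance plus k standard
    deviations, so the threshold adapts as the spread of updates changes.
    """
    center = np.mean(np.stack(updates), axis=0)
    distances = np.array([np.linalg.norm(u - center) for u in updates])
    return distances.mean() + k * distances.std()
```

Under such a rule the threshold value tracks the observed spread of client updates from round to round rather than remaining fixed.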
As per claim 18, it is taught by Yu et al. wherein the variable weight statistics threshold is set to an initial value (reference updates are compared to determine levels of deviation from a reference update, wherein the threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is the initial value that is first detected, col. 9, lines 25-27, col. 17, lines 30-37 and col. 18, lines 34-40), and wherein the increase to the variable weight statistics threshold is based on a learning of the master node that a value of the weight statistics threshold either successfully identified the updated weight as benign or failed to identify the updated weight as malicious (the threshold can be set according to the distribution of distances to center, since the distances dictate increasing/decreasing the threshold value, col. 17, lines 19-29, and threshold level of deviation (i.e., variable weight statistics threshold), col. 17, lines 30-37 and col. 18, lines 34-40, wherein the teachings further disclose flagging a potential malicious participant (interpreted as a master node) within a group that factors into the threshold values, col. 18, lines 34-44).
As per claim 19, Yu et al. discloses a network node for defending against attacks on a global federated learning model, the network node comprising:
at least one processor (col. 11, lines 18-23);
at least one memory connected to the at least one processor and storing program code that is executed by the at least one processor to perform operations (col. 11, lines 18-23) comprising:
receive an updated weight matrix from a client node of the global federated learning model (updates from each group (i.e., client node) are aggregated to generate group update aggregations (i.e., updated weight matrix) to update a federated model (i.e., federated learning model), col. 17, lines 30-37), the updated weight matrix generated by the GAN;
pass the updated weight matrix through a weight statistics filter having a variable weight statistics threshold that adapts during training of the global federated learning model (samples are collected from participant updates, or generated update aggregations (i.e., updated weight matrix), to produce reference updates that are compared to determine levels of deviation (i.e., weight statistics filter) from a reference update, wherein a threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is applied, col. 17, lines 30-37 and col. 18, lines 34-40, and wherein the levels of deviation (i.e., weight statistics filter) are used to perform filtering or eliminating operations, col. 17, lines 41-46 and col. 18, lines 40-44); and
identify the updated weight matrix as a benign update or a malicious update based on a value of the variable weight statistics threshold (malicious participants are identified as outliers based upon update aggregations (i.e., updated weight matrix) and accordingly filtered out, minimizing disruption of the federated learning process and improving the quality of the federated model by updating based upon benign participants, since threshold levels of deviation (i.e., variable weight statistics threshold) exist based upon detected levels of deviation (i.e., weight statistics filter), col. 17, lines 37-46 and col. 18, lines 37-44).
Yu et al. fails to disclose defending against a generative adversarial network (GAN) based attack on a global federated learning model wherein the updated weight matrix is generated by the GAN. Zhang et al. teaches defending against a GAN-based attack on a global federated learning model wherein the updated weight matrix is generated by the GAN (poisoning attacks in federated learning systems (i.e., global federated learning model) are based on generative adversarial nets (GAN), and the GAN is trained using generated samples (i.e., updated weight matrix), see abstract on page 374).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have been motivated to apply means of detecting poisoning attacks on the global federated learning model. Zhang et al. discloses that an attacker can successfully generate samples of other benign participants using a GAN, with the global model achieving 80% accuracy on both the poisoning and main tasks, see abstract on page 374. Zhang et al. additionally discloses that it is impossible to verify the authenticity of certain participant updates, wherein multiple updates generated by multiple participants may be very different from each other, section I, page 374, bottom of column 2. Although the teachings of Yu et al. disclose defending against attacks on a global federated learning model, the teachings of Zhang et al. offer higher accuracy in verifying the authenticity of certain participant updates, wherein the updated weight matrix is generated by the GAN (poisoning attacks in federated learning systems (i.e., global federated learning model) are based on generative adversarial nets (GAN)).
As per claim 20, it is taught by Yu et al. wherein the variable weight statistics threshold is set to an initial value (reference updates are compared to determine levels of deviation from a reference update, wherein the threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is the initial value that is first detected, col. 9, lines 25-27, col. 17, lines 30-37 and col. 18, lines 34-40), and wherein the increase to the variable weight statistics threshold occurs according to a scheduling rule (the threshold can be set according to the distribution of distances to center, interpreted as a scheduling rule since the distances dictate increasing/decreasing the threshold value, col. 17, lines 19-29, and threshold level of deviation (i.e., variable weight statistics threshold), col. 17, lines 30-37 and col. 18, lines 34-40).
As per claim 32, it is disclosed by Yu et al. wherein the variable weight statistics threshold is set to an initial value (reference updates are compared to determine levels of deviation from a reference update, wherein the threshold level of deviation (i.e., variable weight statistics threshold) from the reference update is the initial value that is first detected, col. 9, lines 25-27, col. 17, lines 30-37 and col. 18, lines 34-40), and wherein the increase to the variable weight statistics threshold is based on a learning of the master node that a value of the weight statistics threshold either successfully identified the updated weight as benign or failed to identify the updated weight as malicious (the threshold can be set according to the distribution of distances to center, since the distances dictate increasing/decreasing the threshold value, col. 17, lines 19-29, and threshold level of deviation (i.e., variable weight statistics threshold), col. 17, lines 30-37 and col. 18, lines 34-40, wherein the teachings further disclose flagging a potential malicious participant (interpreted as a master node) within a group that factors into the threshold values, col. 18, lines 34-44).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Banipal, US 2022/0414174, is relied upon for disclosing feedback loops enabled by federated learning and the use of GAN methods, see paragraph 0055.
Demir et al., US 12,248,556, is relied upon for disclosing that a GAN filter may be an authenticator-integrated GAN including a generator neural network, a discriminator neural network, an authenticator neural network, and a datastore. Additional filters may be applied to the generated content from the GAN filter, resulting in a final output from the flow. The authenticator code is fed as input to the authenticator. The authenticator code may be hashed and stored in content metadata maintained in a change history datastore. The filtered content that is output by the GAN filter may also contain an embedded representation of the authentication code. At the end of the flow, the synthetic content's embedded authentication code can be fed to the authenticator, or correlated with the metadata in the change history datastore, thereby creating a verifiable mechanism regarding the use of generative AI in the content, see column 20, lines 23-45.
Light, US 2025/0335876, is relied upon for disclosing the use of filters to scan input images and to highlight certain patterns. Generative adversarial networks include two neural networks, a generator and a discriminator. The generator attempts to create realistic content that can fool the discriminator, while the discriminator attempts to distinguish between real and fake content, see paragraph 0266.
Naili et al., US 2025/0356170, is relied upon for disclosing a generative adversarial network (GAN) based system for outputting estimated adversarial data (EAD) of an attack on an artificial intelligence (AI) model. The method includes classifying a data point from input data as (i) a real data point or (ii) a manipulated data point. The method further includes, when the classification is a manipulated data point, outputting the estimated adversarial data including a difference between the manipulated data point and the data point from the input data, see abstract.
Liang, CN 116232656 A, is relied upon for disclosing a vehicle network intrusion detection model training method based on a generative adversarial network. A training data set is continuously input to a privacy protection module, which performs local differential privacy gradient calculation on the training data set to obtain a privacy calculation result. The privacy calculation result is input into the discriminator D and the classifier C: the discriminator D performs authenticity discrimination based on the calculation result to obtain an authenticity judgment result, and the classifier C performs category prediction on the training data set to obtain a category prediction result. Based on the authenticity judgment result, a minimum loss function is calculated to obtain a loss value, and back propagation is performed according to the loss value to update the parameters of the generator G; the parameters of the discriminator D are updated according to the authenticity judgment result, and the classifier C is updated according to the category prediction result. The method then returns to generating an analogue data set with the generator G, and the step of taking the analogue data set and the real traffic data set of the vehicle network as the training data set is continuously executed until a preset ending condition is reached, with the obtained model taken as the vehicle network intrusion detection model based on the generative adversarial network. This avoids the internal difference and imbalance of the data causing the IDS to lack characteristic training of the abnormal type, improving detection accuracy.
Niu et al., WO 2022/192568 A1, is relied upon for disclosing that performing the minimax optimization of the quantum generative adversarial network loss function comprises: fixing generator network parameters to values determined at a previous iteration and maximizing the quantum generative adversarial network loss function with respect to discriminator network parameters to determine updated values of the discriminator network parameters for the iteration; and fixing the discriminator network parameters to the updated values of the discriminator network parameters for the iteration and minimizing the quantum generative adversarial network loss function with respect to generator network parameters to determine updated values of the generator network parameters for the iteration, see paragraph 0014.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER REVAK whose telephone number is (571)272-3794. The examiner can normally be reached 5:30am - 3:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Thiaw can be reached at 571-270-1138. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER A REVAK/Primary Examiner, Art Unit 2407