DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 02/27/2024 has been reviewed and the listed references have been considered.
Drawings
The 9-page drawings have been considered and placed on record in the file.
Status of Claims
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 10-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Marathe et al. (US 2023/0047092 A1) in view of Xue et al. ("An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning" - Published 2022).
Regarding claim 11, Marathe teaches “An apparatus (Marathe paragraph [0040] "federated machine learning system that enables multiple users to cooperatively train a Machine Learning (ML) model without sharing private training data"), comprising:
one or more network interfaces (Marathe Figure 5 and paragraph [0066] "Various embodiments may include fewer or additional components not illustrated in FIG. 5 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, a network interface such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.)");
a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor (Marathe Figure 5 and paragraph [0067] "The one or more processors 1210, the storage device(s) 1270, and the memory subsystem 1220 may be coupled to the I/O interface 1230. The memory subsystem 1220 may contain application data 1224 and program code 1223"), the process when executed configured to:
[Marathe Figure 5]
receive, a global model from an aggregation node in a federated learning system (Marathe Figure 1 and paragraph [0053] "The process begins at step 300 where a current version of a machine learning model may be distributed from an aggregation server to a sampled portion of a plurality of clients, such as the model 112 shown in FIG. 1");
[Marathe Figure 1]
provide, via a network, the local model to the aggregation node for aggregation with other local models trained in the federated learning system (Marathe paragraph [0055] "client may then proceed to step 360 where the aggregation server may aggregate the sets of model parameter updates from the respective clients, such as the aggregated model parameter updates 114 shown in FIG. 1, and apply the aggregated parameter updates to the machine learning model to generate a new version of the machine learning model").”
However, Marathe does not teach “apply noise to the global model, to form a noise-augmented model; perform local training using the noise-augmented model and a local training dataset, to form a local model”.
Xue teaches “apply noise to the global model, to form a noise-augmented model (Xue Figure 1 and page 5 left hand column paragraph 1 "With the random selected noises, the global parameters ŵ = {ŵ^l}_{l=1}^L are perturbed");
perform local training using the noise-augmented model and a local training dataset, to form a local model (Xue Figure 1 and page 5 left hand column paragraph 3 "After receiving the perturbed parameters ŵ = {ŵ^l}_{l=1}^L, each client c_k conducts local model training with local training data D_k")”.
[Xue Figure 1]
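Examiner's note (illustration only): the following minimal Python sketch shows the client-side flow mapped to claim 11 above (receive the global model, apply noise to form a noise-augmented model, train locally, return the local model for aggregation). It is not code from Marathe or Xue; the Gaussian noise model, the sigma value, and all identifiers (apply_noise, client_round, local_train) are hypothetical.

import numpy as np

def apply_noise(global_params, sigma=0.01, rng=None):
    # Form the noise-augmented model: w_hat = w + N(0, sigma^2), per parameter.
    rng = rng or np.random.default_rng(0)
    return {name: w + rng.normal(0.0, sigma, size=w.shape)
            for name, w in global_params.items()}

def client_round(global_params, local_dataset, local_train):
    # 1. Receive the global model from the aggregation node (passed in here).
    # 2. Apply noise to the global model to form the noise-augmented model.
    noisy_params = apply_noise(global_params)
    # 3. Perform local training using the noise-augmented model and the
    #    local training dataset, forming the local model.
    local_params = local_train(noisy_params, local_dataset)
    # 4. Provide the local model to the aggregation node for aggregation
    #    with other local models (here, by returning it).
    return local_params

A trainer node would invoke client_round once per federation round with the parameters received over the network.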
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the federated learning system of Marathe, in which a client receives the global model, to apply noise to the global model at the client before training the local model, as taught by Xue.
The suggestion/motivation for doing so would have been “Unfortunately, the defense ability and learning accuracy of these schemes are still unsatisfactory, especially for medical or financial institutions that require both high accuracy and strong privacy guarantee. The main reason is that the perturbed noises added in gradients or model parameters cannot be eliminated, which will inevitably reduce the learning accuracy compared with the non-private training. Meanwhile, the defence ability is not strong enough due to the utility consideration in practice (i.e., It should avoid reducing the learning accuracy too much to make the trained model unusable). Therefore, designing an efficient privacy-preserving scheme applied to FL, which ensures robustness to inference attacks while maintaining a high learning accuracy remains an open problem [13, 16]” as noted by the Xue disclosure in page 3 left hand column paragraph 3. Marathe also suggests that “In local enforcement of DP, from the federation server's perspective, each user may enforce DP independently on its respective parameter updates. Item level local DP ensures that the contribution of each data item is hidden from the federation server” in paragraph 39.
Therefore, it would have been obvious to combine the disclosure of Marathe with
the Xue disclosure to obtain the invention as specified in claim 11, as there is a
reasonable expectation of success and/or because doing so merely combines prior art
elements according to known methods to yield predictable results.
Claim 1 recites a method with steps corresponding to the apparatus elements recited in claim 11. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 11. Additionally, the rationale and motivation to combine the Marathe and Xue references, presented in the rejection of claim 11, apply to this claim.
Claim 20 recites a computer readable medium including computer executable instructions corresponding to the elements of the apparatus recited in claim 11. Therefore, the recited instructions of the computer readable medium of claim 20 are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 11. Additionally, the rationale and motivation to combine Marathe and Xue, presented in the rejection of claim 11, apply to this claim. The combination of Marathe and Xue teaches “A tangible, non-transitory, computer-readable medium storing program instructions that cause a device in a federated learning system to execute a process (for example Marathe paragraph [0064] "a non-transitory, computer-readable storage medium having stored thereon instructions which may be used to program a computer system 1200 (or other electronic devices) to perform a process according to various embodiments")”.
Regarding claim 2 (similarly claim 12), the combination of Marathe and Xue teaches “The method as in claim 1, wherein the local training dataset comprises images (Xue page 7 right hand column paragraph 1 "Datasets and Metrics. Experiments are conducted on two privacy sensitive datasets and one image dataset") or video (Marathe paragraph [0043] "Individual ones of the federation users 120 may independently alter, by clipping and apply noise, to their local model parameter updates to generate modified model parameter updates 128, where the altering provides or ensures privacy of their local datasets 124").”
The proposed combination, as well as the motivation for combining the Marathe and Xue references presented in the rejection of claim 11, applies to claim 2. Finally, the method recited in claim 2 is met by Marathe and Xue.
Regarding claim 3 (similarly claim 13), the combination of Marathe and Xue teaches “The method as in claim 1, wherein the aggregation node aggregates the local model with the other local models to update the global model (Marathe paragraph [0055] "client may then proceed to step 360 where the aggregation server may aggregate the sets of model parameter updates from the respective clients, such as the aggregated model parameter updates 114 shown in FIG. 1, and apply the aggregated parameter updates to the machine learning model to generate a new version of the machine learning model").”
The proposed combination, as well as the motivation for combining the Marathe and Xue references presented in the rejection of claim 11, applies to claim 3. Finally, the method recited in claim 3 is met by Marathe and Xue.
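Examiner's note (illustration only): a minimal Python sketch of the server-side aggregation quoted above. An unweighted, FedAvg-style average over whole local models is assumed for simplicity; Marathe describes aggregating parameter updates and applying them to the model, which is analogous. All identifiers are hypothetical.

import numpy as np

def aggregate_local_models(local_models):
    # Average each named parameter across all clients' local models to
    # produce the updated global model.
    return {name: np.mean([m[name] for m in local_models], axis=0)
            for name in local_models[0]}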
Regarding claim 10, the combination of Marathe and Xue teaches “The method as in claim 1, wherein the global model is configured to classify sensor data (Xue Table 2 and page 7 right hand column paragraph 1 "Lesion Disease Classification (LDC) [27] [5] provides 8k training and 2k test skin images for the classification of lesion disease").”
[Xue Table 2]
The proposed combination, as well as the motivation for combining the Marathe and Xue references presented in the rejection of claim 11, applies to claim 10. Finally, the method recited in claim 10 is met by Marathe and Xue.
Claims 4-8, 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Marathe and Xue in view of Fu et al. ("Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise" - Published 2022).
Regarding claim 4 (similarly claim 14), the combination of Marathe and Xue teaches the method of claim 1. However, the combination of Marathe and Xue does not teach “wherein the device and one or more other trainer nodes in the federated learning system apply different levels of noise to the global model”.
Fu teaches “wherein the device and one or more other trainer nodes in the federated learning system apply different levels of noise to the global model (Fu page 1 right hand column paragraph 2 and page 2 left hand column paragraph 1 and 2 "The intuition is that at the beginning of training, the model is far from optimization, and the gradient magnitudes are normally large, so greater noise is allowed. As the training proceeds, the model is approaching optimization and the gradient magnitudes are converging, smaller noise is required […] We perform adaptive gradient clipping for different clients and different rounds, and decrease the noise scale adaptively as the training proceeds").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the federated learning system of Marathe and Xue, which adds noise for local data privacy, to apply different levels of noise to the global model, as taught by Fu.
The suggestion/motivation for doing so would have been "we can find that if we use a large constant noise scale with σ = 6, it can make the model perform well in the early stage of training, and when it reaches about 93% accuracy (approaching convergence), it cannot further improve the accuracy or even the performance decreases in the subsequent training. In contrast, a small noise scale with σ = 3 or σ = 4 wastes more privacy budget in the early stage, but has the ability to approach convergence in the later stage of training. Our adaptive noise scale reduction method enables to save privacy budget to obtain higher model accuracy performance in the early stage, and further improve the model accuracy as the noise scale reduce in the later training stage" as noted by the Fu disclosure at page 6 through page 7, left hand column, paragraph 1.
Therefore, it would have been obvious to combine the disclosure of Marathe and Xue with the Fu disclosure to obtain the invention as specified in claim 4, as there is a
reasonable expectation of success and/or because doing so merely combines prior art
elements according to known methods to yield predictable results.
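Examiner's note (illustration only): a minimal Python sketch of per-client, per-round noise levels in the spirit of the Fu passage quoted above. The geometric decay echoes Fu's reported noise reduction factor B = 0.9998; the clip-norm handling, the initial sigma, and all identifiers are hypothetical.

import numpy as np

def noise_scale(round_t, sigma0=4.0, B=0.9998):
    # Noise scale shrinks each round as training approaches convergence.
    return sigma0 * (B ** round_t)

def privatize_update(update, clip_norm, round_t, rng=None):
    # Clip this client's update to its own clip_norm, then add noise whose
    # scale depends on the round, so different clients and different rounds
    # apply different noise levels.
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_scale(round_t) * clip_norm,
                       size=update.shape)
    return clipped + noise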
Regarding claim 5 (similarly claim 15), the combination of Marathe, Xue, and Fu teaches “The method as in claim 1, wherein the device applies noise to the global model according to a noise profile specified (Fu page 1 right hand column paragraph 2 and page 2 left hand column paragraph 1 and 2 "The intuition is that at the beginning of training, the model is far from optimization, and the gradient magnitudes are normally large, so greater noise is allowed. As the training proceeds, the model is approaching optimization and the gradient magnitudes are converging, smaller noise is required […] We perform adaptive gradient clipping for different clients and different rounds, and decrease the noise scale adaptively as the training proceeds") via a user interface (Marathe paragraph [0066] "Various embodiments may include fewer or additional components not illustrated in FIG. 5 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, a network interface such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.)").”
The proposed combination, as well as the motivation for combining the Marathe, Xue, and Fu references presented in the rejection of claim 4, applies to claim 5. Finally, the method recited in claim 5 is met by Marathe, Xue, and Fu.
Regarding claim 6 (similarly claim 16), the combination of Marathe, Xue, and Fu teaches “The method as in claim 5, wherein the noise profile comprises a decreasing step function across a plurality of training rounds (Fu page 4 left hand column paragraph 4 "we initialize C_0^k by training on random noise for one round and extracting the mean ℓ2 norm").”
The proposed combination, as well as the motivation for combining the Marathe, Xue, and Fu references presented in the rejection of claim 4, applies to claim 6. Finally, the method recited in claim 6 is met by Marathe, Xue, and Fu.
Regarding claim 7 (similarly claim 17), the combination of Marathe, Xue, and Fu teaches “The method as in claim 5, wherein the noise profile comprises a linearly decaying noise function across a plurality of training rounds (Fu page 7 right hand column paragraph 1 "We set the initial noise sacle σ = 4 and noise reduction factor, B=0.9998. As in Fig. 6, the noise scale decreases from 4 to 2.11 when the privacy budget ε=2.0. So we compare adaptive noise scale reduction method and constant noise scale method with σ = 2, 3, 4 in FashionMnist dataset").”
The proposed combination, as well as the motivation for combining the Marathe, Xue, and Fu references presented in the rejection of claim 4, applies to claim 7. Finally, the method recited in claim 7 is met by Marathe, Xue, and Fu.
Regarding claim 8 (similarly claim 18), the combination of Marathe, Xue, and Fu teaches “The method as in claim 5, wherein the noise profile comprises an exponentially decaying noise function across a plurality of training rounds (Fu page 1 right hand column paragraph 2 and page 2 left hand column paragraph 1 and 2 "The intuition is that at the beginning of training, the model is far from optimization, and the gradient magnitudes are normally large, so greater noise is allowed. As the training proceeds, the model is approaching optimization and the gradient magnitudes are converging, smaller noise is required […] We perform adaptive gradient clipping for different clients and different rounds, and decrease the noise scale adaptively as the training proceeds").”
The proposed combination, as well as the motivation for combining the Marathe, Xue, and Fu references presented in the rejection of claim 4, applies to claim 8. Finally, the method recited in claim 8 is met by Marathe, Xue, and Fu.
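Examiner's note (illustration only): the three noise profiles recited in claims 6-8 (decreasing step function, linearly decaying function, exponentially decaying function), expressed as minimal Python schedules over training rounds. All constants are hypothetical and are not taken from Fu.

def step_profile(t, levels=(4.0, 2.0, 1.0), boundaries=(30, 60)):
    # Decreasing step function: constant within each band of rounds.
    for boundary, level in zip(boundaries, levels):
        if t < boundary:
            return level
    return levels[-1]

def linear_profile(t, sigma0=4.0, slope=0.03, floor=0.5):
    # Linearly decaying noise, clamped at a floor.
    return max(floor, sigma0 - slope * t)

def exponential_profile(t, sigma0=4.0, B=0.9998):
    # Exponentially (geometrically) decaying noise per round.
    return sigma0 * (B ** t)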
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Marathe and Xue in view of Chen et al. ("Federated Learning Over Multihop Wireless Networks With In-Network Aggregation" - Published 2022).
Regarding claim 9 (similarly claim 19), the combination of Marathe and Xue teaches the method of claim 1. However, the combination of Marathe and Xue does not teach “wherein the aggregation node is an intermediate aggregation node in the federated learning system”.
Chen teaches “wherein the aggregation node is an intermediate aggregation node in the federated learning system (Chen page 3 left hand column paragraph 5 and right hand column paragraph 1 "Each router aggregates the received models, from clients and other routers, into one model by summing up the model parameters, then forwards to the next hop. Finally, the central server aggregates the models from routers, and scales the aggregated model").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the federated learning system of Marathe and Xue, which adds noise for local data privacy, to include an intermediate aggregation server as taught by Chen.
The suggestion/motivation for doing so would have been "key observation about FL is that, a central server is only interested in the aggregated model, and hence it may be unnecessary to transmit every local model to the central site if they can be aggregated beforehand. By leveraging in-network edge computing resource, a multihop network can perform model aggregation at each intermediate router before sending data to the next hop. This process, called "in-network aggregation" in this paper, can significantly reduce the outgoing data traffic from routers, and hence enable more efficient model aggregation over multi-hop networks under limited communication resources" as noted by the Chen disclosure in page 1 right hand column paragraph 3.
Therefore, it would have been obvious to combine the disclosure of Marathe and Xue with the Chen disclosure to obtain the invention as specified in claim 9, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
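Examiner's note (illustration only): a minimal Python sketch of the in-network aggregation described in the Chen passage quoted above: each intermediate router sums the models it receives from clients and other routers and forwards a single model, and the central server scales the aggregated sum. The two-level topology and all identifiers are hypothetical, not Chen's code.

import numpy as np

def router_aggregate(models):
    # Sum parameter-wise so only one model is forwarded to the next hop.
    return {name: np.sum([m[name] for m in models], axis=0)
            for name in models[0]}

def central_server(router_models, num_clients):
    # Aggregate the routers' sums, then scale by the total client count to
    # recover the average model.
    total = router_aggregate(router_models)
    return {name: w / num_clients for name, w in total.items()}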
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASPREET KAUR/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662