DETAILED ACTION
This action is written in response to the application filed 3/6/23. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Subject Matter Eligibility
Independent claims 1 and 8 each recite a “learning system” comprising various “devices” and “servers.” However, none of these components is defined in Applicant’s written description. The broadest reasonable interpretation of these components (and of the “learning system” as a whole) encompasses software per se, which is not a process, machine, manufacture, or composition of matter and is therefore nonstatutory subject matter. Accordingly, claims 1 and 8 are rejected under 35 U.S.C. §101. Dependent claims 2-7 and 9-12 are rejected for the same reason.
Claims 13-15 recite a method and are not rejected under §101.
Allowable Claims and Allowable Subject Matter
Claims 13-15 are allowed. Claims 1-12 are allowable over the prior art but remain rejected under §101. The closest cited references, each of which discloses various aspects of the claimed invention, are discussed below:
Papadaki discloses a traditional federated learning system featuring one central server for training (and updating) a global model, as well as a plurality of client devices which manage (and update) local models that are subsequently used to update the global model. See excerpts below. However, it does not disclose classifying an individual model at a central/global server (e.g., at a “training server”). (Afroditi Papadaki, et al., "Federating for Learning Group Fair Models", 35th Conference on Neural Information Processing Systems Workshop, 2021. <https://arxiv.org/abs/2110.01999>. Cited by Applicant on IDS dated 3/6/23.)
[Image: media_image1.png, 396 × 382, greyscale]
P. 3, fig. 1 (excerpt).
See also p. 2, “The clients do not share data with one another or with the server; instead the clients only share focused updates with the server, the server then updates a global model, and distributes the updated model to the clients, with the process carried out over multiple rounds or iterations.”
Sattler discloses a federated learning system which classifies the individual client models as either benign or adversarial. However, it does not disclose classifying a global model received from client devices. (Sattler, Felix, Klaus-Robert Müller, Thomas Wiegand, and Wojciech Samek. "On the byzantine robustness of clustered federated learning." In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8861-8865. IEEE, 2020.)
P. 8863, “However, as we assume in the byzantine setting that the majority of clients belongs to one single benign cluster and all other clients are considered adversarial, we can of course save computation effort by excluding all clients from training, which do not belong to the largest cluster (10).”
Ye presents a survey of techniques to address heterogeneity in federated learning. Although Ye is not prior art under §102, it presents the state of the art around the time of filing. Topics discussed include different types of data heterogeneity (label skew, feature skew, quality skew, and quantity skew; see p. 8) as well as device heterogeneity (see p. 10). (Ye, Mang, Xiuwen Fang, Bo Du, Pong C. Yuen, and Dacheng Tao. "Heterogeneous federated learning: State-of-the-art and research challenges." ACM Computing Surveys 56, no. 3 (2023): 1-44.)
However, none of the prior art references of record, taken alone or in combination, discloses or suggests the combination of features recited in the independent claims, including in particular (for claim 1):
receive the common model from the training server, update the common model and the individual model on the basis of the individual data, and transmit the updated common model and individual model to the training server, and
the training server classifies the common model and the individual model transmitted from the plurality of client devices on the basis of the individual model transmitted from the training data management server, and
updates the common model and the individual model in accordance with a classification result.
Independent claims 8 and 13 are allowable for the same reason as claim 1.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Gonzales whose telephone number is (571) 270-3837. The examiner can normally be reached on Monday-Friday 7 a.m. to 4 p.m. MT. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092.
Information regarding the status of an application may be obtained from the USPTO Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
/Vincent Gonzales/Primary Examiner, Art Unit 2124