DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action has been issued in response to Applicant's Communication in application S/N 18/266,569 filed on June 9, 2023. Claims 1-14 and 18-23 are currently pending in the application.
Remarks
This communication is responsive to the claim amendments filed on 06/09/2023.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2020-0173557, filed on 12/11/2020; the present application is the national stage under 35 U.S.C. 371 of PCT/KR2021/018356, filed on 12/06/2021.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over XU et al. (US 2021/0143987) in view of Balakrishnan (US 2023/0177349).
As per Claim 1, XU teaches a method of operating a terminal in a wireless communication system, the method comprising:
receiving configuration information (see para. 53 and 67-69, wherein configuration information is provided by the server; as taught by XU);
receiving a request message (see para. 33 and 56, wherein the request is received by the local models; as taught by XU);
transmitting a first response message based on the received request message (see para. 57, wherein a response is received from the local model participants based on the request; as taught by XU);
receiving information based on the first response message (see para. 35-37 and 60-65, wherein the Aggregator generates an aggregated/global model from all participants' encrypted responses and "can then transmit the Aggregate Model … to the participant(s)"; as taught by XU);
and performing federated learning based on the received resource allocation-related information and a federated learning-related group which is determined based on the second response message, wherein the configuration information includes federated learning-related information (see Fig. 3, describing the workflow of federated learning, and para. 35-41 and 50-62: the aggregation vector generally includes a set of values, where each value corresponds to a particular Participant 120A-N. Additionally, each value can be a weight (e.g., from zero to one) indicating how heavily to weigh the parameters provided by a particular Participant 120A-N when generating the overall aggregate parameters. For example, the aggregation vector may indicate to give each Participant 120A-N equal weight (e.g., with an equal value for each participant), or to weight some Participants 120A-N more highly than others. In one embodiment, for each Participant 120A-N that did not return model parameters, the Aggregator 110 assigns a zero to the corresponding value in the aggregation vector. In another embodiment, the Aggregator 110 creates the aggregation vector with a size corresponding to the number of participants that returned model parameters and sets each of the vector's values to a non-zero weight. The Aggregator 110 then transmits the aggregation vector to the Key Server 105 at block 350; as taught by XU; an illustrative sketch of this aggregation-vector mechanism follows this claim mapping);
wherein the request message is for requesting a local model weight which is learned based on the configuration information (see para. 33 and 56, wherein the request is for the internal weights of the local models; as taught by XU);
wherein the received information is associated with a total local model including local model information of other terminals participating in the federated learning (see para. 35-37 and 60-65, wherein the Aggregator generates an aggregated/global model from all participants' encrypted responses and "can then transmit the Aggregate Model … to the participant(s)"; the "Aggregate/Global Model" is "information associated with a total local model including local model information of other terminals," i.e., a composite of the local models; as taught by XU).
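By way of illustration only, the aggregation-vector mechanism paraphrased above may be sketched as follows. This is a minimal, hypothetical sketch prepared for clarity of the record; the function names, the equal-weighting choice, and the data structures are the editor's assumptions and are not code from XU's disclosure.

```python
# Hypothetical sketch of the aggregation-vector mechanism (cf. XU para. 35-41, 50-62).
# All names and the equal-weighting policy below are illustrative assumptions.

def build_aggregation_vector(returned_params):
    """One weight per participant; participants that did not return
    model parameters are assigned a weight of zero."""
    responders = [pid for pid, params in returned_params.items() if params is not None]
    vector = {}
    for pid in returned_params:
        # Equal non-zero weight for responders, zero otherwise.
        vector[pid] = 1.0 / len(responders) if pid in responders else 0.0
    return vector

def aggregate(returned_params, vector):
    """Form the aggregate (global) model as the weighted combination
    of the participants' returned parameters."""
    dim = len(next(p for p in returned_params.values() if p is not None))
    global_params = [0.0] * dim
    for pid, params in returned_params.items():
        if params is not None:
            global_params = [g + vector[pid] * x for g, x in zip(global_params, params)]
    return global_params

# Example: participant "C" did not return parameters this round.
params = {"A": [1.0, 2.0], "B": [3.0, 4.0], "C": None}
vec = build_aggregation_vector(params)   # {"A": 0.5, "B": 0.5, "C": 0.0}
print(aggregate(params, vec))            # [2.0, 3.0]
```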
XU fails to teach transmitting a second response message based on the received information, and receiving resource allocation-related information based on the second response message.
On the other hand, Balakrishnan teaches transmitting a second response message based on the received information, and receiving resource allocation-related information based on the second response message (see para. 244-246: the server issues a METADATA request, clients reply with METADATA responses (state), and the server then samples actions (allocations) from the policy; see also para. 215, describing resource allocation; as taught by Balakrishnan). An illustrative sketch of this flow follows the motivation statement below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention taught by XU by including the teachings of Balakrishnan relating to resource allocation, because both are directed to the art of federated learning, and the resource allocation improves performance by taking into account system capabilities (as taught by Balakrishnan).
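By way of illustration only, the METADATA request/response and policy-based allocation flow paraphrased from Balakrishnan (para. 244-246) may be sketched as follows. The message names, state fields, and the proportional policy are hypothetical assumptions, not Balakrishnan's actual implementation.

```python
# Hypothetical sketch of the server-driven resource-allocation flow.
# State fields and the scoring policy are illustrative assumptions.
import random

def metadata_request(clients):
    """Server issues a METADATA request; each client replies with its
    state (here, randomly generated capability figures)."""
    return {cid: {"compute": random.uniform(0.1, 1.0),
                  "bandwidth": random.uniform(0.1, 1.0)} for cid in clients}

def sample_allocation(states):
    """Server derives an allocation (action) from a simple policy that
    favors clients reporting higher capability."""
    scores = {cid: s["compute"] * s["bandwidth"] for cid, s in states.items()}
    total = sum(scores.values())
    # Resource share proportional to each client's capability score.
    return {cid: score / total for cid, score in scores.items()}

states = metadata_request(["UE1", "UE2", "UE3"])  # METADATA responses (state)
print(sample_allocation(states))                  # per-client resource shares
```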
As per Claim 8, the claim is rejected under the same rationale as claim 1.
As per Claim 18, the claim is rejected under the same rationale as claim 1.
Claims 2-6, 9-13 and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over XU et al. (US 2021/0143987) in view of Balakrishnan (US 2023/0177349), and further in view of Park et al. (US 2020/0272898).
As per Claim 2, the combination of XU and Balakrishnan teaches the method of claim 1, wherein the first response message includes information (see para. 35-37 and 60-65, wherein the Aggregator generates an aggregated/global model from all participants' encrypted responses and "can then transmit the Aggregate Model … to the participant(s)"; as taught by XU).
The combination of XU and Balakrishnan fails to teach information on a split local model.
On the other hand, Park teaches information on a split local model (see para. 37, describing the splitting of the layers of the learning models; as taught by Park).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention taught by XU and Balakrishnan by including the teachings of Park relating to the splitting of models, because both are directed to the art of deep learning, and the splitting and layering process improves the accuracy of the learning process (see para. 11 of Park).
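By way of illustration only, the layer-splitting concept cited from Park (para. 37) may be sketched as follows. The split point and the list-of-layers representation are hypothetical assumptions.

```python
# Hypothetical sketch of splitting a learning model's layers into a
# terminal-side part and a split (e.g., offloaded) part.

def split_model(layers, split_index):
    """Divide an ordered list of layers at split_index."""
    return layers[:split_index], layers[split_index:]

layers = ["conv1", "conv2", "fc1", "fc2"]
local, split_part = split_model(layers, 2)
print(local)       # ['conv1', 'conv2'] -- kept on the terminal
print(split_part)  # ['fc1', 'fc2']     -- the split local model portion
```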
As per Claim 3, the combination of XU, Balakrishnan and Park teaches the method of claim 1, wherein the received information associated with the total local model includes the information on the split local model (see XU para. 35-37 and 60-65, wherein the Aggregator generates an aggregated/global model from all participants' encrypted responses and "can then transmit the Aggregate Model … to the participant(s)"; and see Park para. 37, describing the splitting of the layers of the learning models).
As per Claim 4, the combination of XU, Balakrishnan and Park teaches the method of claim 3, further comprising: modifying a part of a layer of a local model of the terminal to the split local model based on the received information associated with the total local model (see para. 8: the first aggregation vector defines a respective weighting for each of the plurality of participants; this enables the aggregator to dynamically weight each participant according to any number of factors, and to adapt to new participants and account for prior participants by modifying the weights assigned to each in the vector; as taught by XU).
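By way of illustration only, the claim 4 limitation of modifying part of the terminal's local model layers to the split local model may be sketched as follows. The replacement rule and layer names are hypothetical assumptions, not drawn from the cited references.

```python
# Hypothetical sketch: replace a contiguous part of the local layers with
# the received split local model layers.

def apply_split_update(local_layers, split_layers, start_index):
    end_index = start_index + len(split_layers)
    return local_layers[:start_index] + split_layers + local_layers[end_index:]

local = ["conv1", "conv2", "fc1", "fc2"]
print(apply_split_update(local, ["fc1_global", "fc2_global"], 2))
# ['conv1', 'conv2', 'fc1_global', 'fc2_global']
```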
As per Claim 5, the combination of XU, Balakrishnan and Park teaches the method of claim 1, wherein the second response message includes comparison information between local model-related data of another terminal participating in the federated learning and local model-related data of the terminal (see para. 36: a Participant 120 that has a large amount of training data may be afforded a relatively higher weight, as compared to a Participant 120 with less training data; thus the different participants' data are compared; as taught by XU).
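By way of illustration only, the data-size-based weighting paraphrased from XU (para. 36) may be sketched as follows; the sample counts are hypothetical.

```python
# Hypothetical sketch: participants with more training data receive
# relatively higher aggregation weights (cf. XU para. 36).

def data_size_weights(sample_counts):
    total = sum(sample_counts.values())
    return {pid: n / total for pid, n in sample_counts.items()}

print(data_size_weights({"A": 8000, "B": 1500, "C": 500}))
# {'A': 0.8, 'B': 0.15, 'C': 0.05} -- the larger dataset yields a higher weight
```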
As per Claim 6, the combination of XU, Balakrishnan and Park teaches wherein performing federated learning based on the received resource allocation-related information comprises that the terminal and terminals of a group, to which the terminal belongs, all perform federated learning based on a same resource (see para. 172: the central server utilizes a clustering algorithm based on the clients' data probability distributions to determine cluster groups; the clustering may be performed using any suitable algorithm, e.g., those described above; the central server also assigns each client to a cluster group, each cluster group being identified by its nominal data distribution qi, so that clients with similar data distributions are likely to belong to the same cluster; and see para. 187, wherein the clustered group performs the federated learning, and the central server may select K clients for learning randomly or based on various factors, e.g., communication, compute, and/or other client device abilities, noting that selecting only the fastest K clients in federated learning can lead to certain issues; as taught by Balakrishnan).
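By way of illustration only, the distribution-based client clustering paraphrased from Balakrishnan (para. 172) may be sketched as follows. The L1 distance, the greedy assignment, and the threshold are hypothetical assumptions; Balakrishnan notes that any suitable clustering algorithm may be used.

```python
# Hypothetical sketch: group clients whose data probability distributions
# are similar, each cluster identified by a nominal distribution q_i.

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def cluster_clients(distributions, threshold=0.5):
    """Greedy clustering: join the first cluster whose nominal
    distribution is within `threshold`, else start a new cluster."""
    clusters = []  # each cluster: {"q": nominal distribution, "members": [...]}
    for cid, dist in distributions.items():
        for c in clusters:
            if l1_distance(dist, c["q"]) < threshold:
                c["members"].append(cid)
                break
        else:
            clusters.append({"q": dist, "members": [cid]})
    return clusters

dists = {"UE1": [0.9, 0.1], "UE2": [0.85, 0.15], "UE3": [0.1, 0.9]}
for c in cluster_clients(dists):
    print(c["members"])   # ['UE1', 'UE2'] then ['UE3']
```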
As per Claims 9-13 and 19-22, the claims are rejected under the same rationale as claims 2-6.
Claims 7, 14 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over XU et al. (US 2021/0143987) in view of Balakrishnan (US 2023/0177349), in view of Park et al. (US 2020/0272898), and further in view of CHU et al. (US 2021/0374617).
As per Claim 7, the combination of XU, Balakrishnan and Park teaches the method of claim 5, wherein the group is determined based on the comparison information between the local model-related data of another terminal participating in the federated learning and the local model-related data of the terminal (see para. 36: a Participant 120 that has a large amount of training data may be afforded a relatively higher weight, as compared to a Participant 120 with less training data; thus the different participants' data are compared; as taught by XU).
The combination of XU, Balakrishnan and Park fails to teach wherein a difference of data distribution between terminals within the determined group is larger than a difference of data distribution between the determined groups.
On the other hand, CHU teaches wherein a difference of data distribution between terminals within the determined group is larger than a difference of data distribution between the determined groups (see para. 93: in various example embodiments, the present disclosure describes methods and systems for performing horizontal federated learning; the disclosed example embodiments enable collaboration among clients yet maintain data privacy of each client; local models that are learned using the horizontal federated learning technique discussed herein may achieve relatively high accuracy performance for all clients having non-IID data distribution; group-wise collaboration (e.g., implicitly via calculation of collaboration coefficients between pairs of sets of model parameters) is leveraged to enable collaboration among non-IID clients; as taught by CHU). This is similar to the applicant's specification, which describes the result of having non-IID data between group members as intra-group divergence higher than inter-group divergence: "The base station or server may determine data of terminals within the group to have a characteristic of non-IID between them. Accordingly, a difference of data distribution between terminals within the determined group may be larger than a difference of data distribution between the determined groups."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention taught by XU, Balakrishnan and Park by including the teachings of CHU relating to the clustering of data based on higher divergence, because both are directed to the art of federated learning, and such a process enables better accuracy (see para. 93 of CHU).
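By way of illustration only, the divergence relationship recited in claim 7 (intra-group data-distribution divergence larger than inter-group divergence) may be sketched as follows. The use of KL divergence and the example distributions are hypothetical assumptions, not CHU's algorithm.

```python
# Hypothetical sketch: within a non-IID group, member distributions
# diverge more from each other than the group-level distributions
# diverge from one another.
import math

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two terminals in one group with deliberately dissimilar (non-IID) data:
ue_a = [0.8, 0.2]
ue_b = [0.2, 0.8]
intra = kl_divergence(ue_a, ue_b)

# The group-level (averaged) distributions end up similar to each other:
group1 = [(a + b) / 2 for a, b in zip(ue_a, ue_b)]   # [0.5, 0.5]
group2 = [0.45, 0.55]
inter = kl_divergence(group1, group2)

print(intra > inter)   # True: intra-group divergence exceeds inter-group
```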
As per Claims 14 and 23, the claims are rejected under the same rationale as claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERIEF BADAWI whose telephone number is (571)272-9782. The examiner can normally be reached Monday - Friday, 8:00am - 5:30pm, Alt Friday, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cordelia Zecher can be reached on 571-272-7771. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169