Prosecution Insights
Last updated: April 19, 2026
Application No. 18/016,470

FEDERATED LEARNING METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA — §101, §103
Filed
Apr 11, 2023
Examiner
COULSON, JESSE CHEN
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
ZTE CORPORATION
OA Round
1 (Non-Final)
25%
Grant Probability
At Risk
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants only 25% of cases
25%
Career Allow Rate
1 granted / 4 resolved
-30.0% vs TC avg
Strong +100% interview lift
+100.0%
Interview Lift
(chart: allowance rate with vs. without interview, resolved cases)
Typical timeline
3y 3m
Avg Prosecution
33 currently pending
Career history
37
Total Applications
across all art units

Statute-Specific Performance

§101
30.6%
-9.4% vs TC avg
§103
29.8%
-10.2% vs TC avg
§102
22.6%
-17.4% vs TC avg
§112
17.1%
-22.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 4 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed on 1/17/2023. Claims 1-15 are pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/10/2024 is in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. It has been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1: Step 1: The claim recites a method, which is one of the four statutory categories of patentable subject matter. Step 2A, prong 1: The claim recites an abstract idea, "a federated learning method applied to a layer-i node, with i being any integer greater than or equal to 2 and less than or equal to (N-1), and (N-1) being the number of layers of federated learning," which amounts to a mental process, as it can be performed in the human mind. The claim also recites an abstract idea, "calculating an updated layer-(i-1) global gradient corresponding to the layer-i node according to the first gradient corresponding to the at least one layer-(i-1) node and a layer-(i-1) weight index corresponding to the layer-i node, wherein the layer-(i-1) weight index is a communication index," which is a mathematical concept.
Step 2A, prong 2: The additional element of receiving a first gradient corresponding to and reported by at least one layer-(i-1) node under a layer-i node does not integrate the abstract idea into a practical application, because receiving data is considered insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)). Step 2B: The additional element of receiving a first gradient corresponding to and reported by at least one layer-(i-1) node under a layer-i node does not amount to significantly more, because the additional element is insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Therefore, the claim is ineligible.

Regarding Claim 2: Claim 2 incorporates the rejection of Claim 1. The claim further recites a description of the communication index from the calculating step and is ineligible for the same reasons as set forth in Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 3: Claim 3 incorporates the rejection of Claim 1. The claim further recites a description of the weight indexes from the calculating step and is ineligible for the same reasons as set forth in Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 4: Claim 4 incorporates the rejection of Claim 1. The claim further recites a description of the first gradient from the receiving and calculating steps and is ineligible for the same reasons as set forth in Claim 1.
The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 5: Claim 5 incorporates the rejection of Claim 1 and recites a further abstract idea, "calculating a weighted average of the first gradient corresponding to the at least one layer-(i-1) node with the layer-(i-1) weight index value corresponding to the at least one layer-(i-1) node taken as a weight, and obtaining the updated layer-(i-1) global gradient corresponding to the layer-i node," which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites a further additional element, "acquiring a layer-(i-1) weight index value corresponding to the at least one layer-(i-1) node according to the layer-(i-1) weight index corresponding to the layer-i node," which is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)) and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The claim is ineligible.

Regarding Claim 6: Claim 6 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites a further additional element, "issuing the updated layer-(i-1) global gradient corresponding to the layer-i node to the layer-(i-1) node," which is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)) and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The claim is ineligible.

Regarding Claim 7: Claim 7 incorporates the rejection of Claim 1 and recites a further abstract idea, "reporting the updated layer-(i-1) global gradient corresponding to the layer-i node to a layer-(i+1) node," which is a mental process, as it can be performed in the human mind. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites a further additional element, "receiving any one of an updated layer-i global gradient to an updated layer-(N-1) global gradient sent by the layer-(i+1) node, and issuing the any one of the updated layer-i global gradient to the updated layer-(N-1) global gradient to the layer-(i-1) node," which is insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)) and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). The claim is ineligible.

Regarding Claim 8: Step 1: The claim recites a method, which is one of the four statutory categories of patentable subject matter. Step 2A, prong 1: The claim recites an abstract idea, "reporting an updated gradient corresponding to the layer-1 node to a layer-2 node," which amounts to a mental process, as it can be performed in the human mind.
The claim also recites an abstract idea, "…the layer-j global gradient is obtained through calculation according to a first gradient corresponding to at least one layer-j node and a layer-j weight index corresponding to a layer-(j+1) node; the layer-j weight index is a communication index; and j is any integer greater than or equal to 1 and less than or equal to (N-1), and (N-1) is the number of layers of federated learning," which amounts to a mental process, as it can be performed in the human mind.

Step 2A, prong 2: The additional element of "receiving an updated layer-j global gradient sent from layer-2 node…" does not integrate the abstract idea into a practical application, because receiving data is considered insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)). Step 2B: The additional element of "receiving an updated layer-j global gradient sent from layer-2 node…" does not amount to significantly more, because the additional element is insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Therefore, the claim is ineligible.

Regarding Claim 9: Claim 9 incorporates the rejection of Claim 8. The claim further recites a description of the first gradient from the receiving step and is ineligible for the same reasons as set forth in Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 10: Claim 10 incorporates the rejection of Claim 8. The claim further recites a description of the communication index from the calculation step and is ineligible for the same reasons as set forth in Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 11: Claim 11 incorporates the rejection of Claim 8. The claim further recites a description of the weight indexes from the calculation step and is ineligible for the same reasons as set forth in Claim 8. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 12: Step 1: The claim recites a method, which is one of the four statutory categories of patentable subject matter. Step 2A, prong 1: The claim recites an abstract idea, "a federated learning method applied to a layer-N node or a layer-N subsystem, with (N-1) being the number of layers of federated learning," which amounts to a mental process, as it can be performed in the human mind. The claim also recites an abstract idea, "calculating a layer-(N-1) global gradient corresponding to the layer-N node or the layer-N subsystem according to the layer-(N-2) global gradient corresponding to the at least one layer-(N-1) node and a layer-(N-1) weight index, wherein the layer-(N-1) weight index is a communication index," which is a mathematical concept. Step 2A, prong 2: The additional element of receiving a layer-(N-2) global gradient corresponding to and reported by at least one layer-(N-1) node under the layer-N node or the layer-N subsystem does not integrate the abstract idea into a practical application, because receiving data is considered insignificant extra-solution activity of "mere data gathering" (MPEP 2106.05(g)).
Step 2B: The additional element of receiving a layer-(N-2) global gradient corresponding to and reported by at least one layer-(N-1) node under the layer-N node or the layer-N subsystem does not amount to significantly more, because the additional element is insignificant extra-solution activity and, further, is a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Therefore, the claim is ineligible.

Regarding Claim 13: Claim 13 incorporates the rejection of Claim 12. The claim further recites a description of the communication index from the calculating step and is ineligible for the same reasons as set forth in Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.

Regarding Claim 14: Claim 14 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites further additional elements, "at least one processor and a memory having stored thereon at least one program, which when executed by the at least one processor, causes the at least one processor to implement the federated learning method of claim 1," which are generic computer components used to implement the abstract idea (MPEP 2106.05(f)). The claim is ineligible.

Regarding Claim 15: Claim 15 incorporates the rejection of Claim 1. As the specification is silent about the term "computer-readable storage medium," the BRI of the medium as claimed covers signals per se; therefore, this claim does not fall within a statutory category in Step 1.
The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites a further additional element, "a computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the federated learning method of claim 1 is implemented," which is a generic computer component used to implement the abstract idea (MPEP 2106.05(f)). The claim is ineligible.

Claim 15 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it recites signals per se, not a process, machine, article of manufacture, or composition of matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Abad et al., "Hierarchical Federated Learning Across Heterogeneous Cellular Networks," from applicant IDS, hereinafter "Abad," in view of Chen et al., "Communication-Efficient Federated Deep Learning with Asynchronous Model Update and Temporally Weighted Aggregation," from applicant IDS, hereinafter "Chen."

Regarding Claim 1, Abad teaches: A federated learning method applied to a layer-i node, with i being any integer greater than or equal to 2 and less than or equal to (N-1), and (N-1) being the number of layers of federated learning (Abad, p. 5, Fig. 1 shows N = 3 layers: MU, SBS, and MBS; i = 2 is the SBS layer), comprising: receiving a first gradient corresponding to and reported by at least one layer-(i-1) node under a layer-i node (an MU is a layer-(i-1) node under an SBS, which is a layer-i node; g_n is the first gradient; Abad, p. 4, col. 1, paragraph 5, "each MU… computes the local gradient estimate, denoted by g_{n,k,t} = (1/|I_k|) Σ_{i∈I_k} ∇f_i(w_{n,t}), and transmits it to the SBS"); and calculating an updated layer-(i-1) global gradient corresponding to the layer-i node according to the first gradient corresponding to the at least one layer-(i-1) node (Abad, p. 4, col. 1, paragraph 5, "the SBS n aggregates the gradients simply by taking the average") and…

Abad does not expressly teach: …a layer-(i-1) weight index corresponding to the layer-i node, wherein the layer-(i-1) weight index is a communication index. However, Chen teaches: …a layer-(i-1) weight index corresponding to the layer-i node, wherein the layer-(i-1) weight index is a communication index (Chen, p. 4, col. 2, paragraph 1, "the following model aggregation method taking into account of timeliness of the local models is proposed"; p. 4, Equation 1, the aggregation weight (n_k/n)·(e/2)^(−(t−timestamp_k)) reflects delay and is the communication index; p. 5, col. 1, paragraph 3, "Timestamps are stored and to be used to weight the timeliness of corresponding parameters in aggregation").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching of temporally weighting the aggregation of local models on a server with the aggregation across mobile users and small cell base stations of Abad. The motivation to do so would be to enhance the accuracy and convergence of the SBS and MBS models (Chen, p. 1, Abstract, "temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model").

Regarding Claim 2, Abad in view of Chen teaches the method of Claim 1 as referenced above. Chen further teaches: wherein the communication index comprises at least one of: an average delay, traffic, uplink and downlink traffic, or a weighted average of the traffic and the uplink and downlink traffic (Chen shows average delay as the weight index; Chen, p. 4, Equation 1 shows timestamps (delay) averaged over local clients with summation and division over n clients during aggregation).

Regarding Claim 3, Abad in view of Chen teaches the method of Claim 1 as referenced above. In the combination as set forth above, Abad in view of Chen further teaches: wherein weight indexes corresponding to different nodes in a same layer are the same or different, and weight indexes corresponding to different nodes in different layers are the same or different (values of timestamps in weighted aggregation are per client; Chen, p. 6, Algorithm 2 shows timestamps being updated in lines 4-5 and 18-19; weight indexes being the same or different in the same and different layers covers any value of timestamps of a given device/server).

Regarding Claim 4, Abad in view of Chen teaches the method of Claim 1 as referenced above.
Abad further teaches: wherein if i is 2, the first gradient corresponding to the layer-(i-1) node is an updated gradient obtained by performing model training by the layer-(i-1) node (the updated gradient is obtained from training, as shown by ∇f_i(w_{n,t}), which is the gradient with respect to the loss function; Abad, p. 4, col. 1, paragraph 5, "each MU k, k ∈ C_n for n = 1, . . . , N computes the local gradient estimate, denoted by g_{n,k,t} = (1/|I_k|) Σ_{i∈I_k} ∇f_i(w_{n,t})"); and if i is greater than 2 and less than or equal to (N-1), the first gradient corresponding to the layer-(i-1) node is an updated layer-(i-2) global gradient corresponding to the layer-(i-1) node.

Regarding Claim 5, Abad in view of Chen teaches the method of Claim 1 as referenced above. In the combination as set forth above, Abad in view of Chen further teaches: wherein calculating the updated layer-(i-1) global gradient corresponding to the layer-i node according to the first gradient corresponding to the at least one layer-(i-1) node and the layer-(i-1) weight index corresponding to the layer-i node comprises: acquiring a layer-(i-1) weight index value corresponding to the at least one layer-(i-1) node according to the layer-(i-1) weight index corresponding to the layer-i node (Chen, p. 5, col. 1, paragraph 3, "In initialization (Algorithm 2, Lines 2-6), the central model ω0, timestamps timestampg and timestamps are initialized. Timestamps are stored and to be used to weight the timeliness of corresponding parameters in aggregation"); and calculating a weighted average of the first gradient corresponding to the at least one layer-(i-1) node with the layer-(i-1) weight index value corresponding to the at least one layer-(i-1) node taken as a weight, and obtaining the updated layer-(i-1) global gradient corresponding to the layer-i node (in the combination above, w_{t+1} aggregates the updates from the global gradient and w_k is computed from g_n; w_{t+1}, corresponding to the server layer, is calculated in Chen, p. 4, Equation 1, where w_k is multiplied by the weighted average of delay).

Regarding Claim 6, Abad in view of Chen teaches the method of Claim 1 as referenced above. Abad further teaches: after calculating the updated layer-(i-1) global gradient corresponding to the layer-i node according to the first gradient corresponding to the at least one layer-(i-1) node and the layer-(i-1) weight index corresponding to the layer-i node, further comprising: issuing the updated layer-(i-1) global gradient corresponding to the layer-i node to the layer-(i-1) node (Abad, p. 4, col. 1, paragraph 5, "each MU… computes the local gradient estimate… and transmits it to the SBS in cluster n. Then, the SBS n aggregates the gradients simply by taking the average… This average is then sent back by the SBS to the MUs in its cluster, and the model at cluster n = 1, . . . , N is updated as w_{n,t+1} = w_{n,t} − η_t g_{n,t}").

Regarding Claim 7, Abad in view of Chen teaches the method of Claim 1 as referenced above. Abad further teaches: after calculating the updated layer-(i-1) global gradient corresponding to the layer-i node according to the first gradient corresponding to the at least one layer-(i-1) node and the layer-(i-1) weight index corresponding to the layer-i node, further comprising: reporting the updated layer-(i-1) global gradient corresponding to the layer-i node to a layer-(i+1) node (Abad, p. 4, col. 2, paragraph 1, "After H iterations, all SBSs transmit their models to the MBS"); and receiving any one of an updated layer-i global gradient to an updated layer-(N-1) global gradient sent by the layer-(i+1) node (Abad, p. 4, Algorithm 3, line 12, "MBS transmit w to all SBSs"), and issuing the any one of the updated layer-i global gradient to the updated layer-(N-1) global gradient to the layer-(i-1) node (Abad, p. 4, Algorithm 3, line 13, "SBSs transmit w to their MUs").

Regarding Claim 8, Abad teaches: A federated learning method applied to a layer-1 node (Abad, p. 5, Fig. 1: Hierarchical FL shows the MU, which is a layer-1 node), comprising: reporting an updated gradient corresponding to the layer-1 node to a layer-2 node (the SBS is a layer-2 node and g_n is the updated gradient; Abad, p. 4, col. 1, paragraph 5, "each MU k, k ∈ C_n for n = 1, . . . , N computes the local gradient estimate, denoted by g_{n,k,t} = (1/|I_k|) Σ_{i∈I_k} ∇f_i(w_{n,t}), and transmits it to the SBS"); and receiving an updated layer-j global gradient sent from the layer-2 node (Abad, p. 4, col. 2, paragraph 1, "This average is then sent back by the SBS to the MUs in its cluster, and the model at cluster n = 1, . . . , N is updated as w_{n,t+1} = w_{n,t} − η_t g_{n,t} (20)"), wherein the layer-j global gradient is obtained through calculation according to a first gradient corresponding to at least one layer-j node (Abad, p. 4, col. 1, paragraph 5, "each MU… computes the local gradient estimate… transmits it to the SBS… the SBS n aggregates the gradients simply by taking the average, g_{n,t} = (Σ_{k∈C_n} g_{n,k,t}) / |C_n| (19)") and... and j is any integer greater than or equal to 1 and less than or equal to (N-1), and (N-1) is the number of layers of federated learning (j is equal to 1 and layer-j is the MU layer, which is less than N-1, where N is 3).

Abad does not expressly teach: …a layer-j weight index corresponding to a layer-(j+1) node; the layer-j weight index is a communication index… However, Chen teaches: …a layer-j weight index corresponding to a layer-(j+1) node; the layer-j weight index is a communication index… (Chen, p. 4, col. 2, paragraph 1, "the following model aggregation method taking into account of timeliness of the local models is proposed"; p. 4, Equation 1, the aggregation weight (n_k/n)·(e/2)^(−(t−timestamp_k)) reflects delay and is the communication index; p. 5, col. 1, paragraph 3, "Timestamps are stored and to be used to weight the timeliness of corresponding parameters in aggregation").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching of temporally weighting the aggregation of local models on a server with the aggregation across mobile users and small cell base stations of Abad. The motivation to do so would be to enhance the accuracy and convergence of the SBS and MBS models (Chen, p. 1, Abstract, "temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model").

Regarding Claim 9, Abad in view of Chen teaches the method of Claim 8 as referenced above. Abad further teaches: wherein if j is 1, the first gradient corresponding to the layer-j node is an updated gradient corresponding to the layer-j node (the updated gradient is obtained from training, as shown by ∇f_i(w_{n,t}), which is the gradient with respect to the loss function; Abad, p. 4, col. 1, paragraph 5, "each MU k, k ∈ C_n for n = 1, . . . , N computes the local gradient estimate, denoted by g_{n,k,t} = (1/|I_k|) Σ_{i∈I_k} ∇f_i(w_{n,t})"); and if j is greater than 1 and less than or equal to (N-1), the first gradient corresponding to the layer-j node is an updated layer-(j-1) global gradient corresponding to the layer-j node.

Regarding Claim 10, the rejection of Claim 8 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 2.

Regarding Claim 11, the rejection of Claim 8 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 3.

Regarding Claim 12, Abad teaches: A federated learning method applied to a layer-N node or a layer-N subsystem, with (N-1) being the number of layers of federated learning (Abad, p. 5, Fig. 1 shows N = 3 layers: MU, SBS, and MBS), comprising: receiving a layer-(N-2) global gradient corresponding to and reported by at least one layer-(N-1) node under the layer-N node or the layer-N subsystem (the MU is a layer-(N-2) node under the SBS, which is a layer-(N-1) node under the MBS, which is a layer-N node; g_n is the first gradient; Abad, p. 4, col. 1, paragraph 5, "each MU k, k ∈ C_n for n = 1, . . . , N computes the local gradient estimate, denoted by g_{n,k,t} = (1/|I_k|) Σ_{i∈I_k} ∇f_i(w_{n,t}), and transmits it to the SBS"); and calculating a layer-(N-1) global gradient corresponding to the layer-N node or the layer-N subsystem according to the layer-(N-2) global gradient corresponding to the at least one layer-(N-1) node (Abad, p. 4, col. 1, paragraph 5, "the SBS n aggregates the gradients simply by taking the average") and…

Abad does not expressly teach: …a layer-(N-1) weight index, wherein the layer-(N-1) weight index is a communication index. However, Chen teaches: …a layer-(N-1) weight index, wherein the layer-(N-1) weight index is a communication index (Chen, p. 4, col. 2, paragraph 1, "the following model aggregation method taking into account of timeliness of the local models is proposed"; p. 4, Equation 1, the aggregation weight (n_k/n)·(e/2)^(−(t−timestamp_k)) reflects delay and is the communication index; p. 5, col. 1, paragraph 3, "Timestamps are stored and to be used to weight the timeliness of corresponding parameters in aggregation").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching of temporally weighting the aggregation of local models on a server with the aggregation across mobile users and small cell base stations of Abad. The motivation to do so would be to enhance the accuracy and convergence of the SBS and MBS models (Chen, p. 1, Abstract, "temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model").

Regarding Claim 13, the rejection of Claim 12 is incorporated and, further, the claim is rejected for the same reasons as set forth in Claim 2.

Regarding Claim 14, Abad in view of Chen teaches the method of Claim 1 as referenced above. Abad further teaches: An electronic device, comprising: at least one processor; and a memory having stored thereon at least one program which, when executed by the at least one processor, causes the at least one processor to implement the federated learning method of claim 1 (Abad trains and tests the FL method in simulations, demonstrating that the method is performed on a computer, in which processor, memory, and storage devices are inherent; Abad, p. 7, col. 2, paragraph 4, "For the simulations, we also utilize some large batch training"; p. 9, Table 3 shows testing results).

Regarding Claim 15, Abad in view of Chen teaches the method of Claim 1 as referenced above. Abad further teaches: A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the federated learning method of claim 1 is implemented (Abad trains and tests the FL method in simulations, demonstrating that the method is performed on a computer, in which processor, memory, and storage devices are inherent; Abad, p. 7, col. 2, paragraph 4, "For the simulations, we also utilize some large batch training"; p. 9, Table 3 shows testing results).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON, whose telephone number is (571) 272-4716. The examiner can normally be reached Monday-Friday, 8:30-5:30.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE C COULSON/
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
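As an illustrative aside (not part of the Office action), the combination proposed in the §103 rejection can be sketched in Python: Abad's hierarchical averaging at the SBS layer, with the plain average replaced by a weighted average using Chen's delay-based weight (n_k/n)·(e/2)^(−(t−timestamp_k)) as the claimed "communication index." This is a minimal sketch, not code from either reference; all function names are hypothetical, and normalizing the weights into a true weighted average (per the Claim 5 language) is an assumption beyond Chen's Equation 1.

```python
import math

def communication_index(n_k, n, t, timestamp_k):
    # Chen's per-client aggregation weight (Eq. 1):
    # a larger delay (t - timestamp_k) yields a smaller weight.
    return (n_k / n) * (math.e / 2) ** (-(t - timestamp_k))

def weighted_aggregate(gradients, weights):
    # Weighted average of reported gradients, as recited in Claim 5:
    # each layer-(i-1) gradient is weighted by its weight-index value.
    total = sum(weights)
    dim = len(gradients[0])
    return [sum(w * g[d] for w, g in zip(weights, gradients)) / total
            for d in range(dim)]

def sbs_round(mu_gradients, mu_sizes, mu_timestamps, t, w, lr):
    # One SBS-layer round of the proposed Abad/Chen combination:
    # aggregate MU gradients with delay-based weights, then update the
    # cluster model, w_{n,t+1} = w_{n,t} - eta_t * g_{n,t} (Abad, Eq. 20).
    n = sum(mu_sizes)
    weights = [communication_index(n_k, n, t, ts)
               for n_k, ts in zip(mu_sizes, mu_timestamps)]
    g = weighted_aggregate(mu_gradients, weights)
    return [w_d - lr * g_d for w_d, g_d in zip(w, g)]
```

With equal cluster sizes and fresh timestamps the weights are equal, so the aggregation reduces to Abad's plain average; an MU with a stale timestamp is progressively discounted, which is the "timeliness" behavior the examiner cites from Chen.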

Prosecution Timeline

Apr 11, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
25%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
