Prosecution Insights
Last updated: April 19, 2026
Application No. 17/807,871

GROUPED AGGREGATION IN FEDERATED LEARNING

Final Rejection — §103
Filed: Jun 21, 2022
Examiner: GONZALES, VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 78% — above average (410 granted / 522 resolved; +23.5% vs TC avg)
Interview Lift: +10.5% — moderate lift, comparing resolved cases with vs. without interview
Typical Timeline: 3y 6m average prosecution; 26 applications currently pending
Career History: 548 total applications across all art units

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
TC averages are estimates. Based on career data from 522 resolved cases.
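Each row pairs the examiner's allowance rate for a rejection type with its delta against the Tech Center average, so subtracting the delta from the rate recovers the implied TC baseline. A quick sanity check of that arithmetic (a sketch; the variable names are mine, not the report's):

```python
# Examiner allow rate (%) after each rejection type, and delta vs. the TC average.
stats = {"101": (21.2, -18.8), "103": (39.9, -0.1),
         "102": (13.2, -26.8), "112": (14.6, -25.4)}

# Implied TC average = examiner rate minus the reported delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same TC baseline of 40.0%
```

Notably, all four rows back out to the same 40.0% baseline, consistent with a single Tech Center average estimate being used for every statute.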

Office Action

§103
DETAILED ACTION

This action is written in response to the remarks and amendments dated 12/15/25. This action is made final. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The Applicants argue that the previous art of record does not anticipate or render obvious the claims as currently amended. The Examiner provides updated prior art rejections below, necessitated by the current amendments.

Claim Objections

Claim 9 contains two periods. Appropriate correction is required. See MPEP 608.01(m) and 37 CFR 1.75(i).

Subject Matter Eligibility

In determining whether the claims are subject matter eligible, the Examiner has considered and applied the 2019 USPTO Patent Eligibility Guidelines, as well as the guidance in MPEP chapter 2106. The Examiner finds that the independent claims are directed to the practical application of improving federated learning via hierarchical aggregation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

"(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made."

The following references are relied upon in the rejections below:

Chai (Chai, Zheng, et al. "TiFL: A tier-based federated learning system." Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing. 2020. Cited by Applicant in IDS dated 6/21/22.)

Hu (Hu, Peiyun, Zachary C. Lipton, Anima Anandkumar, and Deva Ramanan. "Active learning with partial feedback." arXiv preprint arXiv:1802.07427. 9 Jul 2019.)

Verbraeken (Verbraeken, Joost, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, and Jan S. Rellermeyer. "A survey on distributed machine learning." ACM Computing Surveys (CSUR) 53, no. 2 (2020): 1-33.)

Zhang (Zhang, Tao, et al. "Privacy-preserving asynchronous grouped federated learning for IoT." IEEE Internet of Things Journal 9.7 (2021): 5511-5523.)

Claims 1-2, 4, 6-9, 11, 13-16, 18 and 20 are rejected under 35 U.S.C. 103(a) as being obvious over Chai and Hu.

Regarding claims 1, 8 and 15, Chai discloses a processor-implemented method (and a related system and computer program product), the method comprising:

initializing a plurality of aggregation groups including a plurality of parties and a plurality of local aggregators; …

[Image: Chai, p. 128, fig. 2.] "Aggregation groups" :: tiers. "Plurality of parties" :: clients. "Local aggregators" :: p. 128, second col.: "It is worth to note that in Fig. 2, we only show a single aggregator rather than the hierarchical master-child aggregator design for a clean presentation purpose. For large scale system in practice, TiFL supports master-child aggregator design for scalability and fault tolerance."

submitting a final response from the first local aggregator or a second local aggregator from the plurality of local aggregators to a global aggregator; and

See fig. 2; the depicted "Aggregator" is a global aggregator which receives updates Wi from each local aggregator.

building a machine learning model based on the final response.

Id.

Hu discloses the following further limitation which Chai does not disclose:

submitting a query to a first party from the plurality of parties, the first party having access to first party local data;

P. 2, fig. 1. [Image: Hu, fig. 1, in which a model queries a human annotator: "Does this image contain a dog?"] "First party local data" :: The Examiner notes that human annotators will always have access to their own local data, i.e., their own personal knowledge about what a dog looks like. Cf. Applicant's specification at [0046]: "In yet another embodiment, data may be gathered in response to a query. For example, if a query asks whether or not a given image contains a bicycle, the grouped federated learning program 110A, 110B may ask a user whether or not that image contains a bicycle." (Emphasis added.)

receiving an initial response to the query from the first party that does not include the first party local data or a first party local model derived from the first party local data;

Id. "Yes."

submitting the initial response to a first local aggregator ….;

Id. "model".

At the time of filing, it would have been obvious to a person of ordinary skill to apply the technique disclosed by Hu for querying human annotators to the system of Chai because this would provide improved labeling data which could in turn be used to further train machine classifiers. Both disclosures pertain to machine learning.

Regarding independent claims 8 and 15, the computer hardware components recited therein (i.e., "one or more processors, one or more computer-readable memories, [and] one or more computer-readable tangible storage medium" and "one or more computer-readable tangible storage medium") are inherent throughout the Chai disclosure.

Regarding claims 2, 9 and 16, Chai discloses the further limitation comprising: submitting an intermediary response from the first local aggregator or a first intermediary aggregator to the second local aggregator, the first intermediary aggregator, or a second intermediary local aggregator. P. 131: "[4] introduces multiple levels of server aggregators in order to achieve scalability and fault tolerance in extreme scale situations, i.e., with millions of clients."

Regarding claims 4, 11 and 18, Chai discloses the further limitation wherein the plurality of aggregation groups each correspond to a physical location. P. 125: "In conventional high-performance computing (HPC), all the data is collected and centralized in one location and proceed by supercomputers with hundreds to thousands of computing nodes. However, security and privacy concerns have led to new legislation such as the General Data Protection Regulation (GDPR) [27] and the Health Insurance Portability and Accountability Act (HIPAA) [24] that prevent transmitting data to a centralized location, thus making conventional high performance computing difficult to be applied for collecting and processing the decentralized data. Federated Learning (FL) [15] shines light on a new emerging high performance computing paradigm by addressing the security and privacy challenges through utilizing decentralized data that is training local models on the local data of each client (data parties) and using a central aggregator to accumulate the learned gradients of local models to train a global model." (Emphasis added.)

Regarding claims 6, 13 and 20, Chai discloses the further limitation wherein the submitting further comprises: submitting a plurality of initial responses to more than one local aggregator from the plurality of local aggregators. PP. 128-29: "Different from vanilla FL that employs a random client selection policy, in TiFL the scheduler selects a tier and then randomly selects targeted number of clients from that tier. After the selection of clients, the training proceeds as state-of-the-art FL system does." P. 127, first col.: "The vanilla FL algorithm is briefly summarized in Alg. 1. The aggregator first randomly initializes weights of the global model denoted by 𝜔0. At the beginning of each round, the aggregator sends the current model weights to a subset of randomly selected clients. Each selected client then trains its local model with its local data and sends back the updated weights to the aggregator after local training. At each round, the aggregator waits until all selected clients respond with their corresponding trained weights. This iterative process keeps on updating the global model until a certain number of rounds are completed or a desired accuracy is reached."

Regarding claims 7 and 14, Chai discloses the further limitation wherein each party that submits a plurality of responses submits responses to different local aggregators so each local aggregator receives an incomplete subset of the plurality of responses. P. 128, fig. 2 (cited supra), illustrating a hierarchical aggregation scheme. As described more fully in sec. 4.1, each "tier" comprises at least one local aggregator, and worker nodes receive updates only from their immediate parent aggregator, and not from all aggregators in the system.

Claims 3, 5, 10, 12, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chai, Hu and Zhang.

Regarding claims 3, 10 and 17, Zhang discloses the following further limitation which Chai/Hu do not disclose: wherein each local aggregator from the plurality of local aggregators selects an aggregation method, and wherein the final response is determined using the aggregation method. P. 5513: "C. Our Innovation. To avoid the shortcomings of DP-based methods and cryptography-based methods in the existing FL for IoT, RDP is applied into our framework to enable strong composition privacy results. We design an adaptive RDP-based privacy budget allocation protocol since the direct using of RDP cannot enable the models' convergence to be an optimal one. Our framework enables the server to adjust the privacy budget of the corresponding model dynamically according to the accuracy of each local model on the public validation data set, which can tradeoff the utility of the global model and privacy guarantee. Next, the local model is perturbed according to the allocated privacy budget in the local training phase."

At the time of filing, it would have been obvious to a person of ordinary skill to apply the Zhang technique for aggregator-based differential privacy within a federated learning system to the local aggregators of Chai/Hu because this would provide enhanced performance in the face of heterogeneous (and possibly malicious) worker nodes. Both Zhang and Chai pertain to federated learning.

Regarding claims 5, 12 and 19, Zhang discloses the following further limitation which Chai/Hu do not disclose: wherein a party from the plurality of parties can be removed from a first aggregation group of the plurality of aggregation groups or placed in a second aggregation group of the plurality of aggregation groups after the initializing of the plurality of aggregation groups. PP. 5512-13, sec. A(1), "Adaptive Weight-Based Worker Selection in AFL". The obviousness analysis of claims 3/10/17 applies equally here.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103(a) as being obvious over Verbraeken and Hu. (Alternate rejection.)

Regarding claims 1, 8 and 15, Verbraeken discloses a processor-implemented method (and a related system and computer program product), the method comprising:

initializing a plurality of aggregation groups including a plurality of parties and a plurality of local aggregators; …

[Image: Verbraeken, p. 13, fig. 3(b), annotated by Examiner.] "Aggregation groups" :: leaf ML nodes, plus their respective parent nodes.

submitting a final response from the first local aggregator or a second local aggregator from the plurality of local aggregators to a global aggregator; and

Id. The depicted "aggregate" signals propagate upwards towards the global aggregator.

building a machine learning model based on the final response.

Id.

Hu discloses the following further limitation which Verbraeken does not disclose:

submitting a query to a first party from the plurality of parties, the first party having access to first party local data;

P. 2, fig. 1. [Image: Hu, fig. 1, in which a model queries a human annotator: "Does this image contain a dog?"] "First party local data" :: The Examiner notes that human annotators will always have access to their own local data, i.e., their own personal knowledge about what a dog looks like. Cf. Applicant's specification at [0046], quoted supra.

receiving an initial response to the query from the first party that does not include the first party local data or a first party local model derived from the first party local data;

Id. "Yes."

submitting the initial response to a first local aggregator ….;

Id. "model".

At the time of filing, it would have been obvious to a person of ordinary skill to apply the technique disclosed by Hu for querying human annotators to the distributed machine learning system of Verbraeken because this would provide improved labeling data which could in turn be used to further train machine classifiers. Both disclosures pertain to distributed machine learning.

Regarding independent claims 8 and 15, the computer hardware components recited therein (i.e., "one or more processors, one or more computer-readable memories, [and] one or more computer-readable tangible storage medium" and "one or more computer-readable tangible storage medium") are inherent throughout the Verbraeken disclosure.

Additional Relevant Prior Art

The following reference was identified by the Examiner as being relevant to the disclosed invention, but is not relied upon in any particular prior art rejection:

Liu discloses a system for asynchronous federated learning in the presence of data heterogeneity. (US 2022/0383198 A1)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Vincent Gonzales, whose telephone number is (571) 270-3837. The Examiner can normally be reached Monday-Friday, 7 a.m. to 4 p.m. MT. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. Information regarding the status of an application may be obtained from the USPTO Patent Center.

/Vincent Gonzales/
Primary Examiner, Art Unit 2124
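The claim mapping above turns on a two-level aggregation topology: parties answer queries, local aggregators combine the responses within each aggregation group, and a global aggregator combines the per-group results into the final response used to build the model. A minimal sketch of that topology, assuming plain size-weighted averaging of numeric weight vectors (the function names and the averaging rule are illustrative only, not taken from the claims or from any cited reference):

```python
import random

def local_aggregate(responses):
    """Average the weight vectors received by one local aggregator from its parties."""
    n = len(responses)
    return [sum(w) / n for w in zip(*responses)]

def global_aggregate(group_results):
    """Combine per-group averages, weighted by group size, into a final response."""
    total = sum(size for _, size in group_results)
    dim = len(group_results[0][0])
    final = [0.0] * dim
    for avg, size in group_results:
        for i, w in enumerate(avg):
            final[i] += w * (size / total)
    return final

# Three aggregation groups of parties, each party holding a 2-weight "model update".
random.seed(0)
groups = [[[random.random(), random.random()] for _ in range(k)] for k in (3, 5, 2)]

group_results = [(local_aggregate(g), len(g)) for g in groups]
final_response = global_aggregate(group_results)

# Size-weighted hierarchical averaging equals a flat average over all parties.
all_parties = [p for g in groups for p in g]
flat = [sum(w) / len(all_parties) for w in zip(*all_parties)]
assert all(abs(a - b) < 1e-9 for a, b in zip(final_response, flat))
```

The closing assertion is the point of the sketch: with size weighting, the hierarchy changes who talks to whom (the property the §103 rejection maps onto Chai's tiers) without changing the aggregate itself.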

Prosecution Timeline

Jun 21, 2022 — Application Filed
Sep 23, 2025 — Non-Final Rejection (§103)
Dec 05, 2025 — Examiner Interview Summary
Dec 05, 2025 — Applicant Interview (Telephonic)
Dec 15, 2025 — Response Filed
Feb 27, 2026 — Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585920: PREDICTING OPTIMAL PARAMETERS FOR PHYSICAL DESIGN SYNTHESIS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580040: DIFFUSION MODEL FOR GENERATIVE PROTEIN DESIGN (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566984: METHODS AND SYSTEMS FOR EXPLAINING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561402: IDENTIFICATION OF A SECTION OF BODILY TISSUE FOR PATHOLOGY TESTS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547647: UNSUPERVISED MACHINE LEARNING SYSTEM TO AUTOMATE FUNCTIONS ON A GRAPH STRUCTURE (granted Feb 10, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 89% (+10.5%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
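These projections can be reproduced from the career counts in the Examiner Intelligence section: 410 grants out of 522 resolved cases gives the base rate, and adding the reported +10.5% interview lift yields the interview-adjusted figure. A quick check of that arithmetic (illustrative only; the exact rounding behavior is my assumption):

```python
granted, resolved = 410, 522
allow_rate = 100 * granted / resolved  # career allow rate, in percent
interview_lift = 10.5                  # reported lift, in percentage points

print(round(allow_rate, 1))                # 78.5, displayed as 78%
print(round(allow_rate + interview_lift))  # 89, the "With Interview" figure
```

The unrounded base rate of 78.5% explains why 78% plus a 10.5-point lift is shown as 89% rather than 88.5%.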
