Prosecution Insights
Last updated: April 19, 2026
Application No. 18/196,062

DEBUGGING IN FEDERATED LEARNING SYSTEMS

Non-Final OA (§103)
Filed: May 11, 2023
Examiner: TRAN, QUOC A
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (590 granted / 735 resolved), +25.3% vs TC avg (above average)
Interview Lift: +29.4% on resolved cases with interview (strong)
Typical Timeline: 3y 4m avg prosecution; 21 applications currently pending
Career History: 756 total applications across all art units

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center average estimates shown for comparison. Based on career data from 735 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. DETAILED ACTION This is a First Action on the Merits (FAOM), in response to the patent application filed 05/11/2023. Claim(s) 1-20 are pending. Claim(s) 1, 12 and 20 are independent. In addition, in the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Information Disclosure Statement: A signed and dated copy of applicant's IDS, filed 05/11/2023, is attached to this Office Action. Claim Rejections – 35 U.S.C. § 103: The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-20 are rejected under 35 U.S.C. 
103 as being unpatentable over Toporek et al. ("US 20230316141 A1", filed 03/28/2023) [hereinafter "Toporek"], in view of Baror et al. ("US 20230290456 A1", filed 03/08/2023) [hereinafter "Baror"]. Independent Claim 1: Toporek teaches: A method, comprising: making, by a device, a determination that performance of a global model generated by a federated learning system has experienced a degradation, wherein the global model was generated by aggregating local models trained by a plurality of trainer nodes in the federated learning system (in Toporek Para(s) 54-57, i.e., a determination that performance of a global model generated by a federated learning system has experienced a degradation, based on the update matrix of the local participant in the global dataset, which is used in retraining the global model for the local participant with a quality score greater than other local participants, and accordingly indicating how the respective update matrices should be handled in training of the global model; also, Para(s) 96-99 further mention aggregating local models trained by a plurality of trainer nodes in the federated learning system). Under the BRI, the local participants in the federated learning system of Toporek are recognized as the claimed trainer nodes. Toporek further teaches: selecting, by the device and in response to the determination, a particular trainer node from among the plurality of trainer nodes to generate ... metrics (in Toporek Para(s) 54-57 and 96-99, cited above; moreover, Para(s) 67-71 further mention producing many types and levels of performance reports, including performance information differentiated according to the recipient, and sharing the performance reports accordingly with the individual local participants in the federated learning system). Toporek further teaches: ... and providing, by the device, an indication that the particular trainer node is a root cause of the degradation (in Toporek Para(s) 54-57 and 96-99, cited above; moreover, Para 76 further mentions that the quality score of the local participant represents the level of contribution to be made by the update matrix of the local participant to the global model, as noted above; accordingly, the global server selects local participants for the subset such that a distribution of the quality scores amongst the local participants in the subset is substantially similar to the distribution of the quality scores amongst all local participants). 
It is noted that Toporek is directed to determining that performance of a global model generated by a federated learning system has experienced a degradation, based on the update matrix of a local participant in the global dataset, used in retraining the global model for the local participant with a quality score greater than other local participants. However, Toporek fails to teach the limitations: ... debugging metrics ...; obtaining, by the device, the debugging metrics from the particular trainer node. But the combination of Toporek and Baror teaches these limitations (Baror Para(s) 42, 46 and 53-55, which describe that running end-to-end federated learning projects typically involves several steps, including data collection, data preparation, data review, model training and validation, iteration and experimentation on model architectures and hyperparameters, and result analysis; some of these steps use a federated learning network to coordinate learning across multiple nodes, ... and using federated learning to train a model ... for debugging purposes and updates; this can also include setting up various containers and initiating the databases (e.g., databases and servers); the installation procedure can be done using an installation script for repeatability. Additionally, organizations, workgroups, and user accounts can also be defined. As described herein, an organization refers to an entity working with the entity that manages the system. Organizations can include hospitals, model developers, etc., and can also include one or more workgroups; a workgroup refers to a department/team within an organization. In this case, where client agents at two locations are being used, the agents are referred to as client agent(s) at the first/second location ... wherein each client agent that participated in the training reports metrics such as precision-recall curves and other performance metrics, and additional ancillary data generated during the training of a model: loss-cases, a list of errors encountered during training, etc. Moreover, Baror Para 65 further mentions that the training process finishes running and validation is performed on the validation cohorts; the training and validation results are imported into the system (either automatically or via a user-generated request to import results) and, at block 780, the project leader reviews summary results. At block 785, the project leader can manually review training metrics and loss cases, e.g., false negatives (FNs) and false positives (FPs). At block 730, the project leader reviews local loss-cases, which refer to poor model performance on specific cases in the project lead's cohort.) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Toporek's method to include a means of said ... debugging metrics ...; obtaining, by the device, the debugging metrics from the particular trainer node, as taught by Baror. That would provide improved techniques for federated learning by accounting for statistical heterogeneity and biases in the global model inherent in conventional federated learning ... [Toporek Para 4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined. 
Claim 2: Toporek and Baror further teach: the determination that the performance of the global model generated by the federated learning system has experienced the degradation using a validation data set (in Toporek Para(s) 13-14, i.e., the quality of data generated from the local participant from the validation data set). Claim 3: Toporek and Baror further teach: one or more debugging functions to the particular trainer node based, at least in part, on the debugging metrics from the particular trainer node (Baror Para(s) 42, 46, 53-55 and 65, as quoted for claim 1 above). Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Toporek's method to include this limitation, as taught by Baror, which would provide improved techniques for federated learning by accounting for statistical heterogeneity and biases in the global model inherent in conventional federated learning [Toporek Para 4]; the KSR ruling recommends that references directed to similar subject matter be combined. Claim 4: Toporek and Baror further teach: sending a request to the particular trainer node to obtain the debugging metrics from the particular trainer node (Baror Para(s) 42, 46, 53-55 and 65, as quoted for claim 1 above), with the same rationale for combination as given for claim 1. Claim 5: Toporek and Baror further teach: further comprising selecting, by the device and in response to the determination, the particular trainer node based, at least in part, on a determined degree of influence a local model executed on the particular trainer node had on the global model (in Toporek Para(s) 54-57 and 96-99, as quoted for claim 1 above; moreover, Para(s) 67-71 further mention producing many types and levels of performance reports, including performance information differentiated according to the recipient, shared with the local participants in the federated learning system, e.g., configured to perform a learning cycle at a preconfigured interval, upon receiving a certain number of update matrices from all local participants, from a selected group of local participants based on a preconfigured condition, or representing changes outside of threshold ranges, or any combination thereof). Claim 6: Toporek and Baror further teach: ranking, by the device, the plurality of trainer nodes based, at least in part, on a determined degree of influence each of the local models had on the global model to generate a ranked list of trainer nodes; and selecting, by the device and in response to the determination, the particular trainer node based, at least in part, on the ranked list of trainer nodes (in Toporek Para(s) 54-57, 67-71 and 96-99, as cited for claim 5 above). Claim 7: Toporek and Baror further teach: wherein the debugging metrics and the indication do not reveal any training data used by the particular trainer node to generate a local model associated with the particular trainer node (Baror Para(s) 42, 46, 53-55 and 65, as quoted for claim 1 above; under the BRI, this is recognized as the claimed indication that does not reveal any training data), with the same rationale for combination as given for claim 1. Claim 8: Toporek and Baror further teach: wherein the indication indicates that the degradation is attributable to a training data set associated with the particular trainer node being aggregated into the global model (in Toporek Para(s) 13-14 and 82-86, i.e., the quality of data generated from the local participant with the particular trainer node being aggregated into the global model). Claim 9: Toporek and Baror further teach: wherein the indication comprises a root cause score associated with the particular trainer node (in Toporek Para(s) 54-57 and 96-99, as quoted for claim 1 above; moreover, Para 76 further mentions that the quality score of the local participant represents the level of contribution to be made by the update matrix of the local participant to the global model, and the global server selects local participants for the subset such that the distribution of quality scores amongst the local participants in the subset is substantially similar to the distribution amongst all local participants). Claim 10: Toporek and Baror further teach: wherein the ... metrics include a data drift score (in Toporek Para(s) 67-71, i.e., receiving a certain number of update matrices that represent changes outside of threshold ranges (i.e., a drift score)); Toporek and Baror further teach ... the debugging metrics ... (Baror Para(s) 42, 46, 53-55 and 65, as quoted for claim 1 above), with the same rationale for combination as given for claim 1. Claim 11: Toporek and Baror further teach: wherein the ... metrics include an independent and identically distributed data score (in Toporek Para 56 and Para(s) 67-71, i.e., degraded performance in a resulting federated model, as federated learning assumes that contributions to the federated learning model from respective participants would be independent and identically distributed); Toporek and Baror further teach ... the debugging metrics ... (Baror Para(s) 42, 46, 53-55 and 65, as quoted for claim 1 above), with the same rationale for combination as given for claim 1. Regarding claims 12-20: these claims fully incorporate subject matter similar to that of claims 1-8 and 1, respectively, cited above, and are similarly rejected along the same rationale. Conclusion: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Akdeniz et al. ("US 20230068386 A1", filed 12/26/2020) discloses a processor to perform rounds of federated machine learning training, including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. The processor may perform rounds of federated machine learning training including: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data [the Abstract]. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN, whose telephone number is (571) 272-8664. The examiner can normally be reached Monday-Friday, 9am-5pm EST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /QUOC A TRAN/ Primary Examiner, Art Unit 2145
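For readers mapping the rejection to the technology, the debugging loop recited in claim 1 (with the influence ranking of claims 5-6 and the drift metric of claim 10) can be sketched as follows. This is an illustrative toy only: every name (`aggregate`, `influence_ranking`, `debug_degradation`, `drift_score`) and the leave-one-out influence heuristic are assumptions for exposition, not the applicant's or the cited references' actual implementations.

```python
from statistics import mean

def aggregate(local_models):
    """Toy stand-in for federated averaging over scalar 'models'."""
    return mean(local_models)

def influence_ranking(local_models):
    """Rank trainer nodes by leave-one-out influence on the global model."""
    global_model = aggregate(local_models)
    influence = {}
    for node in range(len(local_models)):
        rest = [m for i, m in enumerate(local_models) if i != node]
        influence[node] = abs(global_model - aggregate(rest))
    return sorted(influence, key=influence.get, reverse=True)

def debug_degradation(local_models, baseline, current, drift_metrics, tol=0.05):
    """If global performance degraded, walk nodes in influence order,
    pull each node's debugging metrics, and flag the first whose data
    drift exceeds tolerance as the root cause."""
    if current >= baseline:                    # no degradation detected
        return None
    for node in influence_ranking(local_models):
        metrics = drift_metrics(node)          # obtain metrics from that node
        if metrics["drift_score"] > tol:       # indicate root cause
            return {"root_cause_node": node, "metrics": metrics}
    return None

# Toy run: node 2 trained on drifted data and skews the average.
models = [1.0, 1.1, 5.0, 0.9]
report = debug_degradation(
    models, baseline=0.90, current=0.70,
    drift_metrics=lambda n: {"drift_score": 0.8 if n == 2 else 0.01},
)
print(report["root_cause_node"])  # prints 2
```

Note the privacy property argued for claim 7 falls out naturally here: only summary metrics (the drift score) leave each node, never the training data itself.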

Prosecution Timeline

May 11, 2023: Application Filed
Jan 10, 2026: Non-Final Rejection (§103)
Apr 08, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586003: Method and Apparatus for Generating Operator (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585951: Method and Electronic Device for Generating Optimal Neural Network (NN) Model (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572772: Scalable Digital Twin Service System and Method (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561617: Information Processing Apparatus, Information Processing Method, and Storage Medium (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561610: Method and Apparatus for Presenting Candidate Character String, and Method and Apparatus for Training Discriminative Model (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+29.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 735 resolved cases by this examiner. Grant probability derived from career allow rate.
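As a sanity check, the headline figures follow from the career counts shown above, assuming (as the footnote states) that grant probability is simply granted divided by resolved; the with-interview reading below is one plausible interpretation of the +29.4% relative lift, not a documented formula.

```python
# Reproduce the dashboard's headline figures from the career counts.
granted, resolved = 590, 735
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")   # prints "Grant probability: 80%"

# Assumed reading of the 99% with-interview figure: apply the +29.4%
# relative interview lift to the base rate, capped below certainty.
with_interview = min(allow_rate * (1 + 0.294), 0.99)
print(f"With interview: {with_interview:.0%}")  # prints "With interview: 99%"
```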
